Principal Data Engineer

Pune

This position leads a data engineering team in a large-data, fast-paced environment. The technology stack spans a broad spectrum of data platforms, tools, and orchestrators, both in the cloud and in an on-premise data center. The role defines, develops, drives, and maintains data/ETL architecture with respect to customer requirements and the company's objectives.

The candidate is technically proficient enough to ensure the team performs at a high level, is recognized as a leader in their area, and may formally train Specialists/Senior Specialists. The ability to form and lead special teams is required. The candidate must have good communication skills and a desire to work frequently with others. The position is also expected to keep up with current technologies and methods, and to provide feedback whenever better methods or technologies could improve our processes.

As a leader, the candidate will be responsible for designing and building programs to process data, as well as delegating work appropriately. They understand how main business drivers may impact their own area; can assess complex problems with broad impact, improve processes, and recommend solutions and risk mitigation plans; and are able to communicate complex information. They must be able to research and decipher any format of data provided via ingestion.

As an expert and leader, this role needs to know how data will be used and assist application developers in retrieving data from our database platforms accurately and efficiently. The role manages data processes and responds to any issues in those processes; attention to resource utilization (space, CPU, memory, network) is expected. The role communicates effectively with team members, other teams, other business units, and leadership; works with a high level of autonomy based on management direction; and leads projects or contributes to broad cross-functional projects.
The role may contribute to the planning of resources and budget.

Skills required: Big Data Management, SQL development, database development, Python development, ETL development, Oracle, JSON, Unix scripting, MDM, Windows, Outlook, PowerPoint, Excel, and Word.

Additional desired skills: MySQL, Kubernetes, Druid, Airflow, Databricks, Spark, Couchbase, other NoSQL database platforms, Stambia, Azure, Agile.

Technical requirements:

  • Cloud experience (Azure preferred)
  • Experience migrating data to cloud data platforms
  • Experience with big data platforms in a cloud environment (Databricks preferred)
  • Experience with UNIX
  • Familiarity with event streaming and queuing (such as Kafka, Event Hub, or similar)
  • Python coding for ETL and orchestration, with an emphasis on data
  • Experience with orchestration tools such as Airflow, Azure Data Factory, and/or Synapse Pipelines
  • Experience with a relational database such as Oracle, MySQL, Postgres, or SQL Server
  • Experience with complex SQL coding and tuning
  • Problem identification, solutioning, tracking, and communication

Position/Role Requirements

  • Must be able to lead a data engineering team and coordinate development efforts
  • Manages team resources, delegation, user story assignment, and the backlog
  • Provides weekly report updates on user story burndown
  • Works with minimal supervision, manages own time effectively, and maintains control over all current projects/responsibilities; follows up on all relevant issues
  • Ability to work effectively with and coordinate resources across multiple time zones
  • Good communication skills via Teams, email, and conference calls
  • Ability to translate business and/or functional requirements into technical requirements
  • Confidence to provide solutions and make decisions independent of direct supervision
  • Ability to see the bigger picture while still providing technical guidance and code reviews

Key Accountabilities:

  • Perform database integrity checks and updates.
  • Create and maintain operational procedures; ensure that external service providers carry out operational procedures according to SLAs.
  • Write and update the database checklist and documentation.
  • Analyze customer systems and identify/execute actions necessary to activate existing Amadeus products.
  • Provide functional support to customers and ensure problem resolution for functionality, procedural, fallback, implementation, or general service issues.
  • Coordinate intervention from other groups (Product Management, Product Definition, Help Desk, Operations Erding, and Development) when issues are beyond the scope of team competencies.
  • Update and prepare database correction requests.
  • Monitor end-user satisfaction by collecting feedback on performance and problems.

Diversity & Inclusion

We are an Equal Opportunity Employer and seek to hire the best candidate regardless of age, beliefs, disability, ethnicity, gender or sexual orientation.



Region: Asia/Pacific
Country: India
Category: Engineering Jobs
