Data Engineer Mumbai

Mumbai, India


A global leader in audience insights, data and analytics, Nielsen shapes the future of media with accurate measurement of what people listen to and watch.


At Nielsen, we believe that career growth is a partnership. You ultimately own, fuel and set the journey. By joining our team of nearly 14,000 associates, you will become part of a community that will help you to succeed. We champion you because when you succeed, we do too. Embark on a new initiative, explore a fresh approach, and take license to think big, so we can all continuously improve. We enable your best to power our future. 
We are looking for an experienced Data Engineer to join our team which provides a spectrum of services and expertise to all business verticals within Gracenote. This person will collaborate with other Data Engineers, DBAs, SQL/ETL Developers, DevOps Engineers, Security professionals and Data Science team members, to architect, build, and deploy the platform solutions on which our entertainment metadata pipelines thrive. 
Our team views diversity as a strength and we are looking for people who will help support an inclusive culture of belonging where everyone feels empowered to bring their full, authentic selves to work.
As a Data Engineer, your role is to own the data pipeline and the data governance of our Data Strategy. Our Data Strategy underpins our suite of client-facing applications, Data Science activities, operational tools, and business analytics.


Responsibilities:

  • Architect and build scalable, resilient and cost-effective software to support complex data pipelines.
  • Design and maintain both facets of the architecture, Storage and Compute, including the different tiers of data storage such as archival, long-term persistent, transactional, and reporting storage.
  • Design, implement, and maintain data pipelines such as self-service ingestion tools, exports to application-specific warehouses, and indexing activities.
  • Own data modeling, and design, implement, and maintain data catalogs to support data transformation and product requirements.
  • Collaborate with Data Science to understand, translate, and integrate methodologies into engineering build pipelines.
  • Partner with product owners to translate complex business requirements into technical solutions, imparting design and architecture guidance.
  • Provide expert mentorship to project teams on technology strategy, cultivating advanced skill sets in software engineering and modern SDLC.
  • Stay informed about the latest technologies and methodologies by participating in industry forums, having an active peer network, and engaging actively with customers.
  • Cultivate a team environment focused on continuous learning, where innovative technologies are developed and refined through teamwork.


Qualifications:

  • A degree in Computer Science or a related technical field.
  • Strong Computer Science fundamentals.
  • 3+ years of professional database development with languages such as ANSI SQL, T-SQL, and PL/SQL, plus database design, normalization, server tuning, and query-plan optimization.
  • 3+ years of software engineering experience with programming languages such as Java, Scala, Python, and Unix shell.
  • 3+ years of professional DBA experience with large datastores, including HA and DR planning and support.
  • An understanding of file systems.
  • Demonstrated understanding and experience with big data tools such as Kafka, Spark and Trino/Presto
  • Experience configuring database replication (physical and/or logical).
  • ETL experience (3rd-party and proprietary).
  • Experience with orchestration tools such as Airflow.
  • Comfortable with version control systems such as git
  • A thirst for learning new technologies and keeping up with industry advances.
  • Excellent communication and knowledge-sharing skills.
  • Comfortable working with technical and non-technical teams.
  • Strong debugging skills.
  • Comfortable providing and receiving code review feedback.
  • A positive attitude, adaptability, enthusiasm, and a growth mindset.

Nice to have:

  • A personal technical blog
  • A personal (Git) repository of side projects
  • Participation in an open-source community

Preferred skills:

  • Comfortable using Docker and Kubernetes for container management.
  • DevOps experience deploying and tuning the applications you’ve built.
  • Experience with monitoring tools such as Datadog, Prometheus, Grafana, and CloudWatch.


Tags: Airflow Architecture Big Data Business Analytics Computer Science Data governance Data pipelines Data strategy DevOps Docker Engineering ETL Git Grafana Java Kafka Kubernetes Open Source Pipelines Python Scala SDLC Security Spark SQL T-SQL

Perks/benefits: Career development

Region: Asia/Pacific
Country: India
