Data Engineer - (Spark, Scala, Python, Cassandra, Elasticsearch, AWS, Airflow, SQL)

Bengaluru, India

Nielsen

A global leader in audience insights, data and analytics, Nielsen shapes the future of media with accurate measurement of what people listen to and watch.


At Nielsen, we believe that career growth is a partnership. You ultimately own, fuel and set the journey. By joining our team of nearly 14,000 associates, you will become part of a community that will help you to succeed. We champion you because when you succeed, we do too. Embark on a new initiative, explore a fresh approach, and take license to think big, so we can all continuously improve. We enable your best to power our future. 

Responsibilities

  • Work closely with team leads and backend developers to design and develop functional, robust pipelines to support internal and customer needs
  • Write both unit and integration tests, and develop automation tools for daily tasks
  • Develop high-quality, well-documented, and efficient code
  • Manage and optimize scalable pipelines in the cloud
  • Optimize internal and external applications for performance and scalability
  • Develop automated unit, integration, and data quality tests to ensure business needs are met
  • Communicate regularly with stakeholders, project managers, quality assurance teams, and other developers regarding progress on the long-term technology roadmap
  • Recommend system solutions by weighing the advantages and disadvantages of custom development against purchased alternatives

Key Skills

  • Domain Expertise
  • 2+ years of experience as a software/data engineer 
  • Bachelor’s degree in Computer Science, MIS, or Engineering

  • Technical Skills
  • Experience in software development using the following languages and tools/services: Java or Scala, big data (Hadoop, Spark, Spark SQL, Presto/Hive), cloud (preferably AWS), Docker, RDBMS (such as Postgres and/or Oracle), Linux, shell scripting, GitLab, Airflow, Cassandra, and Elasticsearch
  • Experience in big data processing using Apache Spark with Scala
  • Experience with orchestration tools such as Apache Airflow
  • Strong knowledge of Unix/Linux, shell commands and scripting, Python, JSON, and YAML
  • Agile/Scrum experience in application development is required
  • Strong knowledge of AWS S3 and PostgreSQL or MySQL
  • Strong knowledge of AWS compute services: EC2, EMR, and AWS Lambda
  • Strong knowledge of GitLab or Bitbucket
  • AWS Certification is a plus
  • Experience with big data systems and analysis
  • Experience with data warehouses or data lakes

  • Mindset and attributes
  • Strong communication skills, with the ability to communicate complex technical concepts and align the organization on decisions
  • Sound problem-solving skills with the ability to quickly process complex information and present it clearly and simply
  • Collaborates with the team to create innovative solutions efficiently



Perks/benefits: Career development

Region: Asia/Pacific
Country: India
