Senior Data Engineer

Bengaluru, Karnataka, India

Amagi

Providing channel creation, content distribution, and CTV advertising solutions for FAST, OTT, and broadcast TV.

About Amagi

Amagi is a cloud-native SaaS platform that lets every content owner deliver their content to consumers anytime, anywhere, on any device. Amagi helps bring entertainment to hundreds of millions of consumers, leading the transformation in media consumption. We believe in a connected ecosystem that brings content owners, distribution platforms, consumers, and advertisers together to create great experiences.

Amagi grew by 136% last year and is on track to double again this year. The market leader in FAST (Free Ad-supported Streaming TV), it delivers more than 500 media brands to 1,500+ endpoints and is growing exponentially.

Team

The ADAP team is responsible for building a data management platform, which includes optimized storage for the entire organization, a managed compute and orchestration framework (including serverless data solutions), a central data warehouse for both batch and streaming use cases, connectors for different sources, and cost-optimization solutions on Databricks and similar infrastructure.

The team is also responsible for enabling insights for multiple teams, including Audience Segments, Ads, Observability, Contextual Targeting Models, and Personalisation.

The team will also be a center for Data Engineering excellence, driving training and knowledge-sharing sessions with a large data consumer base within Amagi. 

Location: Bangalore, India

Responsibilities

  • Build, deploy, and maintain a highly scalable data pipeline framework that enables our developers to build multiple data pipelines from different kinds of data sources.
  • Collaborate with the product, business, design, and engineering functions to stay on top of your team’s deliverables and milestones.
  • Deliver highly reliable, scalable engineering architecture and high-quality, maintainable, operationally excellent code for your team, on time.
  • Participate in design discussions and code reviews.
  • Establish best practices, quality gates, guidelines, and standards within the team.
  • Identify and resolve performance and scalability issues.

You will excel at this role if you have

  • Bachelor’s/Master’s degree in Computer Science with 2+ years of overall experience.
  • Deep understanding of ETL frameworks, e.g., Spark, MapReduce, or equivalent systems.
  • Deep understanding of at least one ETL technology such as Dataproc, PySpark, Trino, Presto, or Hive.
  • Experience building complex pipelines using orchestration frameworks like Apache Airflow, Argo, or similar (see the sketch after this list).
  • Experience building observability with technologies such as logging, Datadog, Prometheus, Sentry, Grafana, Splunk, EKS, etc.
  • Sound knowledge of Python and frameworks such as Django and Flask.
  • Good to have: knowledge of public clouds (AWS, GCP, etc.).
  • Excellent technical and communication skills.
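
For illustration only, here is a minimal Apache Airflow DAG sketch of the kind of orchestration work this role involves. The DAG ID, task names, and extract/transform/load logic are hypothetical placeholders, not Amagi's actual pipelines; it assumes Airflow 2.4+.

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract(**context):
        # Hypothetical source pull; a real pipeline would read from an
        # external system (object storage, a stream, a database connector).
        return [{"id": 1, "value": 21}, {"id": 2, "value": 40}]


    def transform(**context):
        # Read the upstream task's output via XCom and apply a trivial transform.
        records = context["ti"].xcom_pull(task_ids="extract")
        return [{**r, "value": r["value"] * 2} for r in records]


    def load(**context):
        # Placeholder sink; a real pipeline would write to the warehouse.
        records = context["ti"].xcom_pull(task_ids="transform")
        print(f"Loading {len(records)} records")


    with DAG(
        dag_id="example_etl",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # 'schedule' is the Airflow 2.4+ argument
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    ):
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        # Linear dependency chain: extract -> transform -> load.
        extract_task >> transform_task >> load_task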
