Data Engineer, Metrics - Kyiv

Kyiv, Ukraine

Applications have closed

Lyft

Rideshare with Lyft. Lyft is your friend with a car, whenever you need one. Download the app and get a ride from a friendly driver within minutes.


At Lyft, our mission is to improve people’s lives with the world’s best transportation. To do this, we start with our own community by creating an open, inclusive, and diverse organization. 

Here at Lyft, high-quality, reliable metrics play an important role in decision-making. They provide insight into the effectiveness of our product launches and features and let us track our goals and mission.

As a software engineer, you will be part of the team building the new generation of the metrics platform to serve reliable, high-quality metrics. You will design and build the backend APIs that create and serve source-of-truth metrics metadata to downstream applications across Lyft, such as A/B testing and reporting. You will own the data pipelines that power Lyft's topline and domain metrics, and you will build user-facing frontend features that let users manage their metrics. Your work will ensure that the metrics used across Lyft are high quality and that people can access the business and user insights they need to make data-informed decisions.
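
For illustration only, here is a minimal Python sketch of what source-of-truth metrics metadata and its lookup might look like; the MetricDefinition fields and the in-memory MetricRegistry are hypothetical stand-ins for whatever schema and storage the platform actually uses.

    # Minimal sketch of a metrics-metadata model and lookup.
    # Field names and the in-memory registry are hypothetical, not an actual schema.
    from dataclasses import dataclass, field
    from typing import Dict, List, Optional


    @dataclass
    class MetricDefinition:
        """Source-of-truth metadata describing a single metric."""
        name: str                       # e.g. "rides_completed"
        owner: str                      # owning team
        sql: str                        # query that computes the metric
        description: str = ""
        tags: List[str] = field(default_factory=list)


    class MetricRegistry:
        """In-memory stand-in for the metadata store behind a serving API."""

        def __init__(self) -> None:
            self._metrics: Dict[str, MetricDefinition] = {}

        def register(self, metric: MetricDefinition) -> None:
            self._metrics[metric.name] = metric

        def get(self, name: str) -> Optional[MetricDefinition]:
            # Downstream consumers such as A/B testing and reporting would
            # hit an API backed by a lookup like this one.
            return self._metrics.get(name)


    if __name__ == "__main__":
        registry = MetricRegistry()
        registry.register(MetricDefinition(
            name="rides_completed",
            owner="metrics-platform",
            sql="SELECT COUNT(*) FROM rides WHERE status = 'completed'",
            tags=["topline"],
        ))
        print(registry.get("rides_completed"))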

Responsibilities:
  • Work with partner teams to gather requirements and align on the design of metrics platform features
  • Create and own the roadmap for the metrics platform projects in partnership with PMs and Data Scientists
  • Design and evolve data models to handle different metrics use cases
  • Design and create relational and NoSQL databases for persisting data objects
  • Design and implement the metrics platform's backend services, providing APIs that support metrics metadata operations and various downstream use cases
  • Own, build, and maintain scalable data pipelines for metrics computation and A/B testing statistics (a sketch of such a pipeline follows this list)
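
As a rough sketch of such a pipeline (Airflow is listed under Experience below), here is a minimal daily metrics DAG; the DAG id, task id, and compute_daily_metrics function are hypothetical placeholders.

    # Minimal Airflow sketch of a daily metrics-computation pipeline.
    # The DAG id, task id, and compute function are hypothetical placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def compute_daily_metrics(**context):
        # Placeholder: a real pipeline would run the metric SQL (e.g. via
        # Hive, Presto, or Spark) and persist results for reporting and
        # A/B testing consumers.
        print(f"Computing metrics for {context['ds']}")


    with DAG(
        dag_id="daily_metrics_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        PythonOperator(
            task_id="compute_daily_metrics",
            python_callable=compute_daily_metrics,
        )
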
Experience:
  • Proficient in at least one programming language such as Python, Java, C++, or Go
  • Ability to write well-crafted, well-tested, readable, maintainable code
  • Experience with SQL and relational databases (Postgres, MySQL, Oracle, or similar)
  • Experience with Kubernetes (k8s), Envoy, Kafka, and/or AWS is a plus
  • Experience with the Hadoop ecosystem or similar (MapReduce, YARN, HDFS, Hive, Spark, Presto, Pig, HBase, Parquet)
  • Experience with workflow management tools (Airflow or similar)
  • Frontend development experience (HTML, CSS, JavaScript, Node, Redux, Flux) is not required, but a plus

Tags: A/B testing Airflow APIs AWS Data pipelines Hadoop HBase HDFS JavaScript Kafka Kubernetes MySQL NoSQL Oracle Parquet Pipelines PostgreSQL Python RDBMS Spark SQL Testing

Region: Europe
Countries: Ukraine, United Kingdom
Category: Engineering Jobs
