Data Engineer - Kyiv

Kyiv, Ukraine

Applications have closed

Lyft

Rideshare with Lyft. Lyft is your friend with a car, whenever you need one. Download the app and get a ride from a friendly driver within minutes.


At Lyft, our mission is to improve people’s lives with the world’s best transportation. To do this, we start with our own community by creating an open, inclusive, and diverse organization.

Here at Lyft, data drives every decision we make. It is the core of our business, helping us create a better transportation experience for our customers and providing insight into the effectiveness of our product launches and features.

Data is also at the heart of Lyft's mapping platform. We collect and process thousands of terabytes of data from various sources and use it to generate real-time traffic estimates, make correct and efficient routing decisions, and determine better pickup and drop-off locations for our passengers. Making this data accessible and useful to everyone at Lyft requires tackling data infrastructure, data engineering, and computing challenges. We are now looking for experienced data engineers to join us!

Responsibilities:
  • Assemble and manage large, complex sets of data that meet non-functional and functional business requirements
  • Design and evolve data models and data schema based on business and engineering needs
  • Build and support ETL pipelines using tools like Hive, Airflow, Flyte, and SQL technologies
  • Implement systems to track data quality and consistency
  • Build analytical tools that provide insight into key performance metrics across Mapping and Lyft
  • Work with stakeholders across science, engineering, data infrastructure, product, and leadership, driving the resolution of data-related technical problems
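
For illustration only (this sketch is not part of the posting), the data-quality tracking mentioned above could look something like the following in plain Python; the record fields, validity rules, and function names are hypothetical:

```python
# Hypothetical sketch of a row-level data-quality check for ride records.
# Field names and rules are illustrative, not taken from the posting.

def check_ride_record(record):
    """Return a list of data-quality violations for one ride record."""
    violations = []
    if record.get("ride_id") is None:
        violations.append("missing ride_id")
    # Pickup coordinates must fall within valid latitude/longitude ranges.
    lat, lon = record.get("pickup_lat"), record.get("pickup_lon")
    if lat is None or not -90.0 <= lat <= 90.0:
        violations.append("invalid pickup_lat")
    if lon is None or not -180.0 <= lon <= 180.0:
        violations.append("invalid pickup_lon")
    # Ride duration must be positive.
    if record.get("duration_s", 0) <= 0:
        violations.append("non-positive duration_s")
    return violations


def quality_rate(records):
    """Fraction of records with no violations."""
    clean = sum(1 for r in records if not check_ride_record(r))
    return clean / len(records) if records else 1.0
```

In a production pipeline, checks like these would typically run as a step in an orchestrator such as Airflow or Flyte, with the resulting quality rate emitted as a metric for monitoring.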
Experience:
  • 3+ years of experience in software engineering, ideally with a focus on Data Engineering/Architecture
  • Ability to work with complex production-quality software
  • Technical expertise in data infrastructure, cloud computing, storage systems, and distributed computing frameworks in multi-petabyte scale systems
  • Knowledge of modern data/computing infrastructure and frameworks.
    Today we use S3, DynamoDB, Kafka, Elasticsearch, Spark, Flyte, Stackdriver, and SageMaker. We do not expect you to know them all, but would like you to be familiar with some
  • Openness to new or different ideas, and the ability to evaluate multiple approaches and choose the best one based on fundamental qualities and supporting data
  • Ability to communicate highly technical problems while working alongside our cross-functional team
  • Ability to communicate fluently in English in various forms, e.g. technical documents, meetings, and presentations

Tags: Airflow DynamoDB Elasticsearch Engineering ETL Kafka Pipelines SageMaker Spark SQL

Region: Europe
Countries: Ukraine United Kingdom
Category: Engineering Jobs
