Senior Data Engineer
Beat is one of the most exciting companies to ever come out of the ride-hailing space. One city at a time, all across the globe, we make transportation affordable, convenient, and safe for everyone. We also help hundreds of thousands of people earn extra income as drivers.
Today we are the fastest-growing ride-hailing service in Latin America. But serving millions of rides every day pales in comparison to what lies ahead. Our plans for expansion are limitless. Our stellar engineering team operates across a number of European capitals where, right now, some of the world’s most ambitious and talented engineers are changing how cities will move in the future.
Beat is currently available in Greece, Peru, Chile, Colombia, Mexico and Argentina.
About the role
Data is at the heart of this effort and is an essential ingredient in Beat's aggressive growth plan and vision for the future. We are currently transitioning to a microservices architecture, which requires novel solutions for ingesting and leveraging data from different sources.
As a member of our team, you will help tackle some of the most fundamental data-driven challenges we face and your work will impact the entire Beat experience.
Our remote workforce works Eastern European Time hours (10am - 6pm), so you will need to be located between UTC and UTC+3 to reasonably overlap with your team members' work schedules. With the various tools and communication technologies we use, you'll feel connected to your team. You always have the option to travel to our headquarters for meetings, events, and team bonding, or you can join virtually. Whatever works best for you and your work style.
What you'll be doing:
- Work closely with Data and Platform engineers on the design, development, enhancement, and support of real-time data ingestion solutions.
- Develop the core libraries and tools that support data engineers across different teams.
- Develop components that analyze, process, and react to operational feeds in near real-time.
- Be agile both within and across teams, democratizing access to data for anyone within the organization.
What you need to have:
- Bachelor's or Master's degree in Computer Science or a related Engineering field. Advanced degrees are highly appreciated.
- Experience building and running large-scale real-time and batch data pipelines.
- Experience coding using both Object-Oriented and Functional Programming principles on top of the JVM.
- Good understanding of how distributed storage and processing systems work.
What is good to have:
- Familiarity with the Apache Hadoop stack (YARN, HDFS, MapReduce and Hive).
- Hands-on experience with Apache Spark.
- Familiarity with ETL processes and industry best practices.
- Experience in developing with Scala.
- Familiarity with distributed messaging systems, preferably Apache Kafka.
- Hands-on experience with streaming technologies such as Spark Streaming, Apache Flink or Kafka Streams.
- Exposure to Kubernetes.
- Knowledge of relational databases and NoSQL technologies.
What's in it for you:
- Competitive salary package
- Flexible working hours
- High-tech equipment and top-of-the-line tools
- A great opportunity to grow and work with the most amazing people in the industry
- Being part of an environment that gives engineers ambitious goals, autonomy, and mentoring, and that creates incredible opportunities both for you and the company
Please note that you will be working as a contractor.
As part of our dedication to the diversity of our workforce, Beat is committed to Equal Employment Opportunity without regard for race, color, national origin, ethnicity, gender, disability, sexual orientation, gender identity, or religion.