Data Engineer, Data Mesh (remote)
EMEA
Applications have closed
Our aim is to build a data ecosystem that connects everything, turning smart cities into cognitive cities: a data platform designed to turn data into interoperable, high-quality data products at scale. Think of a new type of data mesh, the first of its kind, which will enrich the daily lives of citizens while empowering sectors and services.
We’re looking for a Data Engineer to work with our team of Data Architects, Data Analysts, and DataOps engineers on the core information backbone for the cognitive city of the future. You will help us build a flexible, large-scale data platform for finding, understanding, and publishing data generated by people and devices in the AI-orchestrated metropolis.
What you will be doing
- Design, build, and test large-scale, complex data products, applying industry-recognized data modelling patterns and standards
- Build data pipelines, streaming-based systems, and data-driven microservices from the ground up
- Implement data flows to connect a variety of sources, such as IoT sensors, operational systems, and APIs
- Build data streaming systems to manage big data workloads for analytics and operational purposes
- Document source to target mappings and maintain relevant documentation for all production code
- Develop metadata-based solutions for describing data assets
- Liaise with data analysts, data architects, and data scientists to design and implement data services
- Design, analyze, map, and model data flow between data platform layers and components
What you will need
- 5+ years of relevant industry experience with large-scale distributed systems or platforms
- Experience using a general-purpose programming language (e.g., Python, Scala, Java, C# and/or Go)
- Deep understanding and experience with modern data architecture approaches and processes, including data lakes, data warehouses, data ingestion/integration, and data consumption
- Experience developing data pipelines with tools such as Apache Kafka, Spark, or Flink, and orchestrators such as Airflow or Dagster
- Experience with open-source databases such as MongoDB, PostgreSQL, and Elasticsearch, and with big data solutions such as Hadoop. Advanced SQL knowledge is essential.
- Experience working with DevOps practices, Git version control, and agile approaches
- Experience developing for on-premises infrastructure or major cloud platforms such as Azure, AWS, and Google Cloud
- Knowledge of Data Mesh and decentralized data architecture patterns is a big plus
We’re helping to build a next generation cognitive city in the Middle East - a seamless AI-orchestrated metropolis, home and workplace for more than a million citizens from around the world.
We’re after enthusiastic, driven individuals who want to contribute to our journey and grow with us along the way. Can you hit the ground running? Do you have the drive to go faster? If so, get in touch!
Tags: Agile Airflow APIs Architecture AWS Azure Big Data Dagster DataOps Data pipelines DevOps Distributed Systems Flink GCP Git Google Cloud Hadoop Kafka Microservices MongoDB Pipelines PostgreSQL Python Scala Spark SQL Streaming
Perks/benefits: Career development Flex hours