Associate/Senior Associate Vice President - Data Engineer (f/m/d)

Frankfurt am Main, DE


Deutsche Börse

The official website of Deutsche Börse Group, with information about the company and about Investor Relations, Media, Careers, Sustainability, and Regulation.


Your area of work:
StatistiX® is the data platform of Deutsche Börse Group, serving vital business functions in Trading, Clearing, and Risk Management. It provides users around the world with invaluable market statistics, streamlined data services, and powerful business intelligence and analytics. You'll join a dynamic, results-oriented team dedicated to delivering rock-solid, efficient data solutions exactly when the business needs them. If you're a creative problem-solver who thrives in a fast-paced, tech-driven environment that adds value to the business, this is the perfect opportunity for you!


Your responsibilities: 

•    Design, develop, and implement data pipelines using Apache Spark and other big data technologies on the group's big data platform, in both cloud and on-premises environments (a minimal sketch follows this list).
•    Automate data ingestion, loading, and transformation (ELT) processes.
•    Monitor and optimize data pipelines for peak performance in both cloud and on-premises environments.
•    Collaborate with data analysts, data scientists, and other stakeholders to understand data requirements and identify potential use cases for analytics and machine learning.
•    Translate data insights into actionable recommendations and reports for business stakeholders.
•    Develop and maintain documentation for data pipelines.
•    Stay up to date on the latest big data technologies and trends.
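
To give a flavour of this work, here is a minimal PySpark sketch of such an ELT pipeline: ingest raw trade events, derive simple daily statistics, and write partitioned Parquet for downstream analytics. All paths, column names, and the schema are illustrative assumptions, not the actual StatistiX® data model.

```python
# Minimal ELT pipeline sketch (illustrative; paths and schema are assumptions).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-trade-stats").getOrCreate()

# Extract: ingest raw trade events from a hypothetical CSV landing zone.
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("/data/landing/trades/"))

# Transform: derive daily per-instrument market statistics.
daily_stats = (raw
               .withColumn("trade_date", F.to_date("trade_timestamp"))
               .groupBy("trade_date", "instrument_id")
               .agg(F.count("*").alias("trade_count"),
                    F.sum("quantity").alias("total_volume"),
                    F.avg("price").alias("avg_price")))

# Load: write partitioned Parquet for BI reporting and analytics.
(daily_stats.write
 .mode("overwrite")
 .partitionBy("trade_date")
 .parquet("/data/curated/daily_trade_stats/"))

spark.stop()
```

The same code runs unchanged on cloud clusters (e.g. GCP Dataproc) and on-premises clusters, which is what makes Spark a natural fit for the hybrid setup described above.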

 

Your profile: 
•    University degree in Computer Science or a similar technical subject.
•    At least 3-5 years of relevant work experience.
•    Design & development experience with AMQP, Spark, Kafka & Java.
•    Strong programming skills in Python, PySpark, or Scala.
•    Good understanding of Data Management & Governance concepts.
•    Data cataloguing experience with Google Dataplex or Apache Atlas.
•    Experience with SQL and relational databases.
•    Excellent written and verbal communication skills in English. Fluency in German will be an added advantage.
•    Self-driven attitude to learn new functionalities and technologies to bring the team to the next level. On the job, we'll help you learn or deepen your know-how in the following state-of-the-art tools and technologies:
o    BI reporting on big data platforms, e.g. Cloudera, Databricks, or GCP
o    Data engineering in GCP
o    Usage of CI/CD pipeline and other DevOps methodologies
o    Job orchestration, both event-based and schedule-based, with Airflow and Control-M as job schedulers (see the sketch after this list)
o    General knowledge of financial markets
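
For the orchestration point above, here is a minimal Airflow sketch: a daily DAG that submits the Spark job and then runs a data-quality check. The DAG id, schedule, and script paths are hypothetical, and Control-M would cover the same ground with its own job definitions.

```python
# Minimal Airflow DAG sketch (Airflow 2.4+ style; ids, schedule, and paths are assumptions).
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_trade_stats",
    start_date=datetime(2024, 1, 1),
    schedule="0 6 * * *",  # run once a day at 06:00
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    # Submit the PySpark ELT job sketched earlier.
    run_pipeline = BashOperator(
        task_id="run_spark_pipeline",
        bash_command="spark-submit /opt/pipelines/daily_trade_stats.py",
    )

    # Downstream data-quality check before the statistics are published.
    quality_check = BashOperator(
        task_id="data_quality_check",
        bash_command="python /opt/pipelines/check_daily_trade_stats.py",
    )

    run_pipeline >> quality_check
```

Event-based triggering (e.g. reacting to a message or a file arrival) would use Airflow sensors or dataset-driven scheduling instead of the cron expression shown here.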


Do you already have know-how in one or more of these fields? Even better, let's meet!



