Data Engineer

Lisbon, Portugal

Grünenthal

Grünenthal is a global leader in the management of pain and related diseases. Our purpose is to change lives for the better – and innovation is our passion. We focus our activities on moving towards our vision of a world free of pain.


 

Our Corporate Functions teams make sure everything at Grünenthal runs smoothly by bringing together people with a wide range of skills – from experts in finance, IT, HR, law and compliance through to business strategists and creative media minds. If you’re a great collaborator and a strong communicator, you’ll feel right at home here. Join our team and join forces with colleagues across our business to maximise our company’s positive impact on patients and their families.

 

What the job looks like:

  • Design, develop, deploy, and maintain efficient, scalable, and robust data pipelines;
  • Collaborate with data scientists, data analysts, and other stakeholders to understand data requirements and implement solutions;
  • Ensure the security, integrity, and availability of data through proper DataOps, DevOps, CI/CD, data governance and quality assurance practices;
  • Optimize and troubleshoot existing data infrastructure to improve performance and reliability;
  • Collaborate with cross-functional teams to integrate data from various sources and formats;
  • Stay abreast of emerging technologies and trends in the data engineering space, making recommendations for continuous improvement;
  • Develop, document, and communicate data engineering processes, best practices, and standards within our Global Operations’ Data & Analytics team.
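To give a flavour of the pipeline work described above, here is a minimal, purely illustrative extract-transform-load sketch. It is not part of the role's actual stack (which centres on Azure Data Factory, Azure Databricks, and PySpark); plain Python with an in-memory SQLite database stands in so the example is self-contained:

```python
import sqlite3

# Illustrative only: a toy extract-transform-load (ETL) step.
# The role's real stack is Azure Data Factory, Databricks, and PySpark;
# SQLite is used here purely so the sketch runs without any dependencies.

def run_pipeline(raw_rows):
    """Load raw (name, amount) rows, clean them, and aggregate per name."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw (name TEXT, amount REAL)")
    conn.executemany("INSERT INTO raw VALUES (?, ?)", raw_rows)

    # Transform: drop null amounts, normalise names, aggregate totals.
    cur = conn.execute(
        """
        SELECT LOWER(TRIM(name)) AS name, SUM(amount) AS total
        FROM raw
        WHERE amount IS NOT NULL
        GROUP BY LOWER(TRIM(name))
        ORDER BY name
        """
    )
    result = cur.fetchall()
    conn.close()
    return result

if __name__ == "__main__":
    rows = [("Alice ", 10.0), ("alice", 5.0), ("Bob", None), ("Bob", 7.5)]
    print(run_pipeline(rows))  # [('alice', 15.0), ('bob', 7.5)]
```

In a production setting, the same clean/normalise/aggregate pattern would typically run as a PySpark job inside Databricks, orchestrated by Azure Data Factory and deployed through a CI/CD pipeline in Azure DevOps.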

 

What you'll bring to the table:

  • Master’s degree in a relevant field such as Business Administration, Business Analytics, Computer Science, Data Science, Civil Engineering, Information Technology, Mathematics, Physics, Software Engineering, Statistics, or a quantitative related discipline;
  • Proven track record of building sustainable pipelines with CI/CD, adhering to DataOps and DevOps practices;
  • Proficiency with Azure Data Factory, Azure Databricks, and Azure DevOps is required;
  • Proficiency in Python, PySpark, and SQL is required;
  • Proficiency in Cypher and Neo4j is a strong plus;
  • Familiarity with Cloud Infrastructure, Linux-VMs, SQL Databases, Graph Databases and/or Databricks Lakehouse is a plus;
  • Hands-on mentality;
  • Excellent attention to detail and ability to work with large and complex datasets;
  • Strong communication skills to convey complex findings to both technical and non-technical stakeholders;
  • Ability to work independently and collaboratively in a fast-paced environment;
  • Ability to prioritize and manage multiple tasks and projects simultaneously.

______________________________________



Meyer Frietjoff, the Hiring Manager

 
