Data Engineer

Zürich, Zurich, Switzerland

Applications have closed

42matters

42matters offers a full suite of mobile and connected TV app intelligence solutions, including APIs, file dumps, and visual, web-based platforms.


42matters offers a full suite of products and services for app intelligence and analytics. We bring a unique combination of technical and business skills to provide our customers with thorough analysis of the latest developments in the mobile app market and the CTV industry. We work with the world’s leading mobile companies, helping them build better businesses through data and insights.

As a Data Engineer, you are someone who is passionate about collecting and processing large data sets, in particular data about mobile apps. You will be responsible for building and maintaining the data pipelines that support both our existing and future products. The role spans the entire stack, from DevOps maintenance and tuning to collaborating with other engineers to build, improve, and maintain our applications and services. The best match for this position is someone with extensive hands-on experience, a background in crawler development, skills in both Java and Python, and the ability to collaborate in a team while also working autonomously where needed. You can find out more about our company culture at https://42matters.com/jobs

Job type: Full-time

Location: Zurich

Starting date: As soon as possible


RESPONSIBILITIES

  • Write data pipelines to extract, transform and load (ETL) data automatically, using a variety of traditional as well as large-scale distributed technologies.
  • Write and maintain crawlers that aggregate data from various sources, and ensure the collected data is of high quality.
  • Extend and optimize our current services and applications, making large amounts of data accessible both to our data scientists and to our customers (via our services/products).
  • Help build a reliable, sustainable and scalable data infrastructure.
  • Start new projects or rewrite existing ones.
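To make the first two responsibilities concrete, here is a minimal, self-contained sketch of an extract-transform-load (ETL) step of the kind described above. All names and data are hypothetical, and an in-memory SQLite table stands in for a production store such as PostgreSQL or Redshift:

```python
# Illustrative ETL sketch: extract raw records, normalize them, load into a table.
# The records and field names are made up; a real pipeline would crawl app-store
# endpoints and load into PostgreSQL/Redshift rather than in-memory SQLite.
import sqlite3

def extract():
    # Stand-in for crawling an app-store listing; returns raw records.
    return [
        {"app_id": "com.example.one", "downloads": "1,000+"},
        {"app_id": "com.example.two", "downloads": "500+"},
    ]

def transform(records):
    # Normalize raw download strings like "1,000+" into integers.
    rows = []
    for rec in records:
        downloads = int(rec["downloads"].rstrip("+").replace(",", ""))
        rows.append((rec["app_id"], downloads))
    return rows

def load(rows, conn):
    # Idempotent load: re-running the pipeline upserts rather than duplicates.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS apps (app_id TEXT PRIMARY KEY, downloads INTEGER)"
    )
    conn.executemany("INSERT OR REPLACE INTO apps VALUES (?, ?)", rows)
    conn.commit()

def run_pipeline(conn):
    load(transform(extract()), conn)

conn = sqlite3.connect(":memory:")
run_pipeline(conn)
```

Keeping extract, transform, and load as separate functions makes each stage independently testable, which matters once pipelines grow to many sources.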

REQUIREMENTS

  • 2+ years of work experience with automated data collection and cleaning.
  • Experience with Amazon Web Services (e.g. ECS, RDS, Elasticsearch, Elastic Beanstalk).
  • Good knowledge of Java and/or Python.
  • Experience with Docker.
  • Familiar with relational data stores (e.g. PostgreSQL, Redshift).
  • Solid DevOps skills.
  • Fluent English.
  • Self-motivated, team player comfortable in a small, intense and high-growth start-up environment.


PREFERRED QUALIFICATIONS

  • Strong educational background: Bachelor's or Master's degree in CS or another technical, science, or math field.
  • Proficiency in other languages (e.g. Bash, HTML, JavaScript).
  • Experience with major crawling libraries (e.g. jsoup, Selenium, Scrapy, Beautiful Soup).
  • Ability to identify and resolve performance issues.
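As a small illustration of the crawler-style extraction this role involves, the sketch below pulls structured fields out of an HTML snippet. It uses only the standard library's `html.parser` as a stand-in for the libraries named above (jsoup, Scrapy, Beautiful Soup); the HTML, class names, and fields are all hypothetical:

```python
# Illustrative only: extract app titles from a (made-up) listing page using
# the stdlib HTML parser. Production crawlers would use Scrapy/Beautiful Soup
# and fetch real pages rather than an inline string.
from html.parser import HTMLParser

class AppTitleParser(HTMLParser):
    """Collects the text content of every <h2 class="app-title"> element."""

    def __init__(self):
        super().__init__()
        self._in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2" and ("class", "app-title") in attrs:
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.titles.append(data.strip())

page = '<div><h2 class="app-title">Example App</h2><p>4.5 stars</p></div>'
parser = AppTitleParser()
parser.feed(page)
print(parser.titles)  # ['Example App']
```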

