Senior Data Engineer

Berlin, Berlin, Germany

PriceHubble

Leading the development of data- and explainable-AI-driven real estate valuations and insights globally.


PriceHubble is a PropTech company set out to radically improve the understanding and transparency of real estate markets through data-driven insights. We aggregate and analyse a wide variety of data, run big data analytics and use state-of-the-art machine learning to generate stable and reliable valuations and predictive analytics for the real estate market. We are headquartered in Zürich, with offices in Paris, Berlin, Hamburg, Vienna, Prague, Amsterdam and Tokyo. We work on international markets, are backed by world-class investors and treasure a startup environment with low bureaucracy, high autonomy and a focus on getting things done.

Your role

Data engineers are the central productive force of PriceHubble. As a Senior Data Engineer, your mission will be to guide the data engineering work at PriceHubble. You will be given responsibility for substantial parts of our data engineering systems. Your daily challenge will be to mine a wide range of new datasets of all sorts. Doing so will expose you to a wide variety of tasks, ranging from building the infrastructure (Spark on Kubernetes), to building machine learning models that extract features from raw data, to creating pipelines that process and expose new data sources.

Your Mindset

You are convinced that success in data science is achieved through data monopolies. You are highly motivated to join an organisation that is committed to building best-in-class data engineering software for acquiring, processing and enriching real estate data.

The following challenges speak to you:

  • gather vast amounts of data about real estate
  • consolidate, improve, and link this data to generate data sets no one else has on the market
  • do that all over the world

You are keen to join a startup right in its growth phase, and you are not afraid to refactor code to bring it up to the new engineering standards that will support the growth of the organisation.

At work, your team is your main asset: you are keen to mentor fellow team members. In a startup, you are committed to creating the company you want to work in, in terms of competence, standards and mindset.

Responsibilities

  • Extract, clean up, structure and transform complex raw and processed datasets to extract insights from them
  • Retrieve a wide variety of datasets and integrate them into the data pipeline
  • Create and maintain an efficient data infrastructure
  • Build data enrichment pipelines, using machine-learning when appropriate
  • Continuously provide new ideas to improve our engines and products

Requirements

  • MSc in Computer Science or equivalent
  • At least 3 years of experience in a similar position
  • Proficiency in at least one object-oriented programming language and at least one scripting language; Python is a strong advantage
  • In-depth understanding of basic data structures and algorithms
  • Familiarity with software engineering best practices (clean code, code review, test-driven development, ...) and version control systems
  • Experience with the ETL and data processing tools we’re using is a strong advantage (PySpark, PostgreSQL, Airflow)
  • Working experience with cloud providers (GCP, AWS or Azure)
  • Advanced knowledge of relational databases and SQL
  • Experience with Docker and Kubernetes orchestration is a strong advantage
  • Understanding of core machine learning concepts is an advantage
  • Previous experience working in agile teams, and looking forward to doing so again
  • Comfortable working in English, with strong reading skills and a good spoken command of the language


* We are interested in every qualified candidate who is eligible to work in the European Union but we are not able to sponsor visas.

Benefits

Join an ambitious and hungry team and enjoy the following benefits:

💰 Competitive salary because we always want to attract the best talent.

📘 Learning & Development program - We want you to feel happy and confident about improving your skills, your experience level and your personal development.

🏢 Very well-located offices with a great remote work policy and the possibility to work from different places.

🕓 Flexible working hours and work-life balance.


