Data Engineer (Berlin/Paris)

Berlin, Berlin, Germany

Applications have closed
PriceHubble

Posted 1 month ago

PriceHubble is a PropTech company set to radically improve the understanding and transparency of real estate markets through data-driven insights. We aggregate and analyse a wide variety of data, run big data analytics and use state-of-the-art machine learning to generate high-quality valuations and predictive analytics for the real estate market. We are headquartered in Zürich, with offices in Berlin, Paris, Tokyo and Vienna. We work on international markets and are backed by world-class investors. We offer a startup environment with low bureaucracy and an international team and business.

Your role

Data engineers are the central productive force of PriceHubble. You will be given responsibility for substantial parts of the data engineering systems that feed our valuation models for real estate.

Your daily challenges will range from:

  • mining a wide variety of new datasets of all sorts
  • performing large-scale spatial data operations

In doing so, you will take on tasks spanning everything from improving our big data infrastructure to building pipelines that process and expose new data sources.
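As a miniature illustration of the kind of spatial data operation mentioned above, the sketch below assigns property records to districts by a point-in-bounding-box lookup. The district names, coordinates, and record fields are invented for this example; real pipelines would use proper geometries and a spatial library.

```python
# Illustrative sketch: assign listings to districts via bounding boxes.
# District boxes and listing records are hypothetical sample data.

def assign_district(lon, lat, districts):
    """Return the name of the first district whose bounding box
    (min_lon, min_lat, max_lon, max_lat) contains the point, else None."""
    for name, (min_lon, min_lat, max_lon, max_lat) in districts.items():
        if min_lon <= lon <= max_lon and min_lat <= lat <= max_lat:
            return name
    return None

districts = {
    "Mitte":     (13.33, 52.50, 13.43, 52.55),
    "Kreuzberg": (13.38, 52.48, 13.45, 52.505),
}

listings = [
    {"id": 1, "lon": 13.40, "lat": 52.52},  # falls inside Mitte's box
    {"id": 2, "lon": 13.60, "lat": 52.40},  # outside both boxes
]

for rec in listings:
    rec["district"] = assign_district(rec["lon"], rec["lat"], districts)
```

At scale this lookup would be replaced by an indexed spatial join (e.g. with GeoPandas or PostGIS), but the shape of the operation is the same.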


Your Mindset

You are convinced that success in data science is achieved via data monopolies. You are highly motivated to join an organization committed to building best-in-class data-engineering software for acquiring, processing, and enriching real-estate data.

The following challenges speak to you:

  • gather vast amounts of data about real estate,
  • consolidate, improve, and link these data to generate high-quality datasets,
  • scale data pipelines with global constraints.


Responsibilities

  • Extract, clean up, structure, and transform complex raw and processed datasets to surface insights from them
  • Retrieve a wide variety of datasets and integrate them into the data pipeline
  • Create and maintain an efficient data infrastructure
  • Build data enrichment pipelines, using machine-learning when appropriate
  • Continuously provide new ideas to improve our engines and products
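The first two responsibilities above can be sketched as a minimal extract-clean-enrich pipeline. The field names and cleaning rules here are invented for illustration only; they are not part of PriceHubble's actual stack.

```python
# Hypothetical minimal clean-and-enrich step for raw listing rows.
# Field names ("price", "area_sqm") and rules are assumptions.

def clean(rows):
    """Drop rows missing price or area and coerce numeric fields."""
    cleaned = []
    for row in rows:
        if not row.get("price") or not row.get("area_sqm"):
            continue
        cleaned.append({
            "id": row["id"],
            "price": float(row["price"]),
            "area_sqm": float(row["area_sqm"]),
        })
    return cleaned

def enrich(rows):
    """Add a derived price-per-square-metre field."""
    for row in rows:
        row["price_per_sqm"] = round(row["price"] / row["area_sqm"], 2)
    return rows

raw = [
    {"id": 1, "price": "450000", "area_sqm": "90"},
    {"id": 2, "price": None, "area_sqm": "75"},  # dropped: missing price
]

result = enrich(clean(raw))
```

In production, steps like these would typically be expressed as PySpark transformations scheduled by Airflow, as the requirements below suggest.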

Requirements

  • MSc in Computer Science or equivalent
  • Proficiency in at least one object-oriented programming language and at least one scripting language; Python is a strong advantage
  • In-depth understanding of basic data structures and algorithms
  • Familiarity with software engineering best practices (clean code, code review, test-driven development, ...) and version control systems
  • Experience with the ETL and data processing tools we’re using is an advantage (PySpark, Airflow, PostgreSQL, ...)
  • Working experience with cloud providers (Google Cloud, AWS or Azure)
  • Advanced knowledge of relational databases and SQL
  • Experience with Docker and Kubernetes orchestration is a strong advantage
  • Understanding of core machine learning concepts is an advantage
  • Comfortable working in English, with a good reading and spoken command of the language

Benefits

🕓Flexible work hours

👖Casual dress code

🍏Free snacks, fruits, coffee, beers, sodas

🍺Thursday drinks

✈️Relocation package

📘L&D program

🏢Well-located offices

💰Competitive salary

Job tags: Airflow AWS Big Data Data Analytics Data pipelines Engineering ETL Kubernetes Machine Learning PySpark Python SQL
Job region(s): Europe
