Data Engineer

Paris, Île-de-France, France - Remote

PriceHubble

Leading the development of Data & explainable AI-driven real estate valuations and insights globally.


About PriceHubble

PriceHubble is a PropTech company set out to radically improve the understanding and transparency of real estate markets through data-driven insights. We aggregate and analyze a wide variety of data, run big data analytics, and use state-of-the-art machine learning to generate stable and reliable valuations and predictions for the real estate market. We are headquartered in Zürich, with offices in Paris, Berlin, Vienna, Hamburg, Amsterdam, Prague and Tokyo. We work on international markets, are backed by world-class investors, and treasure a startup environment with low bureaucracy, high autonomy and a focus on getting things done.

Your role

Data engineers are the central productive force of PriceHubble. As a mid-level data engineer, your mission will be to improve data engineering across PriceHubble. You will take responsibility for parts of our data engineering systems, and your daily challenge will be to add, improve, and maintain a wide variety of datasets. Doing so will expose you to tasks ranging

  1. from building the infrastructure (Spark on Kubernetes),
  2. through creating pipelines to process and expose new data sources,
  3. to building machine-learning models that extract features from raw data.

Your Mindset

You are convinced that success in data science is achieved through data monopolies. You are highly motivated to join an organization that is committed to building best-in-class data engineering software for acquiring, processing, and enriching real estate data.

The following challenges speak to you:

  • gather vast amounts of data about real estate
  • consolidate, improve, and link this data to generate datasets no one else on the market has
  • do that all over the world

You are keen to join a startup right in its growth phase, and you are not afraid to refactor code to bring it up to the new engineering standards that will support the organization's growth.

At work, your team is your main asset: you are keen to mentor junior team members. In a startup, you are committed to creating the company you want to work in, in terms of competence, standards, and mindset.

Responsibilities

  • Extract, clean, structure and transform complex raw and processed datasets to derive insights from them
  • Retrieve a wide variety of datasets and integrate them into the data pipeline
  • Create and maintain an efficient data infrastructure
  • Build data enrichment pipelines, using machine-learning when appropriate
  • Continuously provide new ideas to improve our engines and products

Requirements

  • MSc in Computer Science or equivalent
  • At least 2 years of experience in a similar position
  • Proficiency in at least one object-oriented programming language and at least one scripting language
  • In-depth understanding of basic data structures and algorithms
  • Familiarity with software engineering best practices (clean code, code review, test-driven development, ...) and version control systems
  • Advanced knowledge of relational databases and SQL
  • Comfortable working in English; you read it very well and have a good spoken command of it

Nice to have

  • Proficiency in Python is a strong advantage
  • Experience with the ETL and data processing tools we’re using (PySpark, PostgreSQL, Airflow) is a strong advantage
  • Experience with Docker and Kubernetes orchestration is a strong advantage
  • Working experience with cloud providers (GCP, AWS or Azure)
  • Understanding of core machine learning concepts is an advantage
  • Previous experience working in agile teams, and you look forward to doing it again


* We are interested in every qualified candidate who is eligible to work in the European Union but we are not able to sponsor visas.

Benefits

Join an ambitious and hungry team and enjoy the following benefits:

💰 Competitive salary, because we always want to attract the best talent.

📘 Learning & Development program - We want you to feel happy and confident about improving your skills, gaining experience, and succeeding in your personal development.

🏢 Very well-located offices with a great remote work policy and the possibility to work from different places.

🕓 Flexible working hours and work-life balance.

