Data Engineer

Zwolle, Netherlands


wehkamp

Developments in retail move at lightning speed. That is why we reinvent ourselves for our customers time and again. And we have been doing so for more than 70 years!


Company Description

About Wehkamp
Wehkamp believes in combining fashion and lifestyle in a smart way with online technology and a great shopping experience, in order to offer a relevant and inspiring platform to its customers and partners. We are ambitious! We must be, with 2.9 million regular customers and about 600,000 visitors a day. In our assortment you will find more than 300,000 different products and over 2,500 brands. 75% of our customers shop on a mobile device, mainly through the app.

About Wehkamp Tech
With over 100 tech colleagues from all over the world we work on offering a relevant assortment at the right time. We use many technologies to accomplish this, like microservices in containers and functions as a service. We also collect a lot of data that we use to train our machine learning algorithms. We encourage everyone to think ahead and find smart new ways to use online technology to offer our customers the best experience. Technological innovation is not only crucial for the best shopping experience but it also plays a key role in the logistic processes.

About our Data Tech stack
PySpark, Databricks (Delta), Kafka, AWS (S3), Terraform
Grafana, Redis, Elasticsearch, PostgreSQL, BigQuery, Google Analytics (4)
Jira, Oracle, Kinesis

Job Description

The data engineering department within Wehkamp Retail Group (WRG) is building the next version of our data architecture on our Delta data lake. To do so, we are looking for a (senior) data engineer to help us enable teams to work with data via our (near) real-time data platform.

As a data engineer you will help WRG develop scalable data processing pipelines. Over the last few years we developed a framework that has proven to successfully enable the company with scalable (near) real-time data solutions via AWS / Spark on our Databricks platform. Our next goal is to leverage this framework and reach a new level of maturity, for instance by implementing a multi-label approach, reliable CI/CD processes, high-quality data governance, and optimized data flow, collection and integration across teams. The ideal candidate gets energy from building data pipelines, redesigning data architectures and challenging other team members, and has experience working with Agile methodologies.

The team
You will be working in the data engineering team, which is part of the tech-core department. Within this team we share knowledge, have a high degree of freedom for implementing new ideas and strive to deliver quality. We are frontrunners when it comes to implementing new innovative Databricks / Spark products and like to present our findings.

Responsibilities

  • Help build the infrastructure, architecture and data pipelines that enable (near) real-time data processing and delivery;
  • Deliver valuable data solutions that meet functional requirements and product specifications;
  • Consult and collaborate with our stakeholders (e.g. data science, reporting) on data engineering solutions, from both a producer and a consumer point of view;
  • Identify opportunities, improvements and solutions through research and proof-of-concepts to shape our roadmap and enable data as a strategic business asset.

You will report to the Tech Lead of the domain team you join. The team will be decided closer to your starting date, based on what you and we consider the best fit at that point in time.

Qualifications

What do you have?  

  • At least 3 years of experience as a Data Engineer, or as a Software Engineer with a focus on building (scalable) data pipelines in distributed environments;
  • Experience working with AWS, Python, Spark / PySpark, SQL and NoSQL. Experience with Databricks (Delta) and/or stream-processing systems like Kafka is a plus;
  • One or both of:
    • DevOps (Terraform / Git / AWS)
    • Data warehousing (ETL)
  • You are self-motivated, foster a growth mindset and believe in lifelong learning;
  • Domain knowledge in retail or finance is a plus;
  • A bachelor's or master's degree in an area like Mathematics, Physics, Computer Science, Engineering, Econometrics or Business Analytics is a plus;
  • You have EU citizenship.

What we offer

You become part of the Wehkamp Tech Hub. Working at the Tech Hub means working in an innovative and inspiring high-tech working environment. In our new head office, the Wehkamp Tech Hub has its own environment with plenty of room for collaboration, learning from each other and presenting results.

Please check out: https://medium.com/wehkamp-techblog

Other benefits

  • Possibility to work mostly remotely;
  • Money;
  • 10.5% holiday allowance;
  • 30 holiday days;
  • Annual performance bonuses;
  • Onsite gym;
  • Pension scheme;
  • Discounts on health insurance;
  • Room for growth;
  • Staff discount on almost the entire range of Wehkamp & Kleertjes.com.

Additional Information

Transparency is very important in the application process. After you express your interest with your application, you will immediately receive a link to a personal status page, in which you can find the status of your application and all communication.

What happens after your application?
The process for this vacancy is as follows:

  1. Do your expectations and our wishes match? Then we will invite you to a (video) call with the recruiter.
  2. First interview with the hiring manager and tech lead, focusing on the cultural and technical fit.
  3. If that goes well, we'll move on to a case or technical assessment; based on this, you'll have the second interview.

We strive to give feedback as soon as possible and aim to wrap up the whole recruitment process within two weeks.

