Data Engineer

Madrid, Comunidad de Madrid, Spain

Applications have closed

Scalefast

Scalefast is changing the way brands do direct-to-consumer eCommerce. With our flexible commerce cloud and rapid speed-to-market, experience eCommerce the way it should be.


Scalefast helps amazing brands develop a successful Direct-to-Consumer online business. Our next-generation technology platform is built to optimize modern eCommerce. Our end-to-end infrastructure includes global merchant-of-record agreements, fulfilment, subscription, loyalty programs and finance functions to deliver new revenue and delightful brand experiences. With a global presence (USA, Europe & Asia), Scalefast has proven itself with global brands like L’Oreal, Microsoft, Club Med, Hasbro and Sega.


On top of our end-to-end DTC eCommerce offering, we provide our clients with Air360, a first-class Customer Experience and Analytics platform, which enables brands to understand how users interact with websites and mobile apps to boost customer engagement and increase conversion rates.


The mission of the Data Engineer at Scalefast is to make data accessible so that internal teams and clients can use it to evaluate and optimize their performance and particular use cases. He or she builds systems that collect, transform, and convert raw data into usable information for data analysts to interpret, and thereby contributes to exploiting the wealth of information and data collected by Scalefast.

The Data Engineer is a member of the Business Data Team and reports to the VP Business Data.


Responsibilities

The main responsibilities of the data engineer include:

  • Develop, construct, test and maintain the BI architecture and data sources in Scalefast
  • Align the BI architecture with business requirements
  • Data acquisition and ETL processes: build the connections between different internal and external sources and the BI environment (Datalake)
  • Maintain accurate content, format, and integrity throughout the lifetime of data in the BI environment
  • Identify ways to improve data reliability, efficiency, and quality in the BI environment
  • Use large data sets (terabytes of information) to address business issues
  • Document code and technical activities so that others can use the code and processes
  • Oversee security of the Datalake to ensure compliance with GDPR and PII requirements

What we offer

  • Competitive salary and a career path adapted to each person's abilities and experience within a company that is growing continuously
  • 2 days a week working from home
  • A flexible schedule and full work-life balance, including a reduced timetable during one month in summer
  • Become part of a happy multicultural team where you can contribute your experience and learn from the experience of others
  • Work with amazing brands
  • Get the opportunity to influence the future of our services and platform
  • Excellent working environment with frequent social activities (hackathons, Spartan races, quarterly whole-team social event)
  • Central Madrid office located an 8-minute walk from Atocha train station, with a bus stop and BiciMad station right outside the office
  • Kitchen and dining facilities as well as a fully stocked games room with games consoles etc. - great to disconnect from work for a while and have fun with your colleagues.
  • Discounted parking space in the office building if you’re coming by car, bicycle parking for those worried about their carbon footprint.
  • Access to private sales by some of the exclusive brands we work with
  • Enrollment in English lessons that take place during working hours
  • Mental Health Wellbeing Program

Requirements

  • Degree in Computer Science or Big Data.
  • 2+ years of experience and a proven track record as a data engineer (creating, coding, maintaining).
    • Deep knowledge of SQL and NoSQL query languages (experience with RDS, MariaDB, DynamoDB, and data + schema architectures such as S3 + Athena is nice to have).
    • Proven track record in creating and managing ETL pipelines.
    • Experience with different data architecture technologies, frameworks, and tools, with a deep understanding of cloud environments (AWS experience is a must; Azure, Oracle, and/or GCP are a plus).
    • Knowledge of and/or experience with the Hadoop ecosystem.
    • Fluent in recent versions of Python (PEP 8-compliant code, linting, best practices for reusable code, open to pair programming and programming in teams).
    • Fluency in creating and maintaining infrastructure as code (Terraform) is a plus.
    • Fluent with GitLab projects, including CI/CD pipelines on GitLab.
    • Fluent with Datadog.
    • Experience with the AWS data stack (S3, Lake Formation, Glue, EMR, IAM, API Gateway, Lambda, Athena, KMS, SNS, Secrets Manager, QuickSight, RDS, Step Functions, DynamoDB, EventBridge, GuardDuty, CloudTrail, EC2).
    • Experience working with other data experts (data analysts, data scientists) and IT experts

Soft skills

  • Rigor in coding and thorough testing
  • Curiosity, active listening, and a willingness to understand business needs
  • Collaboration: works as part of a team, capable of working with colleagues from the Business team as well as external technical teams
  • Proactivity/Problem-solving: capable of proposing solutions and improvements when issues arise
  • Communication: able to describe and explain activities and technical issues to non-data experts

Languages

  • English
  • French and Spanish are a plus


Tags: APIs Architecture Athena AWS Azure Big Data CI/CD Computer Science DynamoDB EC2 E-commerce ETL Finance GCP GitLab Hadoop Lambda MariaDB NoSQL Oracle Python Security SQL Terraform Testing

Perks/benefits: Competitive pay Flex hours Snacks / Drinks Team events

Region: Europe
Country: Spain
Category: Engineering Jobs
