Data Engineer

Remote (United Kingdom)


👋 Hey there, join us in revolutionising the way we understand and mitigate human risk!   

  

We’re looking for a Data Engineer to join us as the first hire in our new Data team.

  

We want this person to be responsible for data pipelines, data architecture and design, data visualisation, and all things ETL. The ideal candidate will have strong experience with Databricks, programming skills in Python (PySpark) and SQL, and a solid grasp of data modelling techniques.

  

Who are we?   

  
At CultureAI, we aim to help employees prevent cyber breaches. We're looking for passionate, driven people to join our team and help us transform how organisations manage human cyber risk. We're disrupting the security awareness industry with our data-driven approach to measuring employee security behaviours and driving personalised security coaching and interventions, and we'd love for you to join us!

 

Our mission is to make the world more secure. We see direct parallels between how we elevate security for our customers and how we foster it within our own company. We believe in creating a safe space for our employees to express themselves, innovate, and educate. Diversity and inclusion are at the core of what we do, helping us drive our security ambitions and make the world a more secure place for all.

  

Day in the life   

  

  • Collaborating with multiple stakeholders to understand their data requirements, including reporting needs, and designing scalable data architectures to meet them.

  • Developing and maintaining data pipelines using Databricks, with some use of Fivetran.

  • Writing efficient, optimised Python data-processing code using Databricks and Spark.

  • Performing data cleansing, enrichment, and transformation to prepare data for analysis and reporting.

  • Developing and maintaining Power BI dashboards to support decision-making across the business.

  • Collaborating with internal security experts to establish and enforce data policies and standards.

  • Creating and maintaining comprehensive documentation for data architectures, pipelines, and processes.

  • Sharing knowledge and best practices with team members and stakeholders.
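The cleansing, enrichment, and transformation work described above has a simple core shape. The sketch below shows it in plain Python so it runs standalone; on Databricks this logic would typically be expressed as PySpark DataFrame operations, and the event fields here are hypothetical examples, not CultureAI's actual schema:

```python
# Minimal sketch of a cleanse/enrich/transform step over hypothetical
# security-event records. On Databricks the same logic would usually be
# PySpark DataFrame operations (filter / withColumn / select).
from datetime import datetime, timezone

def cleanse_and_enrich(raw_events):
    """Drop incomplete records, normalise fields, and derive a risk flag."""
    cleaned = []
    for event in raw_events:
        # Cleansing: skip records missing required fields
        if not event.get("user_email") or event.get("timestamp") is None:
            continue
        cleaned.append({
            # Transformation: normalise the email and parse the timestamp
            "user_email": event["user_email"].strip().lower(),
            "occurred_at": datetime.fromtimestamp(event["timestamp"], tz=timezone.utc),
            "behaviour": event.get("behaviour", "unknown"),
            # Enrichment: derive a simple high-risk flag (hypothetical rule)
            "high_risk": event.get("behaviour") in {"phishing_click", "credential_reuse"},
        })
    return cleaned

events = [
    {"user_email": " Alice@Example.com ", "timestamp": 1700000000, "behaviour": "phishing_click"},
    {"user_email": None, "timestamp": 1700000001},  # dropped: no email
]
rows = cleanse_and_enrich(events)
print(rows[0]["user_email"], rows[0]["high_risk"])  # alice@example.com True
```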

 

What you bring to the team   

  
The ideal candidate will have strong experience across a range of responsibilities, from collaborating with senior stakeholders to meet their teams' data needs efficiently, to designing and implementing data models using Databricks on AWS.

  

  • Strong programming skills in Python (PySpark) and SQL.

  • Solid experience with Databricks or Apache Spark.

  • Experience with relational databases such as MySQL.

  • Experience designing and implementing data integration pipelines using tools like Fivetran, and applying the medallion architecture for data cleansing, enrichment, and validation.

  • Understanding of data warehousing concepts and architectures, including good knowledge of data modelling techniques such as star schemas and normalisation.

  • Knowledge of data partitioning, indexing, and query optimisation techniques.

  • Experience with data streaming technologies such as Apache Kafka and Spark Streaming.

  • Proficiency in creating reports using Power BI.
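The medallion idea mentioned in the list above is just a layered refinement of data: bronze (raw) to silver (validated) to gold (aggregated, star-schema-friendly). A hedged plain-Python sketch of that flow, with hypothetical table and column names (on Databricks each layer would normally be a Delta table written via PySpark):

```python
# Sketch of the medallion pattern: bronze (raw) -> silver (validated) ->
# gold (aggregated). All names here are hypothetical illustrations.

# Bronze: raw ingested rows, warts and all
bronze = [
    {"user_id": 1, "behaviour": "phishing_click", "count": "3"},
    {"user_id": 2, "behaviour": "phishing_click", "count": "1"},
    {"user_id": 1, "behaviour": None, "count": "2"},  # invalid: no behaviour
]

# Silver: validated and correctly typed
silver = [
    {**row, "count": int(row["count"])}
    for row in bronze
    if row["behaviour"] is not None
]

# Gold: fact rows aggregated by a dimension key (behaviour), roughly the
# grain a star-schema fact table would store
gold = {}
for row in silver:
    gold[row["behaviour"]] = gold.get(row["behaviour"], 0) + row["count"]

print(gold)  # {'phishing_click': 4}
```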

 

What do we offer? 
We want to make sure you feel taken care of and are getting the best rewards possible. 
Here is a taste of some of the benefits you’ll receive as a valued member of the CultureAI team: 

 

⚕ Private healthcare scheme 
💸 Share options 
🏖 30 days holiday (including 3 CultureAI closure days to prioritise rest for our team at the end of the year) + bank holidays 
🐶 Remote work or pet-friendly offices, the choice is yours 
📖 Training opportunities 

🕹️ Monthly early-finish gaming Fridays 
🎉 Regular socials (non-compulsory and not just going to the pub!) 
Plus many more!

 

Next steps 

Stage 1 - Exploring your goals with our internal recruiter
Stage 2 - Deep dive with our VP of Engineering
Stage 3 - Completion of a take-home task
Stage 4 - Panel interview with our technical team
