Lead Data Engineer

London, England, United Kingdom

Applications have closed

Flock

Fair and flexible motor fleet insurance for cars, vans, and mixed fleets. Expert support, online policies and 24/7 claims. With Flock, safer fleets pay less.


Who are we?

Flock is a fully digital insurance company for commercial motor fleets, on a mission to make the world quantifiably safer.

With Flock, safer fleets pay less. Hundreds of companies trust us to protect their vehicles and drivers with connected insurance that enables and incentivises safer driving.

We're proud to be supported by some of the world's leading VCs, including Chamath/Social Capital and Anthemis. Our aim is to become the go-to insurer for connected and autonomous vehicles.

We are now investing heavily in what we know to be the key to our future success - our people.


Purpose of the Role

You will develop our next-generation digital insurance platform and data architecture, working across the full data stack to design, build and deploy features using cutting-edge technologies across AWS, Snowflake, and dbt Cloud. You will play a senior role in shaping our architectural and engineering decisions and suggest new tools and technologies we should adopt to maintain our edge, empowered to exercise your own judgment in key decisions while working in a highly skilled Agile squad.

As the Lead Data Engineer you will:

  • Design, develop and implement a data architecture and roadmap that supports business objectives and enables data-driven decision making
  • Build, maintain and improve data models that support business processes and decision making
  • Take responsibility for the accuracy and integrity of data surfaced within the business, however it is surfaced, ensuring it is auditable and verifiable and that anomalies are proactively flagged for investigation
  • Develop, maintain and optimise data warehousing and storage solutions, including, but not limited to: dbt Cloud, Fivetran, Snowflake, AWS RDS (Postgres), S3 and DynamoDB
  • Create and manage data pipelines for data ingestion, processing and analytics
  • Collaborate with the tech team and data scientists to build pre- and post-processing data pipelines
  • Ensure data security, privacy, and data governance standards are met
  • Develop and maintain documentation of data architecture, data models, data warehousing solutions and pipelines
  • Empower power users such as finance, revenue, and insurance teams to access and leverage data effectively
  • Stay current with new technologies and best practices in data engineering and bring innovation to the team
  • Steer and manage the day-to-day demands of the business, balancing BAU requests and change initiatives
  • Elicit reporting requirements from the business and lead their triage, elaboration and delivery
  • Deliver high-quality, well-tested data features into the production system on a regular basis
  • Review other engineers’ code via pull requests on GitHub
  • Pair program with other data and analytics engineers
  • Mentor less experienced team members
  • Take part in Agile ceremonies such as planning, retrospectives and standups
  • Work with the Product and Tech teams to design and estimate features
  • Proactively research new technologies and bring them to the team to help maintain our edge

Requirements

What our ideal candidate will have:

  • Significant experience working as a data engineer or full stack analytics engineer (Looker is a bonus but not essential). We expect to see very strong dbt experience.
  • Significant experience with AWS data technologies, including RDS, S3 and DynamoDB
  • Experience with data streaming technologies such as Kinesis
  • Experience with orchestration tools such as Airflow
  • Experience dealing with large amounts of real-time data
  • Experience working in a fast-paced Agile development environment
  • Understanding and experience of CI/CD
  • Experience with relational databases (e.g. PostgreSQL)

The wow factor (not required but the stuff we love to see!)

  • Experience with TypeScript, NodeJS
  • Experience of implementing observability and monitoring solutions in AWS
  • A strong understanding of data governance and quality control procedures
  • An interest in data science
  • Open source contributions or other engagement with the software development community
  • Experience working at a high-growth startup
  • A BSc or above in Computer Science/Mathematics or a related field
  • An understanding of how insurance works



Perks/benefits: Startup environment

Region: Europe
Country: United Kingdom
