Data Engineer - Search

United Kingdom - Remote

Applications have closed

Cytora

Cytora transforms underwriting for commercial insurance. Our platform helps insurers to underwrite more accurately, reduce frictional costs, and achieve profitable growth.


We are a high-growth startup using data and machine learning to revolutionise the insurance industry. You will be joining an established team, working to build products that are fundamentally changing the way insurers see the world, enabling them to move from an assumption-based understanding of risk to an empirical, data-driven view.

Our data products, used by international insurers, give them access to more data than ever before, dramatically accelerating their learning. To this end, we help them acquire data: extracting information from structured and unstructured documents using CV and NLP, linking datasets and resolving entities, and using internal and external data to provide insight and prediction (including human-assisted ML).

Responsibilities include:

  • Building, managing and monitoring ETL pipelines
  • Finding, acquiring, performing initial analysis of, and ingesting new data sources
  • Building tools for data validation - data quality checks, data lineage, monitoring, tracking changes (a brief sketch follows this list)
  • Helping to develop our data and search platform, and continuously evolving data pipelines, components, processes and documentation
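
To give a flavour of the data-quality checks mentioned above, here is a minimal sketch in Python/pandas; the `companies` DataFrame, its columns and the specific checks are invented for illustration and are not taken from Cytora's stack.

```python
import pandas as pd

# Hypothetical extract of a newly ingested source; names and columns are illustrative only.
companies = pd.DataFrame({
    "company_id": [1, 2, 2, 4],
    "name": ["Acme Ltd", "Borealis plc", "Borealis plc", None],
    "employee_count": [120, -5, 40, 300],
})

def run_quality_checks(df: pd.DataFrame) -> dict:
    """Summarise simple data-quality issues so they can be tracked over time."""
    return {
        "duplicate_ids": int(df["company_id"].duplicated().sum()),
        "missing_names": int(df["name"].isna().sum()),
        "negative_employee_counts": int((df["employee_count"] < 0).sum()),
    }

print(run_quality_checks(companies))
# {'duplicate_ids': 1, 'missing_names': 1, 'negative_employee_counts': 1}
```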

Requirements

  • Python experience (upper-mid to expert level according to Pluralsight's skill evaluation, i.e. at least 180 points)
  • SQL (SELECT, GROUP BY, HAVING, CTEs, JOINs, window functions such as RANK, DENSE_RANK, etc.) (expert-level SQL knowledge is a significant advantage; a short illustrative query follows this list)
  • Source control (git)
  • A desire to work in an agile startup environment


Nice to have

  • Experience working with and integrating a large number of diverse external data sources, and/or an interest in Open Source Intelligence
  • Exposure to descriptive analysis - able to understand how data is generated and to spot patterns, errors and inconsistencies
  • Experience with bulk and streaming data processing, incl. ingesting data from both APIs and web scraping
  • Experience with data processing in Python - Pandas, Dask, PySpark
  • Experience in search, specifically Lucene-based engines - Solr / Elasticsearch (see the sketch after this list)
  • Some ETL/data orchestration tool experience, such as Dagster, Airflow, Apache NiFi, etc.
  • Experience with and understanding of data warehouses, OLAP, OLTP, star schema, Snowflake, etc.
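
Because the role is search-focused, here is a hedged sketch of indexing and querying a document with the Elasticsearch Python client. It assumes the 8.x client API and a cluster reachable at localhost:9200; the index and field names are made up for the example and are not specific to Cytora's platform.

```python
from elasticsearch import Elasticsearch

# Assumes the 8.x elasticsearch-py client and a local cluster.
es = Elasticsearch("http://localhost:9200")

# Index a toy company record; index and field names are illustrative only.
es.index(
    index="companies",
    id="1",
    document={"name": "Acme Engineering Ltd", "sector": "construction"},
)
es.indices.refresh(index="companies")

# Full-text match query against the name field.
resp = es.search(index="companies", query={"match": {"name": "acme"}})
for hit in resp["hits"]["hits"]:
    print(hit["_source"])
```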

Benefits

  • Stock options
  • Enhanced parental leave
  • Private health insurance - UK only*
  • Choice of laptop
  • Flexi-working
  • £2000 travel budget
  • Company trips

*We employ people across the UK and EU (using a third-party Employer of Record model), and inevitably the benefits we can offer vary slightly between territories due to local employment law and feasibility. Our salary range does not vary by territory.


Tags: Agile Airflow APIs Dagster Data pipelines Data quality Elasticsearch ETL Git Machine Learning NLP OLAP Open Source Pandas Pipelines PySpark Python Snowflake SQL Streaming

Perks/benefits: Career development Equity Gear Health care Parental leave Startup environment Travel

Regions: Remote/Anywhere Europe
Country: United Kingdom
Category: Engineering Jobs
