Data Ops Engineer

London, England, United Kingdom


LMAX Group

LMAX Group is a global financial technology company, which operates a leading institutional exchange for FX and delivers transparent, fair, precise and consistent execution, and a level playing field, to all market participants.


Background

LMAX Group is a FinTech company that designs, develops, builds and runs leading-edge financial exchanges in major financial centres around the world, specialising in fiat and crypto-currencies.

Our Data Operations team is responsible for the handling and storage of the company’s data, supporting our business intelligence and data analytics platforms. The team bridges the gap between Data Scientists, Software Engineers, and Infrastructure Engineers.

We're looking for an energetic engineer to join a very small team managing a platform of increasing business importance. We are given considerable autonomy in designing LMAX’s data analytics platform and in choosing the technologies that go into it. We’re expected to have a lightning-fast time to market and to do intra-day deployments – our platform is used constantly to inform business decisions.


This should describe you

  • You're a flexible, interdisciplinary engineer, continuously learning and testing new technologies.
  • You're comfortable in a fast-paced, agile development environment, and you highly value CI/CD methodologies and practices.

You will have these key skills

  • Coding and scripting experience (Bash, Python, Java, Go)
  • Experience with Infrastructure-as-Code practices and tooling, and working in a DevOps culture (Puppet, Ansible, Terraform, GitLab CI)
  • Familiar with "higher level" DevOps tooling & containers (Docker, Nomad, Terraform, Kubernetes)
  • Experience with different types of databases; relational (MySQL) and time series (InfluxDB, QuestDB)

Requirements

You will bring some of these skills; more importantly, you're interested in learning the others:

  • Familiarity with Data Analytics tooling and concepts (ETL/ELT, Metabase, Jupyter, Superset)
  • Familiarity with the concepts of data modelling, data warehousing, and data lakes
  • Data-driven monitoring and observability (Grafana, InfluxDB, Splunk, Prometheus, etc)
  • Familiarity with cloud technology (AWS)
  • DBA experience / database query tuning
  • Supporting CI/CD delivery pipelines (GitLab, Jenkins, etc)
  • Linux skills – building, configuring, monitoring, and automating (CentOS, Fedora)


What you and the team will be doing

  • Support our data repositories, pipelines, tooling, and presentation layers.
  • Work with Systems Engineers to maintain our data analytics platform.
  • Work with Data Scientists and Software Engineers to research or develop the tools necessary to process our data.
  • Design and build infrastructure for data storage and processing.
  • Proactively monitor systems and ingestion pipelines, identifying and addressing bottlenecks.
  • Automate software deployments and upgrades.
  • Troubleshoot and tune database read/write workloads.


