Machine Learning Engineer

Remote


Utility Warehouse

Get all your home services, including gas, electricity, broadband, mobile and home insurance from one company, helping you save time and money.


We are looking for experienced and innovative Machine Learning Engineers who are comfortable with diverse technology: everything from data transformations in our data warehouse and coding and deploying machine learning pipelines, to the continuous improvement of our MLOps Framework. We are looking for talented people who can demonstrate strong ownership, resulting in high-quality and valuable machine learning deployments and data products. Machine Learning Engineers at Utility Warehouse are expected to play a leading role and to enjoy the challenges that come from being the UK’s only multiservice utility provider. We are developing a diverse team of people who each bring their different strengths and approaches to our working community.
What we are working on
Our mission with data is to create value and impact for Utility Warehouse by scaling AI and automation across our organisation. For example, we are exploring our ability to drive product growth through machine learning, understanding our customers’ needs and improving customer interactions with UW using natural language processing. We are also using machine learning models in our partner app to equip our partners with the information they need to expand our customer base. Our exciting AI and machine learning journey is underway, and we are looking for motivated individuals to join us.
What you’ll be doing
- Designing and improving our MLOps Framework (ML tools, packages, etc.).
- Developing the analytics and machine learning training and prediction infrastructure.
- Partnering with Data Scientists to develop and implement machine learning models.
- Uniting with Software Engineering teams to define and develop best practices.
- Designing and building scalable, reliable and robust data pipelines to acquire, ingest, and process data from multiple sources.
- Data modelling: bringing structure to raw data that is aligned with business requirements and objectives.
- Maintaining a data product mindset: what needs to be improved next?
- Helping to create clarity by reducing unnecessary complexity.
- Partnering with stakeholders to understand their needs and build impactful analytics solutions.
- Evaluating and proposing multiple approaches to a problem, identifying the wider business implications of each.
- Sharing your knowledge and recent progress at the weekly Data Tribe meeting.
- Contributing to cross-functional problem-solving sessions.
What we are looking for
- Solid object-oriented programming skills in languages like Python.
- Experience with data pipelining frameworks (e.g. Kedro, Airflow) and MLOps platforms (GCP Vertex AI, AWS SageMaker).
- Experience with machine learning concepts.
- Experience writing high-quality and maintainable code, resulting in solid and reliable software.
- Experience in data warehousing and building ETL/ELT pipelines.
- Strong SQL and data modelling skills.
- Strong analytical and problem-solving skills.
- Ability to continually optimise and improve data pipelines.
- Strong interpersonal skills incorporating leadership, mentoring, team working, knowledge sharing and helping to create a positive culture.
- Ability to prioritise and organise your own workload in coordination with the team and stakeholders.
Don’t worry if you don’t have the whole list. If you feel you have most of it and can learn the rest quickly, please don’t hesitate to apply. Overall, we are looking for imaginative and pragmatic problem-solvers who want to help make a positive impact with data at UW.
Technology at UW 
- Our philosophy revolves around getting things done. We deliver with small, fully autonomous teams that have real ownership of their products, using a cutting-edge stack in a best-idea-wins meritocracy.
- We use the best tool for the job. You’ll have the freedom to explore and trial new tools and methods to see if there is a better option than the current one.
- We deploy to Kubernetes across AWS and GCP with some help from Terraform.
- We run a mature event-sourced microservice architecture using Kafka, NATS and gRPC. As much as possible we code in the open on GitHub. If it can be open-sourced, it shall. If we can submit a PR instead of building our own, we will.
About Utility Warehouse 
Utility Warehouse is an award-winning multi-utility provider, and our core offering is simple: all your utilities, one monthly bill. Our network of 45,000 partners adds the human touch to our delivery, helping us get the word out to our existing members and bring in new ones. We may be a FTSE 250 company, but we are far from a corporate monolith: we don’t dictate or micromanage, and we want our teams to be autonomous, proactive and self-organising. We love open source and actively encourage contributions back to the community. You can check out our labs, where we maintain fully supported open-source projects, at https://github.com/uw-labs and some code in the open repositories at https://github.com/utilitywarehouse
We’ve got the culture and focus of the earliest stage startups with the resources and strength of a loved and profitable brand. Our eyes are on meteoric growth over the next few years. All the pieces to make it happen are here, we need you to help us put them together. 
At Utility Warehouse, we are committed to hiring and cultivating a diverse team. If you are on the fence about whether you would be successful working with us, please apply anyway!


Tags: Airflow AWS Data pipelines Data Warehousing ELT Engineering ETL GCP GitHub Kafka Kubernetes Machine Learning ML models MLOps NLP OOP Open Source Pipelines SageMaker SQL Terraform

Perks/benefits: Career development Startup environment

Region: Remote/Anywhere
