Machine Learning Operations Engineer
Remote job
Applications have closed
Trilateral Research
Ethical AI solutions to address child exploitation, human security, human trafficking, air quality monitoring, community safeguarding, crisis management, sustainability, information security, ESG, public health and public safety.
We are an innovative company bringing the rigour of interdisciplinary research to solving complex societal problems. Our projects help users make decisions when tackling complex social challenges, such as protecting civilians during crises and preventing child exploitation and modern slavery. If these are the kinds of challenges you find inspiring, and you have a passion for solving problems that make a real and positive impact on the world, come talk to us.
As a Machine Learning Operations Engineer, you will join the Sociotech for Good business unit in providing ethical AI solutions to customers in the public and private sectors. In this role you will support the advancement of our CESIUM and STRIAD applications, ensuring that ML models are developed in a scalable and robust way for production. You will optimise the models developed, at both the code and cloud-infrastructure level, and deploy them into production. You will also support DevOps and system infrastructure as required and contribute to the Data Engineering tribe.
Production-ready Machine Learning via ML Ops & DevOps Support
- Analysing and optimising machine learning codebases to improve training and inference performance.
- Managing the production lifecycle of ML models, from initial deployment through testing and updating of subsequent iterations.
- Building and maintaining common libraries and frameworks relied upon by the data science team to train, evaluate and deploy models.
- Continuously improving the data pipeline.
- Collaborating with the data science team as required.
- Participating in planning efforts, estimation and peer code reviews, ensuring high-quality code standards and test-covered development.
- Continuously working to improve systems' efficiency and maintainability, reducing technical debt and following best practices.
- Supporting operational delivery and maintenance of products and projects as required.
Coding & documentation
- Writing high-quality, maintainable code.
- Participating in and supporting the peer code review process.
- Providing clear and useful technical documentation, including details of risks and potential mitigation actions, to the required standards.
Team support and development
- Supporting internal training and the development of engineering expertise across the company through participation in the data engineering tribe.
- Where agreed, providing mentorship and/or coaching to more junior team members, particularly in skills development.
Requirements
Interpersonal Skills:
- Strong organisational, verbal and written communication and collaboration skills
- Willingness to learn, share knowledge and improve
- Ability to build credible relationships and influence others
- A solution- and customer-focused mindset
General Knowledge and Technical Skills:
- Strong experience writing production-level code in Python.
- Experience with AWS: EC2, EKS, Lambda, S3, SageMaker, ECS.
- Experience working with containerised applications (Docker/Kubernetes).
- Hands-on experience with MLOps tools such as Kubeflow and MLflow.
- Strong experience with infrastructure as code: Terraform, CloudFormation.
- Ability to set priorities, maintain focus, take ownership of a task and drive it to conclusion without supervision.
- Demonstrable experience working on collaborative software projects and knowledge of clean software architecture principles.
- Experience writing technical documentation to industry standard.
- Baseline knowledge of engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing and operations.
- Ability and willingness to discover, evaluate, use and learn new technologies.
- Experience working within an agile framework using Scrum methods.
Required Education and Experience:
Essential:
- 2+ years' experience in ML Ops engineering in a SaaS environment.
Desirable:
- Master's degree in computer science, engineering or a related subject.
Location: This position is open to candidates based both in and outside of the UK
Contract type: Permanent, full-time employment contract
Salary: Commensurate with experience
Hours: Full time
In return, you get ...
- Flexible working hours
- Competitive pension scheme
- Remote working/working from home options
- A friendly and enthusiastic team of experts in the field
To apply: Please submit both your CV and a cover letter linking your experience to our requirements in order to have your application considered. References will be required prior to appointment.
We are an Equal Opportunities employer and positively encourage applications from suitably qualified and eligible candidates, regardless of their age, sex, race, disability, sexual orientation, gender reassignment, religion or belief, marital/civil partnership status, or pregnancy and maternity. We are a Disability Confident committed and Living Wage employer.
At Trilateral Research, we value privacy and data protection rights. We have a longstanding data protection culture and promote robust ethical standards in data management and research ethics. Please read our Recruitment Privacy Notice in relation to our recruitment activities before submitting your application to work with us.