Data Engineer

London, England, United Kingdom


Methods is currently recruiting for a Data Engineer to join our team on a permanent basis. This role will include some work on client sites and frequent visits to our main offices in London. 

 

Methods

Since our establishment in 1990, Methods has partnered with a range of central government departments and agencies to transform the way the public sector operates in the UK.

Our mission is to improve and safeguard public-facing services. We apply digital thinking to ensure the future of our public services is centred around our citizens.

Our human touch sets us apart from other consultancies, system integrators and software houses - we have a customer-centric value system whereby we focus on delivering what is right for our clients.

We passionately support our clients in the success of their projects while working collaboratively to share skill sets and solve problems. At Methods we have fun while working hard; we are not afraid of making mistakes and learning from them.

Requirements

The Data Engineer will:

  • Work with other members of the delivery team across a mix of large and small projects, and be responsible for translating data into insights that inform decisions for transformation projects and programmes.
  • Be responsible for identifying and using the most appropriate analytical techniques; developing fit-for-purpose, resilient, scalable and future-proof data services that meet user needs; and designing, writing and iterating code from prototype to production.
  • Communicate effectively across organisational, technical and political boundaries to understand the context and how to make complex and technical information and language simple and accessible for non-technical audiences.
  • Work with others to support the growth and development of the team.

Ideal Candidates will demonstrate:

  • Experience with Relational Databases and Data Warehousing
  • Experience of programming in languages such as Python, PySpark, R, Scala or Java
  • Experience processing large volumes of structured/semi-structured data
  • Experience producing data models and understanding where to use different types of data model
  • Knowledge of data cleaning, wrangling, visualisation and reporting, with an understanding of the best, most efficient use of associated tools and applications to complete these tasks
  • Cloud-based experience, preferably with Azure and/or GCP
  • Experience with ETL/ELT infrastructure
    • Azure Data Factory, Azure Synapse, Databricks
  • Experience with analytical/reporting tools
    • Power BI, Tableau, Looker, QuickSight
  • DevOps experience, such as using Git, CI/CD and Unit Testing
  • Attention to detail and the ability to quality assure their own and other team members’ work
  • Understanding of how to expose data from systems (e.g. through APIs), link data from multiple systems and deliver streaming services.
  • Knowledge of how to ensure that risks associated with deployment are adequately understood and documented
  • An ability to translate business requirements into technical specifications
  • Experience of iterative/agile development methodologies such as Scrum
  • A good understanding of Data Governance principles and the safe handling and processing of Personally Identifiable Information (PII)

Desirable Skills & Experience:

  • Streaming and real-time data
  • Knowledge of statistics principles necessary to interpret data and apply models. For example, knowledge of errors and confidence intervals to understand whether a relation seen in the data is real.
  • Exposure to high-performing, low-latency or large-volume data systems (e.g. 1 billion+ records, terabyte-scale databases)
  • Understanding of distributed computing, columnar databases, partitioning and MapReduce
  • Big Data Frameworks experience, such as Hadoop and Apache Spark
  • Experience with NoSQL environments, Data Lakes and Lakehouses (e.g. Cassandra, MongoDB or Amazon Neptune)

This role will require you to have, or be willing to go through, Security Clearance. As part of the onboarding process, candidates will be asked to complete a Baseline Personnel Security Standard (BPSS) check; details of the evidence required to apply can be found on GOV.UK. If you are unable to meet this and any associated criteria, your employment may be delayed or your application rejected. Details of this will be discussed with you at interview.

Benefits

Methods is passionate about its people; we want our colleagues to develop the things they are good at and enjoy.

By joining us you can expect:

  • Autonomy to develop and grow your skills and experience
  • Involvement in exciting project work that is making a difference in society
  • Strong, inspiring and thought-provoking leadership
  • A supportive and collaborative environment

Development – access to LinkedIn Learning, a management development programme, and training

Wellness – 24/7 confidential employee assistance programme

Flexible Working – including home working and part time

Social – office parties, breakfast Tuesdays, monthly pizza Thursdays, Thirsty Thursdays, and commitment to charitable causes

Time Off – 25 days of annual leave, plus bank holidays, with the option to buy 5 extra days each year

Volunteering – 2 paid days per year to volunteer in our local communities or within a charity organisation

Pension – Salary Exchange Scheme with 4% employer contribution and 5% employee contribution

Discretionary Company Bonus – based on company and individual performance

Life Assurance – 4 times base salary

Private Medical Insurance – which is non-contributory (spouse and dependants included)

Worldwide Travel Insurance – which is non-contributory (spouse and dependants included)

Enhanced Maternity and Paternity Pay

Travel – season ticket loan, cycle to work scheme

For a full list of benefits please visit our website (www.methods.co.uk/careers/benefits)




