Databricks/Spark/Scala Senior Data Engineer - Pipeline

Redditch, United Kingdom

Applications have closed

Version 1


Company Description

We pledge "to prove IT can make a real difference to our customers' businesses". We work hard to understand what our customers need from their technology solutions, and then we deliver.

We are an award-winning company that provides world-class customer service; we think big and we hire great people. Version 1 are more than just another IT services company - we are leaders in implementing and supporting Oracle, Microsoft and AWS technologies.

Invest in us and we’ll invest in you; if you are driven, committed and up for a challenge, we want to meet you.

Job Description

This is an exciting opportunity for an experienced developer of large-scale data solutions. You will join a team delivering a transformative cloud-hosted data platform for a key Version 1 customer.

The ideal candidate will have a proven track record as a senior, self-starting data engineer implementing data ingestion and transformation pipelines for large-scale organisations. We are seeking someone with deep technical skills across a variety of technologies, specifically Spark performance tuning and optimisation and Databricks, to play an important role in developing and delivering early proofs of concept and production implementations.

You will ideally have experience building solutions using a variety of open-source tools and Microsoft Azure services, and a proven track record of delivering high-quality work to tight deadlines.

Your main responsibilities will be:

  • Designing and implementing highly performant data ingestion & transformation pipelines from multiple sources using Databricks and Spark/Scala
  • Building streaming and batch processes in Databricks
  • Providing technical guidance for complex geospatial problems and Spark DataFrames
  • Developing scalable and re-usable frameworks for ingestion and transformation of large data sets
  • Designing and implementing data quality systems and processes
  • Integrating the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is maintained at all times
  • Working with other members of the project team to support delivery of additional project components (Reporting tools, API interfaces, Search)
  • Evaluating the performance and applicability of multiple tools against customer requirements
  • Working within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints.
  • Spark performance tuning and optimisation
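As an illustration of the kind of work the responsibilities above describe, here is a minimal Spark/Scala sketch of a batch ingestion and transformation step as it might run on Databricks. All paths, column names and the Delta output format are hypothetical assumptions, not details from this role.

```scala
// Minimal batch ingestion/transformation sketch in Spark/Scala.
// All paths and column names below are hypothetical examples.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object IngestionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ingestion-sketch")
      .getOrCreate()

    // Read raw source data from a hypothetical landing zone
    val raw = spark.read
      .option("header", "true")
      .csv("/mnt/landing/source_system/events.csv")

    // Basic cleansing: drop null keys, deduplicate, stamp ingestion time
    val cleaned = raw
      .filter(col("event_id").isNotNull)
      .dropDuplicates("event_id")
      .withColumn("ingested_at", current_timestamp())

    // Write to a curated zone as Delta (common on Databricks)
    cleaned.write
      .format("delta")
      .mode("overwrite")
      .save("/mnt/curated/events")

    spark.stop()
  }
}
```

On Databricks the `SparkSession` is provided for you as `spark`, so only the read/transform/write body would typically live in a notebook or job.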

Qualifications

  • Direct experience of building data pipelines using Azure Data Factory and Databricks Spark using Scala
  • Fluent in Scala, Python, Java
  • Experience working with structured and unstructured data including imaging & geospatial data.
  • Experience of working with relational databases: (SQL Server, PostgreSQL)
  • Hands on experience designing and delivering solutions using the Azure Data Analytics platform including Azure Storage, Azure SQL Database, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, Azure Stream Analytics
  • Experience building data warehouse solutions using ETL / ELT tools such as SQL Server Integration Services (SSIS), Oracle Data Integrator (ODI), Talend.
  • Experience with Azure Event Hubs, IoT Hub, Apache Kafka, or NiFi for use with streaming data / event-based data
  • Experience with Open Source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4J)
  • Comprehensive understanding of data management best practices, including demonstrated experience with data profiling, sourcing, and cleansing routines utilising typical data quality functions involving standardisation, transformation, rationalisation, linking and matching.
  • Databricks certification
  • Microsoft Azure Big Data Architecture certification.

Additional Information

Before you apply, here are some of our benefits. We offer profit share, pension, private medical, flexible working policy and more. We offer incentives for accreditations and educational assistance for courses relevant to your role.

We offer employee recognition in the form of Excellence Awards and V1Ps, which are awarded by your peers. Engagement is incredibly important to us, with local teams driving our engagement events!


Tags: Agile APIs Architecture AWS Azure Big Data Cassandra Cosmos DB Data Analytics Databricks Data management Data quality Data warehouse DevOps ELT ETL Java Kafka MongoDB Neo4j NiFi NoSQL Open Source Oracle Pipelines PostgreSQL Python RDBMS Scala Spark SQL SSIS Streaming Talend Unstructured data

Perks/benefits: Flex hours Team events

Region: Europe
Country: United Kingdom
Category: Engineering Jobs
