Data Engineer

London, England, United Kingdom

Applications have closed

Accellor

A global consulting firm that aligns people, platforms, and processes to transformational goals and drives continuous evolution through its Digital Continuum framework.


At Accellor, we are a trusted Solution Partner that uses best-of-breed Cloud technology to deliver superior customer engagement and business effectiveness for clients. We bring a deep understanding of the Financial Services, Retail, High Tech, and Healthcare verticals.

We’ve created an atmosphere that encourages curiosity, constant learning, and persistence. We encourage our employees to grow and explore their interests. We cultivate an environment of collaboration, autonomy, and delegation – we know our people have a strong work ethic and a sense of pride and ownership over their work. They are passionate, eager, and motivated – focused on building the perfect solution but never losing sight of the bigger picture.

We are seeking an experienced and dynamic Data Engineer to lead the solutioning and implementation of the data and integration needs of our digital and analytical platforms. You will work closely with other technology partners to build highly available, scalable, and distributed data platforms, using open-source frameworks to process a high volume, high velocity, and wide variety of structured and unstructured data. The role has a strong focus on defining and executing data solutions with a cloud-first (AWS) strategy to deliver an unparalleled consumer experience.


Other responsibilities include designing and delivering data engineering solutions and ETL/ELT pipelines on multi-terabyte SQL/NoSQL databases and data warehouse environments. Strong communication skills, with the ability to help the business define requirements and articulate those requirements as technical architecture, are also essential to this role.

Responsibilities:

  • Lead end-to-end implementation of data and analytics solutions per business needs.
  • Deliver solutions and integrations across on-prem and multi-cloud data and analytics platforms.
  • Design and build frameworks for common data engineering processes such as ingestion, mastering, data cleansing, and data masking.
  • Help grow and mentor team members in the fine art of data engineering and software abstractions.
  • Deliver data architecture and solutions for critical strategic portfolio projects in collaboration with business and IT stakeholders.
  • Collaborate and partner with business domain leads, data scientists, product owners, enterprise architects, and other functional leads to deliver world-class data solutions.
  • Manage technical delivery for projects, design processes, and drive execution with the help of consultants and associates.
  • Define and deliver consistent data engineering standards, methodologies, guidelines, and techniques.
  • Diagnose and address complex problems, including performance and scale issues, and drive them to resolution to meet business initiatives.
  • Assist data analysts and end users across all functional areas in identifying long-term, strategic data needs for the enterprise, defining how the data relates to business processes, and developing conceptual and functional models of the data.
  • Advocate for and assist in adopting new technologies for data management, governance, and usage.
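As an illustration of the kind of reusable framework step described above (data masking alongside cleansing and ingestion), here is a minimal sketch in Python. The field names and salt are hypothetical placeholders, not details from this posting; a production framework would manage the salt as a secret and handle many more types:

```python
import hashlib

# Hypothetical salt; in practice this would come from a secrets manager.
SALT = "example-salt"

def mask_value(value: str) -> str:
    """Deterministically pseudonymize a sensitive value via salted hashing,
    so joins across tables still line up while the raw value stays hidden."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]

def mask_record(record: dict, sensitive_fields: set[str]) -> dict:
    """Return a copy of the record with only the sensitive fields masked."""
    return {
        k: mask_value(v) if k in sensitive_fields else v
        for k, v in record.items()
    }

# Illustrative record: the email is masked, other fields pass through.
row = {"customer_id": "C-42", "email": "jane@example.com", "spend": "120.50"}
masked = mask_record(row, {"email"})
```

Because the masking is deterministic, the same input always yields the same token, which is what makes masked datasets still joinable across tables.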

Requirements

  • Bachelor’s degree in Computer Science, Technology, or a related field.
  • 8+ years of data engineering experience.
  • 6+ years of experience with SQL or NoSQL databases such as Snowflake, Postgres, Oracle, or MongoDB.
  • 4+ years of experience with cloud platforms such as AWS, Azure, or Google Cloud.
  • Certification in cloud platforms and/or data/analytics technology.
  • Minimum 5 years of experience with ETL/ELT design and development using tools like IICS, AWS Glue, Talend, Databricks, Oracle Data Integrator (ODI) or equivalent.
  • Minimum 3 years of experience with programming languages such as Node.js, Python, Java or Scala.
  • Minimum 2 years of experience building applications with serverless architecture on the AWS platform using Athena, Glue, Kafka, EC2, Lambda, and Kinesis.
  • Minimum 2 years of experience with large-scale data processing platforms such as Spark and EMR, and/or HPC experience with schedulers such as Apache Aurora or Slurm.
  • Experience working with RESTful APIs and general service-oriented architectures.
  • Experience with DevOps, Continuous Integration and Continuous Delivery technologies is desirable.
  • Automation scripting experience is required.
  • Excellent verbal and written communication skills.
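To make the ETL/ELT expectation above concrete, here is a minimal extract-transform-load sketch in plain Python. The feed, schema, and in-memory SQLite target are illustrative stand-ins for the tools the posting names (Glue, Talend, Databricks, etc.); real pipelines would read from and write to managed sources:

```python
import csv
import io
import sqlite3

# Hypothetical CSV feed standing in for a raw source extract.
RAW_FEED = """order_id,amount,currency
1001,250.00,GBP
1002,,GBP
1003,99.50,USD
"""

def extract(feed: str) -> list[dict]:
    """Extract: parse the raw feed into records."""
    return list(csv.DictReader(io.StringIO(feed)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: drop rows with missing amounts and cast types (basic cleansing)."""
    return [
        (int(r["order_id"]), float(r["amount"]), r["currency"])
        for r in rows
        if r["amount"]
    ]

def load(rows: list[tuple]) -> int:
    """Load: write cleansed rows into a warehouse-style table (SQLite here)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, currency TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    conn.close()
    return count

loaded = load(transform(extract(RAW_FEED)))
print(loaded)  # → 2 (the row with a missing amount is dropped)
```

The same three-stage shape scales up directly: swap the extract for an S3/Athena read, the transform for Spark or Glue jobs, and the load for a warehouse write.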


Tags: APIs Architecture Athena AWS AWS Glue Azure Computer Science Databricks Data management Data warehouse DevOps EC2 ELT Engineering ETL HPC Java Kafka MongoDB Node.js NoSQL Open Source Oracle Pipelines PostgreSQL Python Scala Snowflake Spark SQL Talend Unstructured data

Perks/benefits: Career development

Region: Europe
Country: United Kingdom
Category: Engineering Jobs
