
KR61-EU Senior Data Engineer

Department: Data

Employment Type: Full Time

Location: Remote Europe

Reporting To: Rishabh Gupta

Description

Overview

At Cybernetic Controls Limited (CCL), we are committed to being a global leader in providing innovative digital solutions that empower businesses to reach their full potential. As a remote-first company, we believe in empowering our employees to work in a way that best suits their individual needs, fostering a culture of flexibility and trust. Since our founding in 2020, we have successfully delivered high-quality resources to our clients in the FinTech sector across various business areas.

Our Client

We are a multi-award-winning RegTech company on a mission to transform the quality of regulatory reporting in the financial services industry. We’ve combined regulatory expertise with advanced technology to develop our market-leading quality assurance services. Unique in being able to fully assess data quality, our services are used by some of the world’s largest investment banks, asset managers, hedge funds and brokers, helping them to reduce costs, improve quality and increase confidence in their regulatory reporting.

Job Summary

Our client is seeking a Senior Data Engineer to join their fast-growing team. The successful candidate will join the testing team to work on ETL and development tasks. This is an exciting and challenging opportunity to build out new pipelines that combine and process large amounts of structured data from a variety of sources, with the power of PySpark at your fingertips.

Key Responsibilities

  • Architect and build pipelines using AWS cloud computing solutions that make data available with robustness, maintainability, efficiency, scalability, availability and security. 
  • Develop Python and/or Spark code that implements complex data transformations (preferably PySpark; Spark with Scala is also valuable); a brief sketch of this kind of transformation follows this list. 
  • Design and maintain databases and APIs for storage and transmission of data between applications. 
  • Monitor pipelines in production (and develop tools to facilitate this). 
  • Work collaboratively with other team members (brainstorming, troubleshooting, and code review). 
  • Liaise with other development teams to ensure the integrity of data pipelines.
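
To give a concrete flavour of the transformation work described above, here is a minimal, hypothetical PySpark sketch. The S3 paths, schema and column names (execution_ts, counterparty, notional) are illustrative assumptions, not details from this posting; the sketch simply shows the filter/derive/aggregate pattern typical of such pipelines.

    # Illustrative sketch only: paths, columns and schema are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("transformation-sketch").getOrCreate()

    # Read structured source data, e.g. Parquet files landed in S3.
    trades = spark.read.parquet("s3://example-bucket/raw/trades/")

    # A typical transformation: filter, derive a column, then aggregate.
    daily_notional = (
        trades
        .filter(F.col("status") == "CONFIRMED")
        .withColumn("trade_date", F.to_date("execution_ts"))
        .groupBy("counterparty", "trade_date")
        .agg(
            F.sum("notional").alias("total_notional"),
            F.count("*").alias("trade_count"),
        )
    )

    # Write curated output back to S3 for downstream consumers (e.g. Athena).
    daily_notional.write.mode("overwrite").partitionBy("trade_date").parquet(
        "s3://example-bucket/curated/daily_notional/"
    )

In practice a job like this would typically run under AWS Glue or on a managed Spark cluster, with the input and output locations registered in the Glue Data Catalog.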

Skills, Knowledge & Expertise

Skills:
  • Excellent Python and PySpark programming (including Pandas/PySpark dataframes, web and database connections) 
  • Excellent understanding of ETL processes within Amazon Web Services (AWS) 
  • Apache Spark, AWS Glue, Athena, S3, Step Functions, Lake Formation 
  • Software development lifecycle best practices 
  • Test-driven development (a minimal test sketch follows this list) 
  • Serverless computing (AWS Lambda, API Gateway, SQS, SNS, EventBridge, S3, etc.) 
  • SQL and NoSQL database design and management (DynamoDB, MySQL) 
  • Strong SQL coding skills (Spark SQL, Presto SQL, MySQL, etc.) 
  • Infrastructure as code (CloudFormation) 
  • Experience in shell scripting (preferably on Linux) 
  • Version control with Git/GitHub 
  • Agile principles, processes and tools
  • Excellent written and verbal communication skills. 
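
As a hypothetical illustration of the test-driven development point above, the sketch below unit-tests a small, pure PySpark transformation with pytest; every name here is invented for the example, not taken from the posting.

    # Hypothetical TDD-style sketch: test a pure transformation function.
    import pytest
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    def add_trade_date(df):
        # Derive a trade_date column from an execution timestamp (illustrative).
        return df.withColumn("trade_date", F.to_date("execution_ts"))

    @pytest.fixture(scope="session")
    def spark():
        # Small local session so the test runs without a cluster.
        return SparkSession.builder.master("local[1]").appName("tdd-sketch").getOrCreate()

    def test_add_trade_date(spark):
        df = spark.createDataFrame([("2024-03-01 10:15:00",)], ["execution_ts"])
        result = add_trade_date(df).select("trade_date").first()[0]
        assert str(result) == "2024-03-01"

Keeping transformations as small named functions like this makes them testable in isolation before they are wired into a Glue job or Step Functions workflow.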
Experience: 
  • Designing, deploying and managing complex production data pipelines that interact with a range of data sources (file systems, web, database, users)
  • Strong experience with Amazon Web Services (AWS) 
  • 5 years’ work in the data engineering field
  • At least 3 years’ experience with PySpark and AWS data tools (particularly Glue)
Knowledge: 
  • Data modeling, data pipeline architecture, Big Data implementation. 
  • Software development lifecycle best practices. 
  • Financial knowledge would be an asset.
Qualifications/Training:
  • Bachelor’s degree or equivalent in Computer Science or a related subject. 

What you’ll get in return

  • Competitive salary package 
  • Private healthcare contribution 
  • Annual pay review 
  • Regular team socials 
  • Working within a culture of innovation and collaboration 
  • Opportunity to play a key role in a pioneering growth company
  • Company laptop will be provided