Data Engineer (Gojoko Technologies)

Noida, Uttar Pradesh, India

Amplifi Capital

Amplifi Capital UK. We grow credit unions. We work with the boards and members of credit unions to deliver sustainable growth for their organisations and improve the financial well-being of their members.

🤩 Data Engineer

Building, enhancing and growing the data infrastructure needed to drive our decision science and analytics functions.


🤔 Who are we?

We are a fast-growing UK-based Fintech scale-up offering a full range of financial products (personal loans, credit cards, savings etc.) at the best rates to UK retail consumers. With our two brands – My Community Finance and Reevo – we have already touched the lives of over 75,000 customers and extended £0.5bn (Rs 5,000cr) in credit. We doubled our balance sheet to £350m (Rs 3,500cr) in 2022 and are currently one of the Top 5 lenders in the UK unsecured personal loans market. Backed by a $15bn hedge fund, we have ambitious growth plans for the next couple of years: launching 3 new products and doubling our portfolio size to become the biggest player in the UK near-prime market.

As we continue our lightning-fast growth, we believe that two things are essential to continuing our success: our people and our culture. With a team drawn from UBS, Credit Suisse, Capital One and Barclays, we are over 135 people spread across 3 locations – London, Leeds and Noida – with a culture focused on ownership, collaboration… and fun! We are looking for brilliant people who love building great products, love redefining success, and love turning the complexity of a chaotic world into the simplicity of a beautiful solution.

The Noida Centre of Excellence is a rapidly growing team of IT, Data, Credit Risk and Analytics experts that is integral to driving the company's growth objectives.


🙌 Why do we do what we do?

We are all passionate about this industry and our aim is to grow a select group of Community Lenders into network challenger banks offering a viable alternative to high-cost lenders.


👀 Sounds great, right? What will you be doing?

At Gojoko, we're passionate about the power of data to drive innovation and decision-making at all levels of the organization. We're building a robust, scalable data infrastructure hosted entirely on AWS.

As a Data Engineer, you will be part of the Engineering team responsible for building, enhancing and growing the data infrastructure needed to drive our decision science and analytics functions. We are looking for someone with enthusiasm for the latest data engineering and big data technologies who can thrive in a fast-paced environment.


Responsibilities

• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Python, AWS services (such as Data Pipeline and Glue jobs) and other big data technologies (a minimal Glue job sketch follows this list).

• Prepare data for predictive and inference modelling and discover tasks that can be automated. Build and maintain optimal data pipelines in AWS.

• Assemble large, complex data sets that meet business requirements.

• Keep our data separated and secure across boundaries.

• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

• Create and deploy data tools that help decision science and analytics team members build and optimize our business products, positioning the company as an innovative industry leader.
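
As a rough illustration of the ETL work described above, here is a minimal AWS Glue PySpark job sketch. It runs only inside a Glue job environment, and every specific in it (bucket paths, column names, the loans schema) is a hypothetical placeholder rather than anything from Gojoko's actual stack.

```python
# Minimal AWS Glue PySpark job sketch (all names are hypothetical):
# extract raw loan records from S3, apply basic transformations,
# and load partitioned Parquet for downstream analytics.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: raw CSV loan applications (bucket and schema are assumptions)
raw = spark.read.option("header", "true").csv("s3://example-raw-bucket/loans/")

# Transform: cast types, drop bad rows, derive a partition column
clean = (
    raw.withColumn("loan_amount", F.col("loan_amount").cast("double"))
       .withColumn("application_date", F.to_date("application_date"))
       .filter(F.col("loan_amount") > 0)
       .withColumn("application_year", F.year("application_date"))
)

# Load: partitioned Parquet, queryable via Athena/Redshift Spectrum
(clean.write.mode("overwrite")
      .partitionBy("application_year")
      .parquet("s3://example-curated-bucket/loans/"))

job.commit()
```

Writing the curated output as partitioned Parquet is what keeps the Athena/Redshift Spectrum querying mentioned in the requirements cheap: both engines can prune partitions and read only the columns a query touches.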

Requirements

• 3-5 years of experience in data engineering using RDBMS and building ETL pipelines.

• Python development (pandas, PySpark, NumPy); a small pandas data-preparation sketch follows this list.

• Background in cloud-based data warehouse concepts and designs using AWS.

• SQL

• Experience with Git.

• Experience in developing data solutions in an Agile environment.

• ETL/ELT with AWS Glue, AWS Data Pipeline, AWS Athena/Redshift Spectrum, etc.

• AWS EMR with Spark/Jupyter, or other big data technologies.

• Knowledge of messaging, event, or streaming technology such as Apache Kafka, and of big data platforms such as Snowflake, AWS Redshift, Postgres, MongoDB, Neo4j, dbt and Hadoop will be advantageous.

• Strong communication skills, empathy, and initiative.
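
Similarly, a small sketch of the pandas/NumPy data preparation for modelling mentioned under Responsibilities; the CSV file, column names and the "defaulted" target are invented for illustration.

```python
# Sketch: turning raw loan records into a modelling-ready feature table
# with pandas/NumPy (file name, columns and target are hypothetical).
import numpy as np
import pandas as pd

loans = pd.read_csv("loans_sample.csv", parse_dates=["application_date"])

# Cleaning: drop rows missing the target, cap extreme loan amounts
loans = loans.dropna(subset=["defaulted"])
cap = loans["loan_amount"].quantile(0.99)
loans["loan_amount"] = loans["loan_amount"].clip(upper=cap)

# Feature engineering: log-scale the skewed amount, one-hot encode product type
loans["log_amount"] = np.log1p(loans["loan_amount"])
features = pd.get_dummies(
    loans[["log_amount", "term_months", "product_type"]],
    columns=["product_type"],
)
target = loans["defaulted"].astype(int)

print(features.head())
print(target.value_counts())
```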

Benefits

  • Competitive salary
  • 25 days annual leave
  • Hybrid working model
  • Gratuity (as applicable)
  • Medical insurance
  • Sociable company with WeWork infrastructure
  • Thursday snacks for employees
