Data Engineer (Flink, Kafka, Scala)

Bengaluru, Karnataka, India - Remote

FairMoney


FairMoney is a pioneering mobile banking institution specializing in extending credit to emerging markets. Established in 2017, the company currently operates primarily within Nigeria, and it has secured nearly €50 million in funding from renowned global investors, including Tiger Global, DST, and Flourish Ventures. FairMoney maintains a strong international presence, with offices in several countries, including France, Nigeria, Germany, Latvia, the UK, Türkiye, and India.

In alignment with its vision, FairMoney is actively constructing the foremost mobile banking platform and point-of-sale (POS) solution tailored for emerging markets. The journey began with the introduction of a digital microcredit application exclusively available on Android and iOS devices. Today, FairMoney has significantly expanded its range of services, encompassing a comprehensive suite of financial products, such as current accounts, savings accounts, debit cards, and state-of-the-art POS solutions designed to meet the needs of both merchants and agents.


We are building Engineering centres of excellence across multiple regions and are looking for smart, talented, driven engineers. This is a unique opportunity to be part of the core engineering team of a fast-growing fintech poised for more rapid growth in the coming years.
To gain deeper insights into FairMoney's pivotal role in reshaping Africa's financial landscape, we invite you to watch this informative video. 

Role and responsibilities

As a Data Engineer at FairMoney, your responsibilities will include, but are not limited to:

  • Work closely with Data Analysts and Data Scientists to understand business problems and translate requirements into database, ETL, or reporting solutions.
  • Design, build, and maintain the integration of heterogeneous data sources for DW & BI solutions to simplify analysis across products (a minimal ingestion sketch follows this list).
  • Design and implement data models that align with business needs and support data warehousing best practices.
  • Contribute to the ongoing development and optimization of the data warehouse architecture.
  • Implement tools and processes that ensure data quality and freshness with reliable, versioned, and scalable solutions.
  • Identify issues in data flows and improvements to the data stack, which comprises visualization tools (Tableau), a data warehouse (BigQuery), data modelling (dbt), Git, and other in-house and open-source tools.
  • Implement best practices for indexing, partitioning, and query optimization.
  • Stay up-to-date with new technologies in Data Engineering, Analytics, and Data Science.
  • Implement new technologies in production to create a frictionless platform for the data analytics and data science teams, reducing turnaround time from raw data to insights and ML training/serving.
  • Make it easy for business stakeholders to understand the data, and help the organization become data-literate for self-serve analytics.
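
To make the ingestion and integration work above concrete, here is a minimal, illustrative Python sketch of a Kafka-to-BigQuery micro-pipeline. It is not FairMoney's actual implementation: the broker address, topic, table name, and batch size are hypothetical placeholders, and a production job would add schema management, retries, and monitoring.

```python
import json

from confluent_kafka import Consumer   # Kafka client (assumed available)
from google.cloud import bigquery      # BigQuery client (assumed available)

# All connection details and names below are hypothetical placeholders.
consumer = Consumer({
    "bootstrap.servers": "kafka:9092",
    "group.id": "dw-ingest",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["transactions"])            # hypothetical source topic

bq = bigquery.Client()
TABLE_ID = "my-project.raw.transactions"        # hypothetical destination table
BATCH_SIZE = 500

batch = []
try:
    while True:
        msg = consumer.poll(1.0)                # wait up to 1s for the next record
        if msg is None or msg.error():
            continue
        batch.append(json.loads(msg.value()))   # assume JSON-encoded events
        if len(batch) >= BATCH_SIZE:
            # Stream the batch into BigQuery; insert_rows_json returns a list
            # of per-row errors (empty on success).
            errors = bq.insert_rows_json(TABLE_ID, batch)
            if errors:
                raise RuntimeError(f"BigQuery insert failed: {errors}")
            batch.clear()
finally:
    consumer.close()
```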

Requirements

  • 3+ years of experience designing, developing, and maintaining ETL, databases, OLAP schemas, and Public Objects (Attributes, Facts, Metrics, etc.).
  • Hands-on experience designing and implementing data ingestion/integration processes using SQL/Python.
  • Strong proficiency in SQL and experience with database systems (e.g., PostgreSQL, MySQL, or similar).
  • Proficient in developing automated workflows using Airflow or similar tools (see the DAG sketch after this list).
  • Proficiency in data and system architecture, dimensional data modeling for data warehousing and business intelligence, and data governance principles.
  • Good to have: development experience building business applications, with a good command of Bash, Docker, Kubernetes, and cloud platforms.
  • Ability to learn new software and technologies quickly, create prototypes for business use cases, and make them production-ready.
  • Ability to build effective relationships with business stakeholders to help drive the adoption of data-driven decision-making.
  • Excellent problem-solving skills and attention to detail.
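
As a rough illustration of the Airflow proficiency listed above, below is a minimal DAG sketch for Airflow 2.x. The DAG id, schedule, and placeholder extract/load callables are hypothetical stand-ins, not part of the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_transactions(**context):
    # Placeholder: the real task would pull data from a source system
    # for the logical date of the run.
    print("extracting transactions for", context["ds"])


def load_to_warehouse(**context):
    # Placeholder: the real task would load the extracted data into the
    # warehouse (e.g., BigQuery) and could trigger downstream dbt models.
    print("loading transactions for", context["ds"])


with DAG(
    dag_id="daily_transactions_ingest",   # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(
        task_id="extract_transactions",
        python_callable=extract_transactions,
    )
    load = PythonOperator(
        task_id="load_to_warehouse",
        python_callable=load_to_warehouse,
    )

    extract >> load   # run the load only after the extract succeeds
```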

Benefits

  • Training & Development
  • Family Leave (Maternity, Paternity)
  • Paid Time Off (Vacation, Sick & Public Holidays)
  • Remote Work

Recruitment Process

  • A screening interview with a member of the Talent Acquisition team (~30 minutes).
  • A take-home assignment.
  • A technical interview covering data modeling, architecture, and a presentation of the take-home assignment (~45-60 minutes).
  • A final interview with the Head of Data Engineering (~45-60 minutes).



