Data Engineer - Digital Banking, Kotak 811 - Regional Sales

Bengaluru, Karnataka, India

Kotak Mahindra Bank

Kotak Mahindra Bank offers high-interest-rate savings accounts, low-interest-rate personal loans, and credit cards with attractive offers. Experience new-age personal banking and net banking with Kotak Bank.

Job Title: Data Engineer

 

Job Description

As a Data Engineer, you will be part of a dynamic team responsible for developing and maintaining data pipelines, databases, and analytical systems @Kotak811.

 

  • You will work closely with other data engineers, data scientists, and business stakeholders to ensure data integrity, availability, and reliability. 
  • You should have expertise in designing, implementing, and operating stable, scalable solutions that flow data from production systems into the analytical data platform (big data tech stack) and into end-user-facing applications, for both real-time and batch use cases.

 

 

Responsibilities

 

  1. Data Ingestion and Integration:
    1. Collaborate with cross-functional teams to gather data requirements and identify data sources.
    2. Design, develop, and maintain data ingestion pipelines from various structured and unstructured sources.
    3. Implement data integration processes to ensure data quality and consistency.
    4. Monitor and troubleshoot data ingestion issues and optimise performance.
  2. Data Transformation and ETL:
    1. Translate business requirements into technical specifications (data models).
    2. Extract, transform, and load data from diverse sources into appropriate data models and structures.
    3. Develop and maintain ETL workflows and processes using tools such as Apache Spark, Apache Kafka, or other data integration frameworks (a minimal PySpark sketch follows this list).
    4. Cleanse, validate, and transform data to meet business and analytical requirements.
    5. Bring a strong engineering mindset: build automated monitoring, alerting, and self-healing (restartability/graceful failure) features into the consumption pipelines.
  3. Database Management:
    1. Create and maintain scalable and efficient data storage systems, including relational databases, data warehouses, or NoSQL databases.
    2. Optimise database performance through indexing, partitioning, and query tuning.
    3. Implement data security and access controls to ensure data privacy and compliance with industry regulations.
    4. Monitor database health, performance, and capacity, and take proactive measures to ensure optimal system operation.
  4. Data Governance and Documentation:
    1. Ensure adherence to data management policies and standards.
    2. Document data pipelines, ETL processes, and data flows to facilitate knowledge sharing and maintain data lineage.
    3. Contribute to the development and maintenance of data dictionaries, data catalogues, and metadata repositories.
    4. Participate in data quality initiatives and data governance reviews to maintain high data integrity.
  5. Collaboration and Communication:
    1. Collaborate effectively with cross-functional teams, including data scientists, business analysts, and software engineers.
    2. Communicate technical concepts and solutions to both technical and non-technical stakeholders.
    3. Participate in team meetings, code reviews, and knowledge-sharing sessions to foster a collaborative work environment.
    4. Stay up-to-date with emerging technologies, industry trends, and best practices in data engineering.
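To make the ingestion and ETL responsibilities above concrete, here is a minimal PySpark batch sketch of the extract, cleanse, transform, and load shape, with a simple retry-and-alert wrapper for restartability. All paths, column names, and the alerting hook are hypothetical placeholders for illustration, not actual Kotak811 systems.

```python
# Minimal ETL sketch (hypothetical paths and schema, for illustration only)
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_txn_etl").getOrCreate()

def send_alert(message: str) -> None:
    # Placeholder alerting hook; a real pipeline might page on-call
    # or emit a metric to a monitoring system instead.
    print(f"[ALERT] {message}")

def run_with_retries(job, max_attempts: int = 3):
    # Basic restartability: retry the whole job, alerting on each
    # failure and re-raising once attempts are exhausted.
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except Exception as exc:
            send_alert(f"attempt {attempt} failed: {exc}")
            if attempt == max_attempts:
                raise

def daily_txn_etl():
    # Extract: raw JSON events from a hypothetical landing zone.
    raw = spark.read.json("s3://landing-zone/transactions/dt=2024-01-01/")

    # Cleanse and validate: drop malformed rows, enforce types,
    # and de-duplicate on the business key.
    clean = (raw
             .filter(F.col("transaction_id").isNotNull())
             .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
             .dropDuplicates(["transaction_id"]))

    # Transform: derive the partition column the data model expects.
    enriched = clean.withColumn("txn_date", F.to_date("event_time"))

    # Load: write partitioned Parquet into the analytical platform.
    (enriched.write
             .mode("overwrite")
             .partitionBy("txn_date")
             .parquet("s3://analytics-platform/fact_transactions/"))

run_with_retries(daily_txn_etl)
```

The same read-transform-write shape extends to the real-time use cases mentioned above via Spark Structured Streaming with a Kafka source; only the source and sink APIs change.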

 

Qualifications

 

  1. 2-3 years' experience with a Bachelor's degree in Computer Science, Engineering, Technology, or a related field (required).
  2. Proficiency in one or more programming languages such as Python, Java, or Scala.
  3. Good knowledge of Agile and SDLC/CI-CD practices and tools.
  4. Experience with data integration tools and frameworks such as Apache Spark, Apache Kafka, or similar.
  5. Experience in leveraging cloud/OSS data platforms to build scalable data models and data pipelines for analytical, data science, and systemic consumption use cases.
  6. Must have good knowledge of performance tuning and optimising data processing jobs, and of debugging time-consuming jobs (see the tuning sketch after this list).
  7. Good understanding of distributed systems.
  8. Experience working extensively in a multi-petabyte data warehouse environment.

  9. Strong problem-solving skills and attention to detail.
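As a concrete example of the tuning and debugging skills in point 6, a common first step on a slow Spark job is to read the physical plan and eliminate an avoidable shuffle. The table paths and columns below are made up for illustration:

```python
# Tuning sketch (hypothetical tables and columns, for illustration only)
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning_demo").getOrCreate()

orders = spark.read.parquet("s3://warehouse/orders/")      # large fact table
branches = spark.read.parquet("s3://warehouse/branches/")  # small dimension

# Step 1: inspect the physical plan; look for Exchange (shuffle)
# operators and scans that read more columns/partitions than needed.
orders.join(branches, "branch_id").explain()

# Step 2: broadcast the small dimension so the join no longer
# shuffles the large fact table across the cluster.
joined = orders.join(F.broadcast(branches), "branch_id")

# Step 3: cache an intermediate result that several downstream
# aggregations reuse, rather than recomputing it each time.
daily = (joined
         .groupBy("branch_id", "order_date")
         .agg(F.sum("amount").alias("total_amount")))
daily.cache()
daily.count()  # materialise the cache once
```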

Category: Engineering Jobs

Tags: Agile Banking Big Data Computer Science Data governance Data management Data pipelines Data quality Distributed Systems Engineering ETL Java Kafka NoSQL Pipelines Privacy Python RDBMS Scala SDLC Security Spark

Region: Asia/Pacific
Country: India
