Senior Data Engineer-Digital Banking Kotak 811-Regional Sales

Bengaluru, Karnataka, India

Kotak Mahindra Bank

Kotak Mahindra Bank offers high-interest-rate savings accounts, low-interest-rate personal loans, and credit cards with attractive offers. Experience new-age Personal Banking and Net Banking with Kotak Bank.


Job Title: Senior Data Engineer

Job Description

As a Senior Data Engineer, you will play a key role in designing and implementing data solutions at Kotak811.

  • You will be responsible for leading data engineering projects, mentoring junior team members, and collaborating with cross-functional teams to deliver high-quality and scalable data infrastructure. 
  • Your expertise in data architecture, performance optimization, and data integration will be instrumental in driving the success of our data initiatives.

Responsibilities

  1. Data Architecture and Design:
    1. Design and develop scalable, high-performance data architecture and data models.
    2. Collaborate with data scientists, architects, and business stakeholders to understand data requirements and design optimal data solutions.
    3. Evaluate and select appropriate technologies, tools, and frameworks for data engineering projects.
    4. Define and enforce data engineering best practices, standards, and guidelines.
  2. Data Pipeline Development & Maintenance:
    1. Develop and maintain robust and scalable data pipelines for data ingestion, transformation, and loading, covering both real-time and batch use cases.
    2. Implement ETL processes to integrate data from various sources into data storage systems.
    3. Optimise data pipelines for performance, scalability, and reliability.
      1. Identify and resolve performance bottlenecks in data pipelines and analytical systems.
      2. Monitor and analyse system performance metrics, identifying areas for improvement and implementing solutions.
      3. Optimise database performance, including query tuning, indexing, and partitioning strategies.
    4. Implement real-time and batch data processing solutions.
  3. Data Quality and Governance:
    1. Implement data quality frameworks and processes to ensure high data integrity and consistency.
    2. Design and enforce data management policies and standards.
    3. Develop and maintain documentation, data dictionaries, and metadata repositories.
    4. Conduct data profiling and analysis to identify data quality issues and implement remediation strategies.
  4. ML Model Deployment & Management (a plus):
    1. Design, develop, and maintain the infrastructure and processes needed to deploy and manage machine learning models in production environments.
    2. Implement model deployment strategies, including containerization and orchestration using tools like Docker and Kubernetes.
    3. Optimise model performance and latency for real-time inference in consumer applications.
    4. Collaborate with DevOps teams to implement continuous integration and continuous deployment (CI/CD) processes for model deployment.
    5. Monitor and troubleshoot deployed models, proactively identifying and resolving performance or data-related issues.
    6. Implement monitoring and logging solutions to track model performance, data drift, and system health.
  5. Team Leadership and Mentorship:
    1. Lead data engineering projects, providing technical guidance and expertise to team members.
      1. Conduct code reviews and ensure adherence to coding standards and best practices.
    2. Mentor and coach junior data engineers, fostering their professional growth and development.
    3. Collaborate with cross-functional teams, including data scientists, software engineers, and business analysts, to drive successful project outcomes.
    4. Stay abreast of emerging technologies, trends, and best practices in data engineering and share knowledge within the team.
      1. Participate in the evaluation and selection of data engineering tools and technologies.
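To make the pipeline responsibilities above concrete, here is a minimal, self-contained sketch of the extract-transform-load flow. All names and the in-memory source/sink are hypothetical; a production pipeline would use Spark, Kafka, and a real data store:

```python
def extract(rows):
    """Ingest raw records from a source (here, an in-memory list)."""
    for row in rows:
        yield row

def transform(records):
    """Normalize fields and drop malformed records."""
    for rec in records:
        if rec.get("amount") is None:
            continue  # data-quality filter: skip incomplete rows
        yield {"account": rec["account"].strip().upper(),
               "amount": round(float(rec["amount"]), 2)}

def load(records):
    """Load into a target store (a plain list stands in for a table)."""
    return list(records)

raw = [{"account": " acc1 ", "amount": "10.456"},
       {"account": "acc2", "amount": None}]
table = load(transform(extract(raw)))
print(table)  # → [{'account': 'ACC1', 'amount': 10.46}]
```

Because each stage is a generator, records stream through one at a time, which is the same shape a batch Spark job or a streaming consumer would take at scale.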
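The data profiling mentioned under Data Quality and Governance can be illustrated with a small sketch that computes per-column missing-value rates; the column names and rows below are made up for illustration:

```python
from collections import Counter

def profile_nulls(rows, columns):
    """Data-profiling sketch: fraction of missing values per column."""
    counts = Counter()
    for row in rows:
        for col in columns:
            if row.get(col) in (None, ""):
                counts[col] += 1
    n = len(rows)
    return {col: counts[col] / n for col in columns}

rows = [{"id": 1, "email": "a@x.com"},
        {"id": 2, "email": None},
        {"id": 3, "email": ""},
        {"id": None, "email": "b@x.com"}]
report = profile_nulls(rows, ["id", "email"])
print(report)  # → {'id': 0.25, 'email': 0.5}
```

A report like this is the starting point for the remediation step: columns whose null rate crosses a threshold get flagged for upstream fixes or pipeline-level defaults.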
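For the data-drift monitoring mentioned under model management, one widely used check is the Population Stability Index (PSI), which compares a live feature distribution against the training baseline. A simplified sketch, assuming equal-width bins derived from the baseline:

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index: compares two samples of one feature.
    Near 0 means no drift; larger values mean the distributions diverge."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = sum(v > e for e in edges)  # which bin v falls into
            counts[idx] += 1
        # small floor avoids log(0) for empty bins
        return [max(c / len(values), 1e-6) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

baseline = [1, 2, 3, 4, 5, 6, 7, 8]
identical = psi(baseline, baseline)
shifted = psi(baseline, [6, 7, 8, 8, 8, 8, 8, 8])
print(identical, shifted)  # identical is 0; shifted is much larger
```

In a monitoring job this would run on a schedule per feature, with alerts when PSI crosses a chosen threshold (0.2 is a common rule of thumb, though the cutoff is a judgment call).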

Qualifications:

  1. 3-5 years' experience and a Bachelor's degree in Computer Science, Engineering, Technology, or a related field required.
  2. Good understanding of streaming technologies such as Kafka and Spark Streaming.
  3. Experience with Enterprise Business Intelligence/data platform sizing, tuning, optimization, and system-landscape integration in large-scale enterprise deployments.
  4. Proficiency in at least one programming language, preferably Java, Scala, or Python.
  5. Good knowledge of Agile, SDLC, and CI/CD practices and tools.
  6. Proven experience with Hadoop, MapReduce, Hive, Spark, and Scala programming, with in-depth knowledge of tuning and optimizing data processing jobs and debugging time-consuming jobs.
  7. Proven experience developing conceptual, logical, and physical data models for Hadoop, relational, EDW (enterprise data warehouse), and OLAP database solutions.
  8. Good understanding of distributed systems.
  9. Experience working extensively in multi-petabyte data warehouse environments.
  10. Experience engineering large-scale systems in a product environment.

 



Tags: Agile Architecture Banking Business Intelligence CI/CD Computer Science Data management Data pipelines Data quality Data warehouse DevOps Distributed Systems Docker Engineering ETL Hadoop Java Kafka Kubernetes Machine Learning ML models Model deployment OLAP Pipelines Python Scala SDLC Spark Streaming

Perks/benefits: Career development

Region: Asia/Pacific
Country: India
Category: Engineering Jobs
