Software Engineer II - Python Spark

Bengaluru, Karnataka, India

JPMorgan Chase & Co.


You’re ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you.

As a Software Engineer II at JPMorgan Chase within the Corporate & Investment Bank, Payments Technology, you are part of an agile team that works to enhance, design, and deliver the software components of the firm’s state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.

Job responsibilities

  • Executes standard software solutions, design, development, and technical troubleshooting
  • Writes secure and high-quality code using the syntax of at least one programming language with limited guidance. Collects data and builds data pipelines from a variety of sources, including databases, APIs, external data providers, and batch and real-time streaming sources.
  • Designs, develops, codes, and troubleshoots with consideration of upstream and downstream systems and technical implications. Implement efficient data pipelines to ensure a smooth flow of information into the Data Lakehouse platforms.
  • Applies knowledge of tools within the Software Development Life Cycle toolchain to improve the value realized by automation. Gathers data requirements from business stakeholders, Product, and Operations, and implements the required data pipelines through an Agile process.
  • Applies technical troubleshooting to break down solutions and solve technical problems of basic complexity. Combines raw information from different data sources and explores ways to enhance data quality and reliability. Updates logical or physical data models based on new use cases.
  • Gathers, analyzes, and draws conclusions from large, diverse data sets to identify problems and contribute to decision-making in service of secure, stable application development
  • Learns and applies system processes, methodologies, and skills for the development of secure, stable code and systems. Identifies opportunities for data acquisition and develops analytical tools and programs.
  • Adds to team culture of diversity, equity, inclusion, and respect. Leverages cloud platforms to build scalable and cost-effective data solutions.
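As an illustrative sketch of the pipeline work described above — combining raw records from multiple sources and enforcing basic data-quality checks before loading — here is a minimal pure-Python example. The record shape, field names, and checks are hypothetical; a production job on this team would express the same logic with PySpark DataFrame unions and filters:

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass(frozen=True)
class Payment:
    """Hypothetical cleaned record shape for a payments pipeline."""
    payment_id: str
    amount: float
    currency: str

def combine_sources(*sources: Iterable[dict]) -> list[Payment]:
    """Merge raw rows from several sources (e.g., a database extract and an
    API feed), dedupe by payment_id, and drop rows that fail basic quality
    checks before loading into a lakehouse table."""
    seen: set[str] = set()
    out: list[Payment] = []
    for source in sources:
        for row in source:
            pid = row.get("payment_id")
            # Quality checks: key present and unseen, positive amount,
            # three-letter currency code.
            if not pid or pid in seen:
                continue
            if row.get("amount", 0) <= 0 or len(row.get("currency", "")) != 3:
                continue
            seen.add(pid)
            out.append(Payment(pid, float(row["amount"]), row["currency"]))
    return out

db_rows = [{"payment_id": "p1", "amount": 100.0, "currency": "USD"}]
api_rows = [
    {"payment_id": "p1", "amount": 100.0, "currency": "USD"},  # duplicate
    {"payment_id": "p2", "amount": -5.0, "currency": "EUR"},   # fails check
    {"payment_id": "p3", "amount": 42.5, "currency": "GBP"},
]
cleaned = combine_sources(db_rows, api_rows)
# cleaned keeps p1 and p3 only
```

The same shape — union sources, deduplicate on a key, filter on validity predicates — carries over directly to Spark, where each step becomes a DataFrame transformation.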

Required qualifications, capabilities, and skills

  • Formal training or certification on software engineering concepts and 2+ years applied experience
  • Hands-on experience designing, developing, and testing data applications using Python/Spark, including technical experience with large multi-terabyte data warehouse and data lake/lakehouse systems.
  • Experience across the data lifecycle with batch and real-time data processing using Spark or Flink. Experience with data analysis and the ability to determine appropriate tools and data patterns for data discovery, data mining, and data processing.
  • Demonstrable ability to code in one or more languages. Good understanding of distributed systems and knowledge of streaming technologies such as Apache Kafka for handling and analyzing data.
  • Experience across the whole Software Development Life Cycle. Expert hands-on skills in Python and PySpark. Proficiency in cloud platforms such as AWS and Azure, including working knowledge of AWS Glue and EMR for data processing. Experience working with Databricks and with Python, Java, and PySpark.
  • Experience with CI/CD (e.g., Jenkins). Advanced SQL (e.g., joins and aggregations) and a good understanding of NoSQL databases; knowledge of multiple RDBMS and data warehouses. Strong scripting skills (SQL/Shell/Perl). Knowledge of Agile programming techniques such as test-driven development, BDD, specification by example, Scrum, and Kanban.
  • Exposure to modern engineering practices such as CI/CD, application resiliency, and security.
  • Emerging knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.)
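The "advanced SQL (joins and aggregations)" expectation above can be illustrated with a small, self-contained example using Python's built-in sqlite3 module. The table names and data are invented for demonstration; in this role the same query shape would typically run via Spark SQL against lakehouse tables:

```python
import sqlite3

# In-memory database with hypothetical merchants/payments tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE merchants (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE payments (id INTEGER PRIMARY KEY, merchant_id INTEGER, amount REAL);
    INSERT INTO merchants VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO payments VALUES (1, 1, 100.0), (2, 1, 50.0), (3, 2, 75.0);
""")

# Join + aggregation: total payment volume per merchant, largest first.
rows = conn.execute("""
    SELECT m.name, SUM(p.amount) AS total
    FROM payments p
    JOIN merchants m ON m.id = p.merchant_id
    GROUP BY m.name
    ORDER BY total DESC
""").fetchall()
# rows -> [('Acme', 150.0), ('Globex', 75.0)]
```

Being able to reason about which side of a join drives the row count, and where a filter belongs relative to a GROUP BY, is the kind of SQL fluency the qualification describes.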

Preferred qualifications, capabilities, and skills

  • Strong skills in PySpark and multi-cloud services (AWS, Azure, GCP, etc.)
  • Knowledge of Databricks Lakehouse, Delta Lake, and Delta Live Tables (DLT) is a plus.
  • Experience with one or more of Spring Boot, microservices, Kafka, Cassandra, and API development is preferred.
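Streaming experience of the kind mentioned in the qualifications (Kafka with Spark or Flink) often comes down to windowed aggregations over unbounded event streams. The sketch below simulates a tumbling-window count in plain Python; the event data and keys are hypothetical, and a real job would read from a Kafka consumer and use Spark Structured Streaming's built-in windowing instead:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Group (timestamp, key) events into fixed, non-overlapping windows
    and count events per key per window — the core of many streaming jobs."""
    counts: dict[tuple[int, str], int] = defaultdict(int)
    for ts, key in events:
        # Align each event to the start of its window.
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

# Simulated events: (epoch seconds, payment status) as they might arrive
# from a Kafka topic.
events = [(0, "ok"), (10, "ok"), (65, "fail"), (70, "ok"), (125, "ok")]
windows = tumbling_window_counts(events, window_seconds=60)
# windows -> {(0, 'ok'): 2, (60, 'fail'): 1, (60, 'ok'): 1, (120, 'ok'): 1}
```

The hard parts in production — late-arriving events, watermarks, exactly-once delivery — are what Spark Structured Streaming and Flink add on top of this basic windowing idea.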

JPMorgan Chase & Co., one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.

We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.




