ETL Informatica Developer

India - Bengaluru


Capco

Capco is a global management and technology consultancy dedicated to the financial services and energy industries.


Role Description:

Role Overview: Data/ETL developer to support the Relationship Analytics scrum team within the Jefferies "Madison" program (CRM, Data Analytics, and Business Intelligence). The role involves developing and enhancing data mappings for the Jefferies knowledge graph, which powers several applications across the firm, including a proprietary relationship/connection analytics app for Investment Bankers and Sales Traders. You will work in close collaboration with Jefferies graph data engineers, data scientists, software engineers, and product managers to deliver on a range of business use cases. You will also be responsible for cleansing, analyzing, and enhancing data quality in the current application.

Responsibilities:

  • Design, develop and unit test data mappings and workflows using Informatica tools (PowerCenter, Repository Manager, Workflow Manager, etc.) to extract, transform and load data from various source systems (SQL databases) to different target environments (databases, AWS S3 files, etc.)
  • Tune, optimize and troubleshoot ETL batch jobs for performance and throughput in production, QA and UAT environments
  • Work in an Agile environment (2-week sprints)
  • Create technical design documents
  • Collaborate across the full development life cycle with teams including, but not limited to, Data Engineers, Data Scientists, Automation Engineers, Infrastructure Engineers, Quality Engineers and Front-end Engineers

Basic Requirements:

  • Bachelor’s degree or higher in Computer Science or similar academic discipline
  • 3+ years of strong experience with Informatica
  • Sound experience with SQL, including proficiency in writing SQL queries
  • Experience with data extraction programming, source-to-destination mapping and data preparation
  • Familiarity with ETL and relevant data quality checks
  • Excellent problem solving and critical thinking skills
  • Experience with cloud environment setup (AWS, Azure or GCP)

Preferred Skills:

  • Proficiency in programming languages such as Java, Python and Scala
  • Experience with record linkage, data matching and entity resolution
  • Experience with graph technologies, e.g. RDF, RDFS, SPARQL, SHACL, triple stores (AllegroGraph, Neptune, GraphDB, etc.) and property graph stores (Neo4j, TigerGraph, etc.)
  • Familiarity with full stack development lifecycle including Integration and UI development
  • Experience with Spark and Databricks

Tags: Agile AWS Azure Business Intelligence Computer Science Data Analytics Databricks Data quality Engineering ETL GCP Informatica Neo4j Python RDF Scala Scrum Spark SQL

Region: Asia/Pacific
Country: India
Category: Engineering Jobs
