Lead Data Engineer (PySpark + AWS + Redshift)

Bengaluru, Delhi, Gurgaon, Kolkata, Chennai, Hyderabad, Pune, Indore, Jaipur and Ahmedabad

Applications have closed

Srijan Technologies

Srijan is a digital experience services company that helps organizations, from global Fortune 500s to nonprofits, build transformative digital paths to a better future.


Location: Bengaluru, Delhi, Gurgaon, Kolkata, Chennai, Hyderabad, Pune, Indore, Jaipur and Ahmedabad

About Material

Material is a global strategy, insights, design, and technology partner to companies striving for true customer centricity and ongoing relevance in a digital-first, customer-led world. By leveraging proprietary, science-based tools that enable human understanding, we inform and create customer-centric business models and experiences, and deploy measurement systems, to build transformational relationships between businesses and the people they serve.

About Srijan

Srijan is a global engineering firm that builds transformative digital paths to better futures for organizations all over the world, from Fortune 500 enterprises to nonprofits. Srijan brings advanced engineering capabilities and agile practices to some of the biggest names across FMCG, aviation, telecom, technology, and other industries. We help businesses embrace the digital future with cloud, data, API, and platform-centric technologies and adapt to changing business models and market demands. Srijan leads in Drupal with 350+ Drupal engineers and 80+ Acquia certifications, and is also a Drupal Enterprise Partner and Diamond Certified partner.

 

What you will get:

  1. Competitive Salaries with flexi benefits 
  2. Group Mediclaim Insurance and Personal Accidental Policy
  3. 30+ Paid Leaves in a year 
  4. Quarterly Learning and Development budgets for certifications
Requirements

  • 3-6 years of experience, predominantly in data architecture and data warehousing / data lakes on AWS with Python, Spark, and Redshift.
  • Proven hands-on experience with Spark and Python is a must.
  • 2+ years of developing applications using Python, data warehousing / data lakes, consumer APIs, and real-time data pipelines/streaming.
  • Cloud-native development on AWS is compulsory.
  • Experience working with integration platforms, data warehouses, data lakes, and ETL/ELT loads.
  • Must be strong in coding in at least one of Java, Scala, or Python, and have worked on integration between different source and target systems such as Salesforce, BB CRM, and RDBMS platforms (Oracle, PostgreSQL, MySQL, SQL Server).
  • Extract data from source systems using APIs, web services, or binlog files with AWS Glue, using PySpark or Spark with Scala.
  • Must have experience retrieving data from REST and SOAP APIs.
  • Experience with AWS Redshift.
  • Experience with Kafka streaming.
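The API-extraction requirement above can be sketched in miniature. This is a hedged illustration, not part of the role description: the `data` envelope, the field names, and the `flatten_records` helper are all hypothetical. In a real AWS Glue job the payload would be fetched from the source system (e.g. via `urllib.request` or the vendor's SDK) and the flattened rows loaded into Redshift through a PySpark DataFrame.

```python
import json

def flatten_records(payload: str) -> list[dict]:
    """Parse a JSON API response and flatten each record into a
    flat dict suitable for a warehouse load. The 'data' envelope
    and field names are hypothetical examples."""
    body = json.loads(payload)
    rows = []
    for rec in body.get("data", []):
        rows.append({
            "id": rec.get("id"),
            "name": rec.get("name"),
            # nested address object flattened to a top-level column
            "city": rec.get("address", {}).get("city"),
        })
    return rows

# Shown here with a canned response instead of a live API call;
# a Glue job would follow with spark.createDataFrame(rows) and a
# write to Redshift.
sample = '{"data": [{"id": 1, "name": "A", "address": {"city": "Pune"}}]}'
print(flatten_records(sample))
# → [{'id': 1, 'name': 'A', 'city': 'Pune'}]
```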


Tags: Agile APIs AWS Data pipelines Data Warehousing ELT Engineering ETL Kafka MySQL Oracle Pipelines PostgreSQL PySpark Python RDBMS Redshift Scala Spark SQL Streaming

Perks/benefits: Career development

Region: Asia/Pacific
Country: India
