Senior/Lead Data Engineer - AWS Glue

Gurgaon

Srijan Technologies

Srijan is a digital experience services company that helps organizations, from global Fortune 500s to nonprofits, build transformative digital paths to a better future.


About Us:

We turn customer challenges into growth opportunities.

Material is a global strategy partner to the world’s most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences.

We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve.

Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using its deep technology expertise and strategic partnerships with top-tier technology partners. Be a part of an Awesome Tribe.

Position Overview:

We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have extensive experience with AWS Glue, Apache Airflow, Kafka, SQL, Python, and DataOps tools and technologies; knowledge of SAP HANA and Snowflake is a plus. This role is central to designing, developing, and maintaining our client’s data pipeline architecture and ensuring the efficient, reliable flow of data across the organization.

Key Responsibilities:

  • Design, Develop, and Maintain Data Pipelines:
    • Develop robust and scalable data pipelines using AWS Glue, Apache Airflow, and other relevant technologies.
    • Integrate various data sources, including SAP HANA, Kafka, and SQL databases, to ensure seamless data flow and processing.
    • Optimize data pipelines for performance and reliability.
  • Data Management and Transformation:
    • Design and implement data transformation processes to clean, enrich, and structure data for analytical purposes.
    • Utilize SQL and Python for data extraction, transformation, and loading (ETL) tasks.
    • Ensure data quality and integrity through rigorous testing and validation processes.
  • Collaboration and Communication:
    • Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet their needs.
    • Collaborate with cross-functional teams to implement DataOps practices and improve data life cycle management.
  • Monitoring and Optimization:
    • Monitor data pipeline performance and implement improvements to enhance efficiency and reduce latency.
    • Troubleshoot and resolve data-related issues, ensuring minimal disruption to data workflows.
    • Implement and manage monitoring and alerting systems to proactively identify and address potential issues.
  • Documentation and Best Practices:
    • Maintain comprehensive documentation of data pipelines, transformations, and processes.
    • Adhere to best practices in data engineering, including code versioning, testing, and deployment procedures.
    • Stay up-to-date with the latest industry trends and technologies in data engineering and DataOps.
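The ETL responsibilities above can be sketched in miniature. The example below is a hypothetical, simplified pipeline that uses Python's built-in sqlite3 as a stand-in for a production SQL source; in a real deployment the extract and load steps would target services such as AWS Glue, SAP HANA, or Snowflake, and the table and column names here are illustrative only.

```python
import sqlite3

def extract(conn):
    # Extract: pull raw rows via SQL (stand-in for a Glue/HANA/Snowflake source).
    return conn.execute("SELECT order_id, amount, country FROM orders").fetchall()

def transform(rows):
    # Transform: drop invalid rows and enrich with a normalized country code.
    cleaned = []
    for order_id, amount, country in rows:
        if amount is None or amount < 0:
            continue  # basic data-quality check
        cleaned.append((order_id, round(amount, 2), country.strip().upper()))
    return cleaned

def load(conn, rows):
    # Load: write the curated rows to an analytics table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders_clean "
        "(order_id INTEGER, amount REAL, country TEXT)"
    )
    conn.executemany("INSERT INTO orders_clean VALUES (?, ?, ?)", rows)

# Demo with an in-memory database and made-up sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, country TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 19.99, " in "), (2, -5.0, "us"), (3, 42.5, "de")],
)
load(conn, transform(extract(conn)))
clean = conn.execute("SELECT * FROM orders_clean ORDER BY order_id").fetchall()
```

The negative-amount row is filtered out by the quality check, leaving only the two valid, enriched records in the curated table.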

Required Skills and Qualifications:

  • Technical Expertise:
    • Extensive experience with AWS Glue for data integration and transformation.
    • Proficient in Apache Airflow for workflow orchestration.
    • Strong knowledge of Kafka for real-time data streaming and processing.
    • Advanced SQL skills for querying and managing relational databases.
    • Proficiency in Python for scripting and automation tasks.
    • Experience with SAP HANA for data storage and management.
    • Familiarity with DataOps tools and methodologies for continuous integration and delivery in data engineering.
  • Preferred Skills:
    • Knowledge of Snowflake for cloud-based data warehousing solutions.
    • Experience with other AWS data services such as Redshift, S3, and Athena.
    • Familiarity with big data technologies such as Hadoop, Spark, and Hive.
  • Soft Skills:
    • Strong analytical and problem-solving skills.
    • Excellent communication and collaboration abilities.
    • Detail-oriented with a commitment to data quality and accuracy.
    • Ability to work independently and manage multiple projects simultaneously.
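The orchestration skills above (Airflow, DataOps) center on running tasks in dependency order. As a rough illustration of the idea rather than Airflow's actual API, a toy DAG can be ordered with Python's standard-library graphlib; the task names are hypothetical:

```python
from graphlib import TopologicalSorter

# Toy model of DAG-style orchestration (what Airflow provides at scale):
# each task runs only after all of its upstream dependencies have completed.
dag = {
    "extract": [],
    "transform": ["extract"],
    "quality_check": ["transform"],
    "load": ["quality_check"],
}

def run(name, log):
    log.append(name)  # placeholder for real work, e.g. triggering a Glue job

log = []
for task in TopologicalSorter(dag).static_order():
    run(task, log)
```

For this linear chain there is exactly one valid order, so the tasks execute extract → transform → quality_check → load.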


Qualifications:

Education and Experience

  • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
  • 4+ years of experience in data engineering or a related role.
  • Proven track record of designing and implementing complex data pipelines and workflows.

Why work for Material

In addition to fulfilling, high-impact work, company culture and benefits are integral to determining whether a job is the right fit for you. Here’s a bit about who we are and highlights of what we offer.

Who We Are & What We Care About:

  • Material is a global company and we work with best-in-class brands worldwide. We also create and launch new brands and products, putting innovation and value creation at the center of our practice. Our clients are at the top of their class, across industry sectors from technology to retail, transportation, finance, and healthcare.
  • Material employees join a peer group of exceptionally talented colleagues across the company, the country, and even the world. We develop capabilities, craft, and leading-edge market offerings across seven global practices, including strategy and insights, design, data & analytics, technology, and tracking. Our engagement management team makes it all hum for clients.
  • We prize inclusion and interconnectedness. We amplify our impact through the people, perspectives, and expertise we engage in our work. Our commitment to deep human understanding, combined with a science and systems approach, uniquely equips us to bring a rich frame of reference to our work.
  • A community focused on learning and making an impact: Material is an outcomes-focused company. We create experiences that matter, create new value, and make a difference in people's lives.

What We Offer:

  • Professional Development and Mentorship.
  • Hybrid work mode with a remote-friendly workplace (Great Place To Work Certified six times in a row).
  • Health and Family Insurance.
  • 40+ leaves per year, along with maternity and paternity leave.
  • Wellness, meditation, and counselling sessions.




Region: Asia/Pacific
Country: India
