Senior Data Engineer

Pune, MH, IN, 411026

Lear Corporation

Driving superior in-vehicle experiences with cutting-edge automotive technology for vehicles from major automakers worldwide.


Job Description: Senior Data Engineer (Palantir Foundry)

Overview:

As a Senior Data Engineer at Lear, you will take a leadership role in designing, building, and maintaining robust data pipelines within the Foundry platform. Your expertise will drive the seamless integration of data and analytics, ensuring high-quality datasets and supporting critical decision-making processes. If you’re passionate about data engineering and have a track record of excellence, this role is for you!

Responsibilities:

  1. Manage Execution of Data-Focused Projects:
    • As a senior member of the Lear Foundry team, support the design, building, and maintenance of data-focused projects using Lear’s data analytics and application platforms.
    • Participate in projects from conception through root-cause analysis and solution deployment.
    • Understand program and product delivery phases, contributing expert analysis across the lifecycle, and ensure project deliverables are met per the agreed timeline.
  2. Tools and Technologies:
    • Utilize key tools within Palantir Foundry, including:
      • Pipeline Builder: Author data pipelines using a visual interface.
      • Code Repositories: Manage code for data pipeline development.
      • Data Lineage: Visualize end-to-end data flows.
    • Leverage programmatic health checks to ensure pipeline durability.
    • Work with both new and legacy technologies to integrate separate data feeds and transform them into new scalable datasets.
    • Mentor junior data engineers on best practices.
  3. Data Pipeline Architecture and Development:
    • Lead the design and implementation of complex data pipelines.
    • Collaborate with cross-functional teams to ensure scalability, reliability, and efficiency, and use Git for version control and collaborative development.
    • Optimize data ingestion, transformation, and enrichment processes.
  4. Big Data, Dataset Creation and Maintenance:
    • Use Pipeline Builder or Code Repositories to transform big data into manageable, high-quality datasets that meet the organization’s needs.
    • Optimize build times to ensure effective utilization of resources.
  5. High-Quality Dataset Production:
    • Produce and maintain datasets that meet organizational needs.
    • Optimize dataset sizes and build schedules so that datasets reflect the latest information.
    • Implement data quality health checks and validation (see the sketch after this list).
  6. Collaboration and Leadership:
    • Work closely with data scientists, analysts, and operational teams.
    • Provide technical guidance and foster a collaborative environment.
    • Champion transparency and effective decision-making.
  7. Continuous Improvement:
    • Stay abreast of industry trends and emerging technologies.
    • Enhance pipeline performance, reliability, and maintainability.
    • Contribute to the evolution of Foundry’s data engineering capabilities.
  8. Compliance and Data Security:
    • Ensure documentation and procedures align with internal practices (ITPM) and Sarbanes-Oxley requirements, and continuously improve them.
  9. Team Development and Collaboration:
    • Mentor junior team members and contribute to their growth.
    • Foster collaboration within cross-functional teams.
    • Share best practices and encourage knowledge sharing.
  10. Quality Assurance & Optimization:
    • Optimize data pipelines and their impact on the resource utilization of downstream processes.
    • Continuously test and improve data pipeline performance and reliability.
    • Optimize system performance for all deployed resources.
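
The responsibilities above refer to pipeline transformations and programmatic data quality health checks. For illustration only, here is a minimal sketch of such a check in plain PySpark (one of the libraries listed under Qualifications). The dataset paths, column names, and the 5% threshold are hypothetical placeholders, and any Foundry-specific wiring (Pipeline Builder or Code Repositories) is deliberately omitted.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("quality-check-sketch").getOrCreate()

    # Hypothetical raw feed; in practice this would be an ingested source dataset.
    raw = spark.read.parquet("/data/raw/plant_telemetry")

    # Transformation: deduplicate and drop rows without a plant identifier.
    clean = (
        raw.dropDuplicates(["record_id"])
           .filter(F.col("plant_id").isNotNull())
    )

    # Health check: fail the build if more than 5% of rows are missing a timestamp.
    total = clean.count()
    missing_ts = clean.filter(F.col("event_ts").isNull()).count()
    if total > 0 and missing_ts / total > 0.05:
        raise ValueError(f"Health check failed: {missing_ts}/{total} rows missing event_ts")

    # Publish the curated dataset for downstream consumers.
    clean.write.mode("overwrite").parquet("/data/curated/plant_telemetry")

In a Foundry deployment, a check of this kind would typically gate the scheduled build so that incomplete data never reaches downstream datasets.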

Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
  • Minimum 5 years of experience in data engineering, ETL, and data integration.
  • Proficiency in Python and libraries such as PySpark, Pandas, and NumPy.
  • Strong understanding of Palantir Foundry and its capabilities.
  • Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka).
  • Excellent problem-solving skills and attention to detail.
  • Effective communication and leadership abilities.
