Data Engineer

Chennai, Tamil Nadu, India

Ford Motor Company

Since 1903, we have helped to build a better world for the people and communities that we serve. Welcome to Ford Motor Company.


This individual will be responsible for creating products to host Supply Chain Analytics algorithms. We are looking for someone with full-stack experience, even if their specialty is in one area. This person should have 5-7+ years of experience with software engineering and testing, experience working in an Agile environment, and experience with Rally. This person will interact with Ford leadership, needs strong communication skills (both written and oral), and should be comfortable taking initiative rather than waiting to be told what to do.

  • Develop EL/ELT/ETL pipelines to make data from disparate batch and streaming sources available in the BigQuery analytical data store for the Business Intelligence and Analytics teams.
  • Work with on-prem data sources (Hadoop, SQL Server), understand the data model and the business rules behind the data, and build data pipelines (on GCP) for one or more Ford verticals. This data will be landed in GCP BigQuery.
  • Build cloud-native services and APIs to support and expose data-driven solutions.
  • Partner closely with our data scientists to ensure the right data is made available in a timely manner to deliver compelling and insightful solutions.
  • Design, build and launch shared data services to be leveraged by the internal and external partner developer community.
  • Build out scalable data pipelines, choosing the right tool for the job; manage, optimize, and monitor those pipelines.
  • Provide extensive technical, strategic advice and guidance to key stakeholders around data transformation efforts. Understand how data is useful to the enterprise.
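The kind of EL/ELT/ETL work described above can be sketched as a minimal batch pipeline in plain Python. This is purely illustrative, not Ford's actual stack: the record format, field names, and the in-memory "table" standing in for a BigQuery sink are all hypothetical.

```python
import json

def extract(raw_lines):
    """Parse newline-delimited JSON records from a batch source (hypothetical format)."""
    return [json.loads(line) for line in raw_lines if line.strip()]

def transform(records):
    """Apply a sample business rule: keep shipped orders and normalize the quantity field."""
    return [
        {"order_id": r["order_id"], "qty": int(r["qty"])}
        for r in records
        if r.get("status") == "shipped"
    ]

def load(rows, sink):
    """Append rows to a destination table (a list stands in for an analytical store here)."""
    sink.extend(rows)
    return len(rows)

# Example run against in-memory data.
raw = [
    '{"order_id": "A1", "qty": "3", "status": "shipped"}',
    '{"order_id": "A2", "qty": "5", "status": "pending"}',
]
table = []
loaded = load(transform(extract(raw)), table)
```

In practice each stage would map to managed services (e.g. Pub/Sub or GCS as the source, Dataflow for the transform, BigQuery as the sink), but the extract/transform/load separation stays the same.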

Required Skills:

  • Bachelor's degree in Computer Science, Computer Engineering, Data Science, Analytics, or related field or a combination of education and equivalent experience.
  • 3+ years of experience with SQL, Python & Java.
  • 4+ years of experience with GCP cloud services (Dataflow, BigQuery & Pub/Sub).
  • 3+ years of experience building out data pipelines from scratch in a highly distributed and fault-tolerant manner.

Desired Skills:

  • Experience with GCP cloud services including BigQuery, Cloud Composer, Dataflow, CloudSQL, GCS, Cloud Functions and Pub/Sub.
  • 1+ years of experience with Hive, Spark, Scala, JavaScript.
  • Proven track record of building applications in a data-focused role (Cloud and Traditional Data Warehouse).
  • Inquisitive, proactive, and interested in learning new tools and techniques.
  • Familiarity with big data and machine learning tools and platforms. Comfortable with open-source technologies including Apache Spark, Hadoop, Kafka.
  • Strong oral, written and interpersonal communication skills.
  • Comfortable working in a dynamic environment where problems are not always well-defined.



Region: Asia/Pacific
Country: India
Category: Engineering Jobs
