Data Engineer - Advanced Analytics

Lewisville, TX, US, 75067


Company Information

PACCAR is a Fortune 500 company established in 1905 and a global leader in the commercial vehicle, financial, and customer service fields, with internationally recognized brands such as Kenworth, Peterbilt, and DAF. PACCAR designs, manufactures, and supports premium light-, medium-, and heavy-duty trucks under the Kenworth, Peterbilt, and DAF nameplates, and also provides customized financial services, information technology, and truck parts related to its principal business.


Whether you want to design the transportation technology of tomorrow, support the staff functions of a dynamic, international leader, or build our excellent products and services, you can develop the career you desire with PACCAR. Get started!

Requisition Summary

Join the innovative PACCAR Global Quality team as a Data Engineer, where your role will be central to advancing product reliability through continuous improvement, advanced analytics, and cloud-based solutions. Your work will directly contribute to PACCAR's immediate and strategic future, reinforcing our dedication to setting a high standard within our team. We foster a collaborative workplace that values knowledge sharing and fresh perspectives.

Embrace the chance to become an integral part of our dedicated and growing team, where you can shape the future of quality and innovation at PACCAR. This role is based out of the PACCAR Corporate Office in Lewisville, TX. 


The Data Engineer is responsible for developing and implementing data-driven solutions, designing data pipelines, and leveraging advanced technologies to extract valuable insights from large datasets. They support multiple teams and systems, optimizing PACCAR's data architecture for future products and initiatives. They collaborate with the Global Quality Team's Data Analyst and Data Scientist to ensure a consistent and optimal data delivery architecture. This highly technical position requires expertise in programming, mathematics, and computer science.

Job Functions / Responsibilities

  • Build and support automated data pipelines from a wide range of data sources using a technology stack based on AWS, Informatica, Attunity, and Snowflake, with an emphasis on automation and scale
  • Contribute to overall architecture, framework, and design patterns to store and process high data volumes
  • Develop solutions to measure, improve, and monitor data quality based on business requirements
  • Ensure product and technical features are delivered to spec and on-time
  • Design and implement reporting and analytics features in collaboration with product owners, analysts, and business partners within an Agile / Scrum methodology, using tools like Tableau, Sumo Logic, and Power BI
  • Proactively support product health by building solutions that are automated, scalable, and sustainable – be relentlessly focused on minimizing defects and technical debt
  • Provide post-implementation production support for data pipelines
  • Partner with the business unit organizations to understand their data engineering needs, develop use cases, generate processes, and develop overall solution requirements
  • Travel to PACCAR division locations in support of the above activities (up to 30%)

Qualifications & Skills

Required:

  • 1+ years of experience in a data engineering role with Informatica, Attunity, Snowflake, and AWS services (e.g., EC2, S3, SNS, Lambda, IAM)
  • Proficient in dimensional modeling, ETL (including SSIS), reporting tools, data warehousing (Redshift & Snowflake), structured and unstructured data, relational databases (SQL Server & PostgreSQL), graph databases (Neo4J), and NoSQL databases (MongoDB & DynamoDB)
  • Significant experience with, or deep understanding of, programming (Python, SQL) and big data technologies (Scala, Spark)
  • Must be a highly motivated self-starter and self-directed learner, with high attention to detail and a proven track record of participating in projects in a highly collaborative, multi-disciplinary team environment
  • Candidate must demonstrate strong analytical skills and excellent general business acumen, with the ability to clearly and concisely communicate complex topics
  • Experience with Business Intelligence solutions (including Tableau and Power BI) to create reports and dashboards that surface insights from data
  • Master's degree in Computer Science or a related field required


Preferred:

  • Experience with Continuous Integration/Deployment using GitHub Actions, Terraform, CloudFormation, and AWS CodePipeline
  • Experience with shell scripting and container deployments (Docker, Kubernetes, ECS)
  • Experience with the operationalization and maintenance of analytics APIs using Plumber, Flask, Swagger, and similar tools
  • Experience using GPT and large language models (LLMs), as well as Agile development methodology

Benefits

As a U.S. PACCAR employee, you have a full range of benefit options including:

  • 401k with up to a 5% company match
  • Fully funded pension plan that provides monthly benefits after retirement
  • Comprehensive paid time off – minimum of 10 paid vacation days (additional days are provided with additional seniority/years of service), 12 paid holidays, and sick time
  • Tuition reimbursement for continued education
  • Medical, dental, and vision plans for you and your family
  • Flexible spending accounts (FSA) and health savings account (HSA)
  • Paid short- and long-term disability programs
  • Life and accidental death and dismemberment insurance
  • EAP services including wellness plans, estate planning, financial counseling and more
  • This position is also eligible for a holiday gift.