Data Engineer - Toronto Hub

Canada - Toronto

Veeva Systems

Posted 1 week ago

Our engineering and product teams are organized around hubs for community and collaboration. Work anywhere means you can work from home or the office on any given day. Your product hub is based on the primary location of your product, and you should live within one time zone of your product hub. Our current product hubs are Pleasanton, Columbus, Boston, NYC, Raleigh, and Toronto.
Veeva is looking for a data engineer to create ETL pipelines for our Veeva Data Cloud product. We’re building a system that gives our customers access to billions of records a day, along with insightful analysis, aggregations, and transformations.
For this role, we need someone who can design flexible data processes and leverage their Python and Scala skillsets to implement them in an AWS cloud environment.
You’ll be responsible for creating and owning the implementation of numerous data analysis features as well as the pipelines that process those features in a multi-tenant, highly parallel system.

What You'll Do

  • Design and build scripts and tools that perform data analysis, transformations, aggregations, and other augmentations on large data sets in a Spark-based AWS environment (EMR, Glue, S3, Redshift, Athena)
  • Evaluate pipeline models, tools, and environments, and implement them to move data from our sources through your transformations and on to our customers
  • Work with product management and data research teams to prototype and test new ideas, then take them to production
  • Work in a fast-paced, test-driven environment

Requirements

  • BS degree in Computer Science, Engineering or related subject
  • 3+ years of experience working on Apache Spark applications in Python (PySpark) and/or Scala
  • Experience creating Spark jobs that process at least 1 billion records
  • Intermediate or greater SQL knowledge
  • Experience creating data pipelines in a production system
  • Experience working on AWS environments (S3, EMR, Glue, Redshift)

Nice to Have

  • Experience working with Data Quality techniques
  • Java development experience
  • Experience working with Machine Learning/AI models
  • Experience with AWS Glue
  • Familiarity with agile methodologies
  • Experience with the following tools: Jira, Git, Terraform

Perks & Benefits

  • Allocations for continuous learning & development
  • Annual budget to donate to the non-profit of your choice
  • Health & wellness programs
#LI-Remote
Veeva builds enterprise cloud technology that powers the biggest names in the pharmaceutical, biotech, consumer goods, chemical & cosmetics industries. Our customers make vaccines, life-saving medicines, and life-enhancing products that make a difference in everyday lives. Our technology has transformed these industries, enabling them to get critical products and services to market faster. Our core values, Do the Right Thing, Customer Success, Employee Success, and Speed, guide us as we make our customers more efficient and effective in everything they do.
Veeva’s headquarters is located in the San Francisco Bay Area with offices in more than 15 countries around the world.