Data Engineer - Toronto Hub

Canada - Toronto

Veeva Systems

Veeva develops cloud software that helps the world’s largest pharmaceutical companies and emerging biotechs bring critical medicines and therapies to the patients who need them. Our enterprise product suite is ubiquitous in the life sciences industry.
Veeva is a ‘Work Anywhere’ company, so you can connect with teams in our Toronto office while also having the flexibility to work from home. And as a Public Benefit Corporation, you will work for a company with purpose, focused on making a positive impact on society.
Veeva is looking for a data engineer to create ETL pipelines for our Veeva Data Cloud product. We’re building a system that gives our customers access to billions of records a day, along with insightful analysis, aggregations, and transformations.
For this role, we need someone who can design flexible data processes and leverage their Python and Scala skillsets to implement them in an AWS cloud environment.
You’ll be responsible for creating and owning the implementation of numerous data analysis features as well as the pipelines that process those features in a multi-tenant, highly parallel system.

What You'll Do

  • Design and build scripts and tools that perform data analysis, transformations, aggregations, and other augmentations on large data sets in a Spark-based AWS environment (EMR, Glue, S3, Redshift, Athena)
  • Evaluate various pipeline models, tools, and environments and implement these to push data from our sources through your transformations and finally to our customers
  • Work with product management and data research teams to prototype and test new ideas, then take them to production
  • Work in a fast-paced, test-driven environment


Requirements

  • BS degree in Computer Science, Engineering, or a related subject
  • 3+ years of experience working on Apache Spark applications in Python (PySpark) and/or Scala
  • Experience creating Spark jobs that process at least 1 billion records
  • Intermediate or greater SQL knowledge
  • Experience creating data pipelines in a production system
  • Experience working on AWS environments (S3, EMR, Glue, Redshift)

Nice to Have

  • Experience working with Data Quality techniques
  • Java development experience
  • Experience working with Machine Learning/AI models
  • Experience with AWS Glue
  • Familiarity with agile methodologies
  • Experience with the following tools: Jira, Git, Terraform

Perks & Benefits

  • Allocations for continuous learning & development
  • Annual budget to donate to the non-profit of your choice
  • Health & wellness programs
Veeva’s headquarters is located in the San Francisco Bay Area with offices in more than 15 countries around the world.
Veeva is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity or expression, religion, national origin or ancestry, age, disability, marital status, pregnancy, protected veteran status, protected genetic information, political affiliation, or any other characteristics protected by local laws, regulations, or ordinances.
Job tags: AI AWS Data pipelines Engineering ETL Java Machine Learning PySpark Python Redshift Research Scala Spark SQL
Job region(s): North America