Senior Data Operations Engineer - Toronto Hub

Canada - Toronto

Full Time · Senior-level / Expert
Veeva Systems

Posted 1 month ago

Veeva develops cloud software that helps the world’s largest pharmaceutical companies and emerging biotechs bring critical medicine and therapies to the patients that need them. Our enterprise product suite is ubiquitous in the life sciences industry. 
Veeva is a ‘Work Anywhere’ company, so you can connect with teams in our Toronto office while also having the flexibility to work from home. And as a Public Benefit Corporation, you will work for a company with purpose, one focused on making a positive impact on society.
As a Senior Data Operations Engineer on the Veeva Data Cloud team, you will support the architecture design and implementation of a large-scale data pipeline. You’ll primarily focus on designing and building systems that manage data collection, storage, and processing in a productized way to bring enterprise solutions to market.

What You'll Do

  • Design, build, scale, and evolve data pipelines and platforms, including implementing ETL/ELT pipelines, from the ground up
  • Integrate Big Data tools and frameworks to support the product
  • Build logic throughout data products, from transformations and normalizations to data science methodologies
  • Enable supporting teams, such as data science, services, and QA, by providing data in a productized way
  • Collaborate with the Product Management team to identify opportunities to improve scalability, monitoring, accuracy, and delivery
  • Develop alarms and metrics to monitor software running in the production environment


Requirements

  • 10+ years of hands-on data engineering experience
  • 5+ years working in an Agile development environment
  • 5+ years of working experience with AWS and services such as EMR, AWS Glue, S3, Aurora RDS, and DMS
  • Proven experience designing and building Big Data solutions in the cloud (tens of billions of records)
  • Proven experience designing and building large-scale Data Warehouse solutions using AWS Redshift
  • Extensive hands-on experience with Python, PySpark, shell scripting, and SQL
  • Proficient in ETL frameworks (e.g., AWS EMR, AWS Glue, Hadoop, Hive)
  • Strong understanding of data design techniques and principles
  • Proficient in NoSQL and relational databases
  • Experience with application monitoring tools such as Datadog, Splunk, and SignalFx
  • Strong attention to detail and focus on delivering a high-quality product


Nice to Have

  • Proficient in Java or Scala
  • Proficient in Elasticsearch
  • Experience in system integration
  • Experience creating and testing synthetic datasets
  • Experience with Terraform
Veeva’s headquarters is located in the San Francisco Bay Area with offices in more than 15 countries around the world.
Veeva is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity or expression, religion, national origin or ancestry, age, disability, marital status, pregnancy, protected veteran status, protected genetic information, political affiliation, or any other characteristics protected by local laws, regulations, or ordinances.
Job tags: AWS Big Data Data pipelines Engineering ETL Hadoop Java NoSQL PySpark Python Redshift Scala Splunk SQL
Job region(s): North America