Sr. Data Engineer

Chicago, IL, United States

Applications have closed

Zscaler

Zscaler is the leader in cybersecurity and zero trust digital transformation. Transform your IT and security needs with the best CASB and SASE solutions.


Company Description

Zscaler (NASDAQ: ZS) accelerates digital transformation so that customers can be more agile, efficient, resilient, and secure. The Zscaler Zero Trust Exchange is the company’s cloud-native platform that protects thousands of customers from cyberattacks and data loss by securely connecting users, devices, and applications in any location. 

With more than 10 years of experience developing, operating, and scaling the cloud, Zscaler serves thousands of enterprise customers around the world, including 450 of the Forbes Global 2000 organizations. In addition to protecting customers from damaging threats, such as ransomware and data exfiltration, it helps them slash costs, reduce complexity, and improve the user experience by eliminating stacks of latency-creating gateway appliances. 

Zscaler was founded in 2007 with a mission to make the cloud a safe place to do business and a more enjoyable experience for enterprise users. Zscaler’s purpose-built security platform puts a company’s defenses and controls where the connections occur—the internet—so that every connection is fast and secure, no matter how or where users connect or where their applications and workloads reside.

Job Description

Position: Sr. Data Engineer
Location: Remote within United States

About the team: As part of the IT group, we are responsible for executing our enterprise data strategy, which emphasizes data management maturity, fosters a robust data culture, and architects a best-in-class enterprise data platform. Our ultimate goal is to provide trusted data and insights at scale that enable corporate and functional data-driven decision making. We are fueled by organic innovation, internal collaboration, and the adoption of data visualization, data management, reporting automation, AI/ML, and integration tools. We leverage best practices and alignment through our Enterprise Data Community to deliver speed to insight, scale, control, and enablement. The team is distributed across the United States and India and is composed of data engineers, data analysts, visualization developers, and infrastructure specialists.

Responsibilities/What You’ll Do:

  • Collaborate with data and technical architects, integration, and engineering teams to capture inbound/outbound data pipeline requirements, then conceptualize and develop solutions.
  • Support the evaluation and implementation of current and future data applications and technologies to meet evolving Zscaler business needs.
  • Collaborate with IT business engagement and applications engineering teams, enterprise data engineering, and business data partner teams to identify data source requirements.
  • Profile and quantify the quality of data sources, develop tools to prepare data, and build data pipelines for integration into Zscaler’s Snowflake data warehouse.
  • Continuously optimize existing data integrations, data models, and views while developing new features and capabilities to meet our business partners’ needs.
  • Work with the Principal Engineer/Data Platform Lead to design and implement data management standards and best practices.
  • Continue to learn and develop next-generation technology and data capabilities that enhance our data engineering solutions.
  • Develop large-scale, mission-critical data pipelines using modern cloud and big data architectures.

Qualifications

Qualifications/Your Background:

  • 8+ years of experience in data warehouse design and development.
  • Proficiency in building data pipelines that integrate business applications (Salesforce, NetSuite, Google Analytics, etc.) with Snowflake.
  • Strong hands-on experience with modern data stack tools such as Fivetran, Matillion, and dbt.
  • Must have proficiency in SQL and dimensional data modeling techniques, and be able to write structured, efficient queries on large data sets.
  • Must have hands-on experience using Python to extract data from APIs and build data pipelines.
  • Solid understanding of advanced Snowflake concepts such as Streams, Tasks, warehouse optimization, and SQL tuning/pruning.
  • Familiarity with data mesh-style architecture.
  • Experience building data orchestration workflows with open-source tools such as Apache Airflow or Prefect is a plus.
  • Experience using data catalog tools is a plus.
  • Knowledge of data visualization tools such as Tableau and/or Power BI.
  • Must demonstrate strong analytical skills, be detail-oriented, work well in a team, and be able to manage multiple projects simultaneously.

Additional Information

All your information will be kept confidential according to EEO guidelines.

#LI-YC2  

#LI-Remote

What You Can Expect From Us:

  • An environment where you will work on cutting-edge technologies and architectures
  • A fun, passionate and collaborative workplace
  • Competitive salary and benefits, including equity

Why Zscaler?

People who excel at Zscaler are smart, motivated, and share our values. Ask yourself: Do you want to team with the best talent in the industry? Do you want to work on disruptive technology? Do you thrive in a fluid work environment? Do you appreciate a company culture that enables individual and group success and celebrates achievement? If you said yes, we’d love to talk to you about joining our award-winning team.

Additional information about Zscaler (NASDAQ: ZS) is available at https://www.zscaler.com

Zscaler is proud to be an equal opportunity and affirmative action employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status.
 


