Snowflake started with a clear vision: develop a cloud data platform that is effective, affordable, and accessible to all data users. Snowflake developed an innovative product with a built-for-the-cloud architecture that combines the power of data warehousing, the flexibility of big data platforms, and the elasticity of the cloud at a fraction of the cost of traditional solutions. We are now a global, world-class organization with offices in more than a dozen countries and customers in many more.
We’re looking for a strong Data Engineer to build state-of-the-art data pipelines for Snowflake. In this role, you will work closely with many cross-functional teams to build data pipelines that ingest data into our internal Snowflake environment. This is a strategic, high-impact role that will also help shape the future of Snowflake products and services.
WHAT YOU WILL DO:
- Create and maintain optimal data pipeline architecture.
- Manage and support the data integrity and reliability of data services.
- Foster collaboration among engineering teams, IT, and other business groups to ensure data access is secure and auditable.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Maintain the highest levels of development practices, including technical design, solution development, systems configuration, test documentation and execution, issue identification and resolution, and writing clean, modular, self-sustaining code with repeatable quality and predictability.
WHAT YOU WILL NEED:
- 2+ years of experience in data warehousing, data modeling, Python, and SQL.
- 1+ years of experience working on a public cloud (AWS, Azure, or GCP).
- 1+ years of experience with MPP or cloud data warehouse solutions such as Snowflake, Redshift, BigQuery, or Teradata.
- Experience building ELT/ETL data pipelines is useful.
- Strong communication and cross-functional collaboration skills.
- B.S. or M.S. in Computer Science, or equivalent practical experience.
- 2+ years of experience building data pipelines using Python/Java and SQL.
- Understanding of Big Data technologies and solutions (Spark, Hadoop, Hive, MapReduce) and multiple scripting and configuration languages (Python, YAML).
Snowflake is growing fast, and we’re scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake.
How do you want to make your impact?
Snowflake is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, color, gender identity or expression, marital status, national origin, disability, protected veteran status, race, religion, pregnancy, sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances.