Senior Data Engineer
San Francisco, CA or Austin, TX
About the position
We are looking for a Senior Data Engineer who is passionate about building next-generation analytics infrastructure and products. You will work directly on our data infrastructure, datasets and analytics tools, which are used by Product, Engineering, Marketing, Sales, Finance and Customer Success teams every day.
You will contribute to a variety of exciting projects that range from designing robust and fully automated data pipelines to building ML-based applications for improving company-wide productivity and the ThousandEyes product. You have a passion for designing, implementing and operating stable, scalable and efficient solutions that move and analyze large data volumes quickly. You are also a self-starter who is comfortable with ambiguity, pays close attention to detail and has the ability to work in a fast-paced environment.
In the Analytics team we are excited about the impact data can have on our business, our product and our customers' experience, and we hope you share our enthusiasm!
Qualifications
- BS/MS with quantitative focus (e.g. Economics, Computer Science, Mathematics, Physics, Statistics) or equivalent practical experience
- 5+ years of experience in data engineering, software engineering, or other related roles
- 3+ years of experience operating databases (e.g. Redshift, MySQL, MongoDB), including advanced query authoring and tuning
- 3+ years of experience with dimensional data modeling and schema design in data warehouses
- Expertise in one or more programming languages (ideally Scala, Java, or Python)
- Experience designing, implementing and maintaining production-grade data pipelines
- Knowledge of cloud-based, production-grade data architectures (e.g. AWS EC2, EMR, Glue, S3, Redshift)
- An eye for automation and instrumentation in all data-related aspects
- Work experience in an interdisciplinary / cross-functional field
- Work experience at SaaS companies
- Experience developing and operating large scale big data technologies (e.g. Apache Spark, Presto, HDFS, Apache Kafka, Apache Druid)
- Experience with ETL tooling (e.g. Airflow)
- Experience performing quantitative analysis and using data visualization tools to deliver dashboards at scale
- Strong cross-functional and interpersonal skills with demonstrated ability to communicate technical content to general audiences
- Entrepreneurial in nature: able to keep moving initiatives forward in ambiguous situations