Data Engineer, Patterns & Practices
Chicago
Applications have closed
Tempus
Tempus has built the world’s largest library of clinical & molecular data and an operating system to make that data accessible and useful, starting with cancer.
Passionate about precision medicine and advancing the healthcare industry?
Recent advancements in underlying technology have finally made it possible for AI to impact clinical care in a meaningful way. Tempus' proprietary platform connects an entire ecosystem of real-world evidence to deliver real-time, actionable insights to physicians, providing critical information about the right treatments for the right patients, at the right time.
Passionate about building great software products?
At Tempus, software products are owned and developed by small, autonomous teams composed of developers, designers, scientists, and product managers. You and your team set the goals, build the software, deploy the code, and contribute to a growing software platform that will make a lasting impact in the field of cancer research and treatment.
Tempus builds software as nimble as our teams. Our modern tech stack, containerized applications running on GCP managed services, allows our teams to iterate rapidly and lead our industry in innovation. An emphasis on automation, coupled with our decentralized microservice architecture, allows us to deliver advanced solutions with confidence at scale.
Why we’re looking for you:
- You know what it takes to build and run resilient data pipelines in production and have experience implementing ETL/ELT to load a multi-terabyte enterprise data warehouse.
- You have implemented analytics applications using database technologies such as relational or multidimensional (OLAP).
- You understand the importance of defining and enforcing data contracts and have experience writing specifications.
- You write code to transform data between data models and formats, preferably in Python.
- You've worked in agile environments and are comfortable iterating quickly.
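As a purely illustrative sketch of the kind of transformation work described above, here is a minimal Python example that reshapes flat, CSV-style rows into nested, JSON-friendly documents. All field names and values are invented for the example; they do not reflect any Tempus data model.

```python
import json

# Hypothetical source records, as they might arrive from an upstream CSV extract.
raw_rows = [
    {"patient_id": "P001", "test": "panel_a", "result": "12.3", "unit": "ng/mL"},
    {"patient_id": "P001", "test": "panel_b", "result": "4.1", "unit": "ng/mL"},
    {"patient_id": "P002", "test": "panel_a", "result": "9.8", "unit": "ng/mL"},
]

def to_patient_documents(rows):
    """Group flat rows into one nested document per patient."""
    docs = {}
    for row in rows:
        doc = docs.setdefault(
            row["patient_id"],
            {"patient_id": row["patient_id"], "results": []},
        )
        doc["results"].append({
            "test": row["test"],
            "value": float(row["result"]),  # cast string measurements to numbers
            "unit": row["unit"],
        })
    return list(docs.values())

documents = to_patient_documents(raw_rows)
print(json.dumps(documents, indent=2))
```

The same pattern (group, cast, nest) shows up in most model-to-model transforms, whether the target is a warehouse table or a document store.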
Bonus points for:
- Experience with one of the many infrastructure-as-code tools such as Terraform (our favorite), Kubernetes, CloudFormation, Docker, Ansible, Salt, Packer, Puppet, Chef, or similar
- Expert knowledge of relational database modeling concepts, strong SQL skills, proficiency in query performance tuning, and a desire to share knowledge with others
- Experience building cloud-native applications and with supporting technologies, patterns, and practices, including Google Dataflow, Google BigQuery, Google Cloud Composer, Google Pub/Sub, Google Cloud Storage, Docker, and CI/CD
Responsibilities for the Position:
- Work as an embedded member of an engineering team
- Contribute hands-on to the implementation of our data lake and data pipelines
- Work with data producers and data scientists to understand our data processing needs
- Use your experience, technical knowledge, and creativity to simplify development and infrastructure provisioning workflows
- Configure and deploy cloud infrastructure using Terraform and CI tools like Concourse
- Proactively and continuously learn about new and relevant technologies
- Use your knowledge to influence other developers and advocate for best practices
- Implement dashboards, monitoring, and alerting for team services
- Support your users either in person or via Slack
- Document usage and query patterns for our data platform