Senior Data Engineer
Phoenix, Arizona
Applications have closed
At Virtuous, we are committed to helping charities reimagine generosity. We believe that charitable giving is about personal connections, not sales transactions. Generosity is driven by our passions and relationships – and givers want to feel like they are part of a movement bigger than themselves. We are the Generosity Operating System at the heart of charity. We are the Donor Management System that is putting the joy back in fundraising.
Position Summary
Virtuous is looking for an experienced, highly motivated Senior Data Engineer to join our burgeoning Data Operations Team. The position reports to the Director of Data Operations. The ideal candidate will have direct experience building data pipelines and architecting data lakes and data warehouses that support key business functions and promote data visibility and insights across all teams.
This position should excite someone who is ready to take ownership of all aspects of data warehousing and, alongside the Director of Data Operations, set the long-term strategic direction of Virtuous's data operations and reporting capabilities. To be successful, a candidate will need to manage and translate terabytes of complex structured and unstructured data into actionable business metrics, enjoy collaborating with others, and demonstrate a passion for the work Virtuous is doing and its mission.
Candidates willing to commute to and work out of our downtown Phoenix, AZ office are preferred, though we are accepting resumes from candidates working remotely from other states.
Responsibilities
- Own, design, deploy, and optimize all aspects of data pipelines, data lakes, data warehouses, and data marts
- Translate complex business concepts & reporting needs into data warehousing models that enable a self-service BI reporting structure for all Virtuous Teams
- Publish and maintain documentation, data dictionaries, and best practices for consumption by the layperson
- Optimize ETL & reporting processes and capabilities with an emphasis on security, accuracy, and extensibility while minimizing latency
- Implement automated data validation / QA processes to ensure 100% accuracy in reporting outputs and foster trust across all teams
Requirements
- 5+ years of direct experience building data pipelines and architecting data lakes / warehouses, or equivalent experience in a related field
- Authoritative in ETL optimization, designing, coding, and tuning big data processes using Apache Spark, R, Python, C# and/or similar technologies
- Expert in writing and optimizing SQL
- Strong written, verbal and interpersonal communication skills with an ability to communicate key insights from complex analyses in summarized business terms
- Experience assembling terabytes of complex datasets that meet non-functional and functional business requirements
- Ability to identify, design, and implement internal process improvements, including redesigning infrastructure for greater scalability, optimizing data delivery, and automating manual processes
- Experience with agile development, sprint planning, and estimating story points
- Startup environment/SaaS experience preferred
- Experience with Power BI, Tableau or similar BI tools
- Independent self-starter who thrives in a fast-paced environment
Benefits
- Hybrid schedule for local employees within Arizona (3 days in office, 2 from home)
- Work from home for employees outside of Arizona
- 401(k) with match
- Unlimited PTO
- Paid volunteer time
- Medical/Dental/Vision benefits; dependents are also eligible for coverage
- HSA/FSA offerings
- One Medical, Talkspace, & Teladoc memberships
- Fun company outings and events