Data Engineer II (Remote, US)
Remote - United States
Full Time · Mid-level / Intermediate · USD 132K–198K
Openly
Openly provides exceptional home insurance, delivering truly comprehensive coverage through local agents without surprise or concern.
Openly is rebuilding insurance from the ground up. We are re-envisioning and enhancing every aspect of the customer experience. Doing this requires a rapidly growing team of exceptional, curious, empathetic people with a wide range of skill sets, spanning technology, data science, product, marketing, sales, service, claims handling, finance, etc.
Now is the perfect time to join the journey. Here’s why:
- It’s working. We’re in multiple states and on our way to operating countrywide. We have thousands of agents selling our product and millions of dollars of annual customer premiums.
- We’re well-backed & stable. We closed our $100M Series D fundraise. We are supported by some of the top investors globally, including Google’s “Gradient” AI-focused fund, Obvious Ventures, Advance Venture Partners, Eden Global Partners, and Clocktower Technology Ventures.
- It’s not too late! Despite this traction and stability, we’re still early enough in the journey that there’s time to make a real difference during Openly’s formative period.
If you’d like to understand more about Openly’s mission, consider checking out this video (https://vimeo.com/267654520) from a company pitch we gave several years ago at Techstars.
Job Details
We’re hiring a Data Engineer II to join our data engineering team and build and enhance data solutions for Openly's insurance platform. You will play a key role in how we build, manage, structure, store, and access data and data pipelines, so we can provide high-quality, usable data to our insurance product applications and customers.
Key Responsibilities
- Design, create, and maintain data solutions, including data pipelines and data structures.
- Work with data users, data science, and business intelligence personnel to create data solutions for use in various projects.
- Translate concepts into code to enhance our data management frameworks and services, striving to provide a high-quality data product to our data users.
- Collaborate with our product, operations, and technology teams to develop and deploy new solutions related to data architecture and data pipelines, enabling a best-in-class product for our data users.
- Collaborate with teammates on design and solution decisions related to architecture, operations, deployment techniques, technologies, policies, and processes.
- Participate in domain meetings, stand-ups, weekly 1:1s, team collaborations, and biweekly retros.
- Assist in educating others on different aspects of data (e.g., data management and data pipelining best practices).
- Share your knowledge within the data engineering team and with others in the company (e.g., engineering all-hands, engineering learning hour, domain meetings).
Our stack
- Backend/Core: Go & PostgreSQL
- Frontend: Browser-based, VueJS, Webpack, Nuxt, & Tailwind
- Research/Data Science: R, ArcGIS, Jupyter Notebooks, & Python
- Data: GCP GCS, BigQuery, Composer/Airflow, Cloud Functions, Postgres, SQL, Python, Aiven Debezium and Kafka, Fivetran
- Infrastructure: Google Cloud, specifically Cloud Run, Kubernetes, Pub/Sub, BigQuery, and CloudSQL, managed with Terraform. We use GitHub for code hosting, DataDog and HoneyComb for monitoring, and CircleCI for running our CI/CD pipelines.
- Remote work tools: Slack, Zoom, Donut
Requirements
- 1 to 2 years of data engineering and data management experience
- Scripting skills in Python
- Basic understanding and usage of a development and deployment lifecycle, automated code deployments (CI/CD), code repositories, and code management
- Experience with Google Cloud data store and data orchestration technologies and concepts
- Hands-on experience with and understanding of the entire data pipeline architecture: data replication tools, staging data, data transformation, data movement, and cloud-based data platforms
- Understanding of modern, next-generation data warehouse platforms, such as the lakehouse and multi-layered data warehouse
- Proficiency with SQL optimization and development
- Ability to understand data architecture and modeling as it relates to business goals and objectives
- Ability to understand data requirements, translate them into source-to-target data mappings, and build a working solution
- Experience with Terraform preferred but not required
Compensation & Benefits:
The target salary range represents the budgeted salary range for this position. Actual compensation for this position will be determined based on the successful candidate's experience and skills. We are committed to providing a compensation package that not only reflects the responsibilities and requirements of the role, but also the unique expertise that the chosen candidate will bring to our team.
Target Salary Range: $140,000—$148,500 USD
The full salary range shows the minimum to maximum salary for this position. Actual compensation will be commensurate with experience and qualifications and determined based on various factors, including the candidate's qualifications, skills, and experience.
Full Salary Range: $132,000—$198,000 USD
Benefits & Perks
- Remote-First Culture - We supported #remotelife long before it was a given. We'll keep promoting it.
- Competitive Salary & Equity
- Comprehensive Medical, Dental, and Vision Plan Offerings
- Life and disability coverage including voluntary options
- Competitive PTO - 20 days and 11 paid holidays (including floating holidays) per year under the Company’s vacation and holiday policies.
- Parental Leave - up to 8 weeks (320 hours) of paid parental leave based on meeting eligibility requirements (birthing parents may be eligible for additional leave through STD)
- 401K Company Contribution - Openly contributes 3% of the employee's gross income, even if the employee does not contribute.
- Work-from-home stipend - We provide a $1,500 allowance to spend on setting up your home workplace
- Annual Professional Development Fund: Each employee has $2,000 in professional development (PD) funds to spend on activities or resources annually. We want each Openly employee to achieve personal and professional success and to feel supported, confident, and informed about improving their efficiency and productivity.
- Be Well Program - Employees receive $50 per month to use towards their overall well-being
- Paid Volunteer Service Hours
- Referral Program and Reward
Depending on position, employees generally are eligible for cash incentive compensation, including commissions for sales-eligible roles. In all cases, eligibility for compensation and benefits is subject to applicable plan and policy terms in effect from time to time.
U.S. citizens, green card holders, and those authorized to work in the U.S. for any employer and currently residing in the U.S. will be considered.
Openly is committed to equal employment opportunity and non-discrimination for all employees and qualified applicants without regard to a person's race, color, sex, gender identity or expression, age, religion, national origin, ancestry, ethnicity, disability, veteran status, genetic information, sexual orientation, marital status, or any characteristic protected under applicable law. Openly is an E-Verify Employer in the United States. Openly will make reasonable accommodations for qualified individuals with known disabilities under applicable law.