Data Engineer (Mid Level)

California

Pardon Ventures

Pardon is a modern family office. As a team of artists, entrepreneurs, and creatives, we build and fund ventures that spark inspiration, elevate lifestyle, and foster community.


About Optimism

Optimism is a digital publisher working to build a brighter web. We conceive, launch, and operate high-quality digital brands that spark curiosity, spread positivity, and improve the lives of our readers. With an email-first approach, our hope is to transform the inbox into a healthy alternative to social media feeds, a place where you can curate the news, information, and entertainment you truly want.

Our brands populate a variety of categories: Lifestyle, Games, Wonder, and Travel, among others. This distributed approach helps us reach 3 million subscribers across our network and serve more than 30 million web sessions each month. And we're growing with each new brand.

Optimism Data Engineer Overview

Data is at the heart of everything we do. As a Data Engineer at Optimism, you will be responsible for the entire data lifecycle. You will work within an engineering team and report directly to both the Principal Engineer and Head of Engineering.

ACCOUNTABILITIES

  • Handling data delivery by writing/maintaining serverless Go applications
  • Working with engineering to update/extend the API stack for data delivery
  • Maintaining our existing extract/transform/load (ETL) pipelines written with Scio (Apache Beam + Google Cloud Dataflow + Scala)
  • Building new data pipelines by working with others in the engineering and business insights teams, e.g. ingesting new data sources, bringing real-time data from the APIs into BigQuery, or transforming existing data sources into new structures and tables
  • Processing data from disparate sources, joining into highly available, standardized structures
  • Leveraging the data stack to handle requests from stakeholders/business insights/revenue operations
  • Using CI tooling to manage software and pipeline deployments
  • Using Google Cloud Platform (GCP) logs to monitor data delivery, troubleshoot behavior, and understand application history over time
  • Continually improving data quality, often by working with stakeholders/business insights/revenue operations to understand what they need from the data, e.g. filling in the gaps of a bigger picture or finding ways to make irregular data regular
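To give a flavor of the "processing data from disparate sources into standardized structures" work described above, here is a minimal, illustrative sketch in Go (the language named for data delivery here). All type and field names are hypothetical, not taken from Optimism's actual codebase; a real pipeline would run in Scio/Dataflow and load into BigQuery rather than an in-memory slice.

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// rawEvent models a record as it might arrive from one of several
// upstream sources (field names and shapes are illustrative only).
type rawEvent struct {
	Source    string
	Email     string
	Timestamp string // expected to be RFC 3339
}

// standardEvent is a hypothetical normalized structure of the kind
// a pipeline might write to a date-partitioned warehouse table.
type standardEvent struct {
	Source string
	Email  string
	Day    string // YYYY-MM-DD partition key
}

// normalize lowercases and trims identifiers and truncates timestamps
// to a day string, dropping records whose timestamps fail to parse.
func normalize(in []rawEvent) []standardEvent {
	out := make([]standardEvent, 0, len(in))
	for _, e := range in {
		t, err := time.Parse(time.RFC3339, e.Timestamp)
		if err != nil {
			continue // a real pipeline would route this to a dead-letter sink
		}
		out = append(out, standardEvent{
			Source: strings.ToLower(e.Source),
			Email:  strings.ToLower(strings.TrimSpace(e.Email)),
			Day:    t.Format("2006-01-02"),
		})
	}
	return out
}

func main() {
	events := []rawEvent{
		{Source: "Newsletter", Email: " Reader@Example.com ", Timestamp: "2024-05-01T12:30:00Z"},
		{Source: "web", Email: "other@example.com", Timestamp: "not-a-time"},
	}
	for _, e := range normalize(events) {
		fmt.Printf("%s %s %s\n", e.Source, e.Email, e.Day)
	}
}
```

The same shape of transform — validate, standardize, partition — is what the role applies at scale across the brands' event streams.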

QUALIFICATIONS

  • Expert in SQL and at least one programming language
  • Experience with BigQuery or equivalent
  • Experience with event-driven architecture and data pipelines
  • Experience in the cloud; we use both AWS and GCP
  • Experience with Git
  • Experience working remotely
  • Enjoy independence and working asynchronously

You will hit the ground running if you have...

  • Experience writing software in functional programming languages like Scala
  • Experience with Google Cloud Dataflow / Apache Beam



Region: North America
Country: United States
Category: Engineering Jobs
