Staff Data Engineer (Remote)

USA (Remote)

Applications have closed

Copper CRM

Win clients for life with Copper CRM software solutions. Give us a try and see how we can help your business build stronger customer relationships.
Copper is not just another CRM. It’s the only CRM that is 100% focused on helping clients build the strongest possible business relationships, the kind that win them customers for life. While most CRMs are glorified databases with legions of custom fields, Copper takes a human, action-centered approach and functions at the heart of a business. Read more here about our CEO’s vision for the future of CRM.
Copper surprises people: clients tell us they actually love their CRM, something previously thought impossible. We’re turning CRM on its head by offering a beautifully crafted, Google Workspace-native tool built for transparency, collaboration, and productivity.
It’s an exciting time to be part of this category: few players remain who are truly capable of capturing significant market share. We are one of them, with a strong foothold in the space and $100 million in funding raised.
We are looking for a Data Engineer with Staff/Architect-level experience to join our growing team. Our product relies on a series of data pipelines for in-product reporting, so this role directly impacts our users and our business. You will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams and our customers.

What you’ll do …

  • Design, create, evolve, and maintain data pipelines; develop dashboards; and manage ETL/ELT activities supporting workloads such as ad-hoc analysis, visualizations, and data governance
  • Enable experimental and ad-hoc data-driven models and visualizations for rapid feasibility and product planning
  • Identify, design, and implement internal process improvements: automating manual processes, improving data delivery, and re-designing infrastructure for greater scale
  • Help us create product capabilities that fundamentally rely on data models
  • Work alongside Product and Engineering teams on new reporting and data requirements
  • Design, develop and manage model deployment and model monitoring framework
  • Participate in the agile delivery process; own, research, and recommend new solutions
  • Effectively communicate and work with stakeholders to assist with data-related technical issues and support their data infrastructure needs

What you'll have ...

  • 8+ years of experience in a Data Engineer role
  • Solid experience with ETL/ELT tools, MPP databases (Redshift, Snowflake), and visualization platforms (Looker, Tableau, GoodData)
  • Experience building data pipelines for both structured and unstructured data sources
  • Advanced working knowledge of SQL and experience with relational databases (PostgreSQL, MySQL) and NoSQL databases (DynamoDB, MongoDB)
  • Familiarity with DevOps-style activities such as designing for manageability and root cause analysis
  • Ability to develop processes supporting data transformation, data structures, and metadata so that other team members can use data successfully
  • Experience supporting and working with cross-functional teams in a dynamic environment
  • Ability to work independently and thrive with autonomy
  • A team player with excellent problem solving and analytical skills
  • Experience with Linux and scripting languages such as shell, Perl, and Python
  • Familiarity with cloud technologies (AWS, GCP)
  • Experience with data security and compliance with data protection regulations (e.g., GDPR)
Our teams are located in the UK, Canada, and the United States. We are remote-first, and we are an equal-opportunity employer.
At Copper we are committed to building and empowering a diverse and inclusive environment. We believe that diverse teams are the strongest teams, so we encourage people from all backgrounds to apply.
If this opportunity sounds interesting, apply today! We would like to hear from you.


