Senior Data Engineer
US - Remote
Emburse
Emburse helps organizations focus on what matters most with trusted expense management, AP automation, and global B2B payment solutions. Emburse has offices across North America, including Los Angeles, Montreal, Portland (ME), San Diego, San Francisco, and Toronto, as well as locations in the UK, Germany, Spain, and Australia.
Our core values - Sincerity, Empathy, Empowerment, Individuality, and Teamwork - reflect who we are as a company. They are central to the decisions we make and the interactions we have with our teams, customers, and partners. As a people-focused company, we seek candidates who align with our values.
Emburse is a proud recipient of a 2020 Tech Cares Award from TrustRadius. This award celebrates companies that have gone above and beyond to support their communities, clients, and frontline workers during the COVID-19 pandemic. We are a people-first company, and this award is a testament to our mission to humanize work.
Emburse data engineers develop the data pipelines and systems in the central platform that powers Emburse's SaaS products. As a data engineer, you will build the pipelines that populate the data warehouse and data lakes, implement tenant data security, support data science platforms and techniques, and integrate data solutions and APIs with Emburse products and analytics. Emburse, known for its innovation and award-winning technologies, employs modern tools including Snowflake, Databricks/Spark, AWS, and Looker. In this role you will have access to some of the best and brightest minds in the industry to grow your experience and career within Emburse.
What you'll do:
- Brings 6+ years of data engineering experience across data acquisition, data pipelines, workflows, and data systems and architectures, preferably with project lead experience in a product-focused environment
- Develops code (e.g., Python), infrastructure, and tools for extracting, transforming, and loading data from a wide variety of sources, using SQL, streaming, and related data lake and data warehouse technologies
- Builds analytical tools to utilize, model, and visualize data
- Assembles large, complex data sets to meet business needs, both for ad-hoc requests and as part of ongoing software engineering projects
- Develops scripts to automate manual processes, address data quality, enable integration or monitor processes
- Has knowledge of message queuing, stream processing, and scalable data stores
- Has a deep understanding of data security as it applies to multi-tenant environments, multiple regions, and financial-industry data
- Takes personal responsibility for quality and maintainability of the product and actively identifies areas for improvement
- Identifies problems and risks in their own work and that of others
- Identifies viable alternative solutions and presents them
- Follows SDLC processes, including adopting agile-based processes, peer code-reviews, and technical preparations required for scheduled releases and demos.
- Partners with product, analytics, and data science teams to drive requirements that account for all parties' needs
- Establishes monitoring, responds to alerts and resolves issues within data pipelines
- Develops sophisticated data-oriented software or systems with minimum supervision
- On-boards and mentors less experienced team members
- Makes complex contributions to technical documentation, knowledge bases, data dictionaries, and team/engineering presentations
- May have role in supervising others or leading small to moderate projects
- Optimizes processes, fixes complex bugs and demonstrates advanced debugging skills
- Produces quality documentation and ensures practices are followed
- Contributes to system design sessions in their area of specialty
- Takes on expanded code review responsibilities
- Performs advanced refactoring
- Collaborates with product owners, software developers, data scientists, DevOps, and analysts
- Gives constructive feedback to team members
- Learns industry jargon and business concepts to better understand the challenges our technology is designed to solve
- Raises roadblocks and updates estimations as needed
- Communicates complicated concepts clearly to junior staff
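To give a concrete flavor of the pipeline work described above, here is a minimal, hypothetical extract-transform-load sketch in Python using only the standard library. All names are illustrative (not Emburse code), and an in-memory SQLite database stands in for a warehouse such as Snowflake:

```python
# Hypothetical ETL sketch: extract raw records, apply a basic data-quality
# gate, and load into a warehouse table. sqlite3 is a stand-in target.
import sqlite3

def extract():
    # In practice this would read from an API, message queue, or data lake.
    return [
        {"tenant": "acme", "amount": "12.50", "currency": "USD"},
        {"tenant": "acme", "amount": "3.99", "currency": "USD"},
        {"tenant": "globex", "amount": "7.00", "currency": "EUR"},
    ]

def transform(rows):
    # Normalize types and drop malformed records (a simple quality gate).
    clean = []
    for row in rows:
        try:
            clean.append((row["tenant"], float(row["amount"]), row["currency"]))
        except (KeyError, ValueError):
            continue  # a production pipeline would log and alert here
    return clean

def load(rows, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS expenses (tenant TEXT, amount REAL, currency TEXT)"
    )
    conn.executemany("INSERT INTO expenses VALUES (?, ?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    load(transform(extract()), conn)
    print(conn.execute("SELECT COUNT(*) FROM expenses").fetchone()[0])
```

Real pipelines in this role would layer on orchestration, monitoring, and tenant-level security, but the extract/transform/load separation sketched here is the core pattern.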
What we're looking for:
- Bachelor's degree in Computer Science or a related field, or equivalent experience
- Advanced working SQL knowledge and experience working with a variety of relational databases
- Experience working with a modern scalable data lake or data warehouses
- Experience working with a modern data pipeline or data workflow management tool
- Experience working in a product-oriented environment alongside software engineers and product managers
- Experience with Python in a full SDLC/production deployment environment
- Preferred: experience with AWS services; Snowflake; Looker or an equivalent business intelligence suite; Fivetran or an equivalent ETL/ELT suite; Databricks or an equivalent Spark-based suite; financial industry experience