Big Data Engineer

Ann Arbor, Michigan


Coupa Software, Inc.


Coupa Software (NASDAQ: COUP), a leader in business spend management (BSM), has been certified as a “Great Place to Work” by the Great Place to Work organization. We deliver “Value as a Service” by helping our customers maximize their spend under management, achieve significant cost savings and drive profitability. Coupa provides a unified, cloud-based spend management platform that connects hundreds of organizations representing the Americas, EMEA, and APAC with millions of suppliers globally. The Coupa platform provides greater visibility into and control over how companies spend money. Customers – small, medium and large – have used the Coupa platform to bring billions of dollars in cumulative spend under management. Learn more at www.coupa.com. Read more on the Coupa Blog or follow @Coupa on Twitter.
Do you want to work for Coupa Software, the world's leading provider of cloud-based spend management solutions? We went public in October 2016 (NASDAQ: COUP) to fuel our innovation and growth. At Coupa, we're building a great company that is laser focused on three core values:
1. Ensure Customer Success – Obsessive and unwavering commitment to making customers successful.
2. Focus On Results – Relentless focus on delivering results through innovation and a bias for action.
3. Strive For Excellence – Commitment to a collaborative environment infused with professionalism, integrity, passion, and accountability.
Coupa Software is looking for an experienced Big Data Engineer to join our amazing team and be part of our next-gen supply chain cloud platform. As a Big Data Engineer, you will work as part of the Data Architecture team and interact closely with other data engineers, DBAs, Software Development, Architecture, DevOps, and Platform Success to build out our next-generation data architecture for our cloud platform in AWS and Azure. You will be directly involved in the execution of tactical and strategic data-related projects. You will develop data infrastructure and data pipelines to support data ingestion, enrichment, analytics, and visualization, uniting data across disparate sources and scales, from small ad hoc datasets to cloud-scale datasets. You will have the opportunity to learn, grow, and prosper with the team while maintaining work-life balance and the highest professional standards.

Responsibilities:

  • Design, maintain, and normalize a multi-tenant enterprise data warehouse
  • Ensure our data architecture supports the requirements of the business
  • Help improve the volume, velocity, and veracity of our data
  • Implement a cloud-based ETL/ELT architecture for ingesting high-volume data
  • Enable bi-directional data pipelines using APIs and cloud-based tools
  • Implement best practices for data collection, partitioning, and performance
  • Produce aggregated datasets for efficient cross-business analysis
  • Map and design ingestion and transformation from various data sources
  • Implement data modeling, security, and data management best practices

Skills:

  • Bachelor’s in Computer Science, Software Engineering, or a related technical field
  • 3+ years of experience in Big Data projects using Hadoop/Spark or similar technology
  • Understanding of data warehousing and data lake design patterns
  • 3+ years of data engineering/development/integration experience
  • Experience building ETL pipelines with Kafka, Spark, and an RDBMS (MSSQL preferred)
  • Experience with open-source workflow tools such as Apache Airflow, Apache NiFi, MLflow, and Apache Spark
  • Experience integrating with vendor platforms such as Databricks, Qubole, AWS Glue, or Azure Data Factory
  • Fluent in SQL and at least one scripting language (Python, Java, or Scala)
  • Experience with cloud computing on AWS, Azure, or GCP
  • Experience leading teams and collaborating with cross-functional teams
  • Ability to work with geographically distributed teams
  • Strong oral and written communication skills
At Coupa, we have a strong and innovative team dedicated to improving the spend management processes of today’s dynamic businesses. It’s our people who make it happen, and we strive to attract and retain the best in every discipline.
We take care of our employees every way we can, with competitive compensation packages, restricted stock units, an Employee Stock Purchase Program (ESPP), comprehensive health benefits for employees and their families, retirement and savings plans with employer match, a flexible work environment, unlimited vacation for exempt employees (non-exempt employees accrue PTO), catered lunches, and much more!
As part of our dedication to the diversity of our workforce, Coupa is committed to Equal Employment Opportunity without regard to race, ethnicity, gender, protected veteran status, disability, sexual orientation, gender identity, or religion.
Please be advised, inquiries or resumes from recruiters will not be accepted.

Tags: Airflow APIs AWS Azure Big Data Computer Science Databricks Data management Data pipelines Data Warehousing DevOps ELT Engineering ETL GCP Hadoop Kafka MLFlow MS SQL Open Source Pipelines Python RDBMS Scala Security Spark SQL

Perks/benefits: Career development Competitive pay Flex vacation Health care Startup environment

Region: North America
Country: United States
