Senior Data Scientist

Cincinnati, Ohio

Applications have closed

Coupa Software, Inc.

See all of your business spend in one place with Coupa to make cost control, compliance, and everything related to spend management easier and more effective.


Coupa Software (NASDAQ: COUP), a leader in business spend management (BSM), has been certified as a “Great Place to Work” by the Great Place to Work organization. We deliver “Value as a Service” by helping our customers maximize their spend under management, achieve significant cost savings and drive profitability. Coupa provides a unified, cloud-based spend management platform that connects hundreds of organizations representing the Americas, EMEA, and APAC with millions of suppliers globally. The Coupa platform provides greater visibility into and control over how companies spend money. Customers – small, medium and large – have used the Coupa platform to bring billions of dollars in cumulative spend under management. Learn more at www.coupa.com. Read more on the Coupa Blog or follow @Coupa on Twitter.
Do you want to work for Coupa Software, the world's leading provider of cloud-based spend management solutions? We’re a company that had a successful IPO in October 2016 (NASDAQ: COUP) to fuel our innovation and growth. At Coupa, we’re building a great company that is laser focused on three core values:
1. Ensure Customer Success – Obsessive and unwavering commitment to making customers successful.
2. Focus On Results – Relentless focus on delivering results through innovation and a bias for action.
3. Strive For Excellence – Commitment to a collaborative environment infused with professionalism, integrity, passion, and accountability.
Coupa Software, Inc. seeks a Senior Data Scientist in Cincinnati, OH. Duties:
- Design experiments and build machine learning models to iterate and arrive at the desired results.
- Implement a variety of machine learning techniques and advanced statistical techniques and concepts (clustering, regression, classification, decision tree learning, neural networks, etc.).
- Use the Python machine learning stack (Scikit-learn, Pandas, SparkML); see the illustrative sketch after this list.
- Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.
- Use technologies like Python, Jupyter Notebooks, Metabase, PySpark, Microsoft Excel, and S3 to manipulate data and draw insights from large data sets.
- Understand Big Data usage: extracting, processing, filtering, and presenting large data quantities (100k to millions of rows) via AWS technologies like ECR and S3.
- Analyze data using SQL, PySpark, Python and data pipelines.
- Work with product managers and other engineers to devise the right metrics and measurements supporting product features or functionality.
- Communicate technical concepts and solutions with an emphasis on product development.
- Use Agile/Scrum methodologies and tools like Jira and Confluence (Atlassian tools) to keep data scientists on track and iteratively deliver results.
- Collaborate and communicate effectively in cross-functional teams composed of researchers, software engineers, product managers, designers, and operations leaders.
- Develop and maintain a runtime environment for the machine learning models created, which can later be used in the product.
- Work on Python web frameworks and tooling like Falcon, Waitress, and Pytest.
- Collaborate with the design team to gather end-user requirements.
- Create REST APIs for easy integration into the product.
- Deploy the environment using tools like Jenkins and Docker.
- Perform tuning, usability, improvement and test automation.
- Write reusable code in several languages such as Python, Java, shell script, etc.
- Devise and automate processes to monitor the quality of the models and results.
- Use one or more cloud services and tools like Docker, Jenkins, and Metabase.
- Understand and implement processes in distributed data/computing tools and environments like Spark and Hadoop.
- Fine-tune the created algorithms based on test results.
- Perform ML tests using Pytest and mock tests.
- Set strategic vision to overcome the complexities of an evolving machine learning ecosystem.
- Learn and master new technologies and techniques.
- Apply machine learning to real-world problems and craft scalable and effective data solutions.
- Develop an experimental design approach.
- Keep up to date with the latest ML trends.
- Research and transform data science prototypes.

Requires a Master's degree in Computer Science, Data Science, or related field, plus 2 years of experience in the job offered or a related role.
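As a rough illustration of the Scikit-learn and Pandas workflow named in the duties above (a classification model trained on tabular data), here is a minimal sketch; the CSV file, column names, and label are hypothetical placeholders, not part of the posting.

```python
# Minimal sketch: a pandas + scikit-learn classification workflow.
# The file, feature columns, and label below are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Load spend records exported from a company database (hypothetical file).
df = pd.read_csv("spend_records.csv")

# Hypothetical numeric features and a binary label (e.g., flagged invoices).
features = ["amount", "line_count", "days_to_approve"]
X, y = df[features], df["is_flagged"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Decision-tree-based classifier, one of the techniques listed in the duties.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate on held-out data before iterating on features or model choice.
print(classification_report(y_test, model.predict(X_test)))
```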
Must have at least 2 years of experience in each of the following skills:
- Working with product managers and other engineers to devise the right metrics and measurements supporting product features or functionality;
- Collaborating and communicating effectively in cross-functional teams;
- Using Agile/Scrum methodologies like Jira, Confluence (Atlassian tools), or related to keep data scientists on track and iteratively deliver results;
- Engaging in strategic planning to overcome the complexities of an evolving machine learning ecosystem;
- Applying machine learning to real-world problems and crafting scalable and effective data solutions;
- Designing experiments and building machine learning models to iterate and arrive at the desired results;
- Implementing a variety of machine learning techniques and advanced statistical techniques and concepts (clustering, regression, classification, decision tree learning, neural networks, etc.);
- Developing and maintaining a runtime environment for machine learning models;
- Mining and analyzing search-related data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies;
- Extracting, processing, filtering, and presenting large data quantities (100k to millions of rows) via AWS technologies;
- Using technologies such as Python, Jupyter Notebooks, Metabase, PySpark, Microsoft Excel, and S3 to manipulate data and draw insights from large data sets;
- Analyzing data using SQL, PySpark, Python and data pipelines;
- Using the Python machine learning stack (Scikit-learn, Pandas, SparkML);
- Working on Python web frameworks like Falcon and Waitress, and tooling like Pytest;
- Creating REST APIs (illustrated in the sketch below);
- Deploying the environment using tools like Jenkins and Docker;
- Performing tuning, usability, improvement and test automation; and
- Writing reusable code and coding in languages such as Python, Java, shell script, etc.

Must also have authority to work permanently in the U.S. Applicants who are interested in this position may apply at jobpostingtoday.com, reference number 38120.

At Coupa, we have a strong and innovative team dedicated to improving the spend management processes of today's dynamic businesses. It's our people who make it happen, and we strive to attract and retain the best in every discipline.
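A minimal sketch of the REST API work named above, serving a trained model with Falcon and Waitress; the route, payload fields, and model file are hypothetical, not part of the posting.

```python
# Minimal sketch: exposing a trained model through a REST API with Falcon,
# served by Waitress. Resource route, payload fields, and model file are hypothetical.
import pickle

import falcon
from waitress import serve


class PredictResource:
    def __init__(self, model):
        self.model = model

    def on_post(self, req, resp):
        # Falcon deserializes the JSON request body via req.media.
        payload = req.media
        features = [[payload["amount"], payload["line_count"], payload["days_to_approve"]]]
        resp.media = {"prediction": int(self.model.predict(features)[0])}


# Load a previously trained model artifact (hypothetical file name).
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

app = falcon.App()  # falcon.API() in Falcon versions before 3.0
app.add_route("/predict", PredictResource(model))

if __name__ == "__main__":
    # Waitress is a pure-Python WSGI server suitable for serving the app.
    serve(app, host="0.0.0.0", port=8080)
```

In practice a service like this would typically be packaged in a Docker image and deployed through a Jenkins pipeline, in line with the deployment duties listed above.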
We take care of our employees every way we can, with competitive compensation packages, restricted stock units, an Employee Stock Purchase Program (ESPP), comprehensive health benefits for employees and their families, retirement and savings plans with employer match, a flexible work environment, unlimited vacation for exempt employees (non-exempt employees accrue PTO), catered lunches... and much more!
As part of our dedication to the diversity of our workforce, Coupa is committed to Equal Employment Opportunity without regard for race, ethnicity, gender, protected veteran status, disability, sexual orientation, gender identity or religion.
Please be advised, inquiries or resumes from recruiters will not be accepted.

Tags: Agile APIs AWS Big Data Classification Computer Science Data pipelines Docker Excel Hadoop Jira Jupyter Machine Learning Metabase ML models Pandas Pipelines PySpark Python Research Scikit-learn Scrum Spark SparkML SQL

Perks/benefits: Career development Competitive pay Flex hours Flex vacation Health care

Region: North America
Country: United States
Category: Data Science Jobs
