Data Scientist, CPG Domain (4-8 yrs), for Project: Statistical Evaluation of Product Innovation Tests (100% remote)

Remote, US


Mission Field

Innovative: our talented people make the difference. Recipient of the Inc. 5000 fastest-growing companies award.

Mission Field seeks a hands-on Data Scientist/Statistician to skillfully apply mathematics, statistical analysis, machine learning, prediction, and other data science disciplines and technologies to meet the objectives below, using our actual transactional data from 15 controlled tests of various consumer products in brick-and-mortar retailers. Our data is clean and readily accessible in a cloud database. We are looking for a self-directed, experienced individual who can determine what methodology is required to meet the project objectives. Previous modeling experience is required, as is experience with consumer goods sales data.

About Mission Field

Mission Field (www.mission-field.com) is an innovation consultancy that helps Fortune 500 consumer packaged goods (“CPG”) clients create, develop, and test new consumer products in an entrepreneurial way. Based in Denver, Colorado, and comprised entirely of senior-level talent, the Mission Field consultant team combines classic consumer packaged goods expertise with the entrepreneurial experience needed to make disruptive product launches succeed.

Contracting Process

  • To apply, submit your resume and hourly billing rate.
  • This is an independent 6-to-12-week contract assignment, with an opportunity for annual analysis updates. Desired completion of the project is early to mid-March, but we are open to a phased delivery schedule if more time is necessary for optimum results at a competitive price.
  • You will be required to submit a W9 and to sign both non-disclosure and contractor agreements.
  • You must possess and provide your own computer hardware and the statistical software necessary to successfully complete this research project.

Project Concept and Business Objectives

The project objective is to apply a statistical lens to actual transactional data from dozens of discrete in-market experiments testing over 75 distinct product innovations, including sales data from multiple store locations, sales data for competitive products across many distinct categories, and the promotional activity that helped drive the sales of each test. The project goal is to analyze all of that data and help Mission Field A) improve its testing models and then B) develop models for predicting product success in future experiments. The challenge is that the historical data reflects some variation in test design, so there is a need to find overarching insights and opportunities wherever the commonalities lie across the distinct tests.

  1. Confirm/modify the experiment design model – how many store locations (sample size) for how long (number of weeks) provide solid statistical power and reliability/repeatability of results?
    1. Model requirements may differ depending on sales of the item being tested (by range of High/Medium/Low average unit sales), by product category/type, or by another factor you identify
      1. What does the data recommend for the methodology of future tests?
      2. What are the minimum design constraints that can be applied (as low as X stores for Y weeks)?
      3. How do marketing effects like display and price promotion fit into the test design model?
    2. What else can Mission Field learn about how to improve our models?
  2. Future success prediction
    1. Is there a simple way Mission Field can start to interpret a test in progress against our historical record to know how it's performing?
    2. What are the KPIs that are most likely to signal success/failure in a test?
  3. Scorecard development
    1. How can Mission Field create a framework for defining success of new consumer products, no matter what the category of sales, based on our historical database of experience?
  4. Develop a plan by which future tests (currently running at 6-10 tests per year, covering another 15-25 innovation tests) can be embedded into the analysis methodology, allowing us to update the overall analytics annually
    1. Comparison – how did this perform vs past tests?
    2. Data refinement – how does this new test (or block of tests performed in the next 6 months) change our prior thinking?
    3. Tighter predictability – building a larger data set that helps us get more and more refined
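The design question in objective 1 (how many store locations for how many weeks) is, at its core, a power-analysis problem. Below is a minimal sketch, assuming store-weeks can be treated as independent observations and using a hypothetical standardized effect size of 0.25; real sales data are autocorrelated within stores, so a practical model would need to account for that:

```python
from statistics import NormalDist

def approx_power(effect_size: float, n_stores: int, n_weeks: int,
                 alpha: float = 0.05) -> float:
    """Approximate power of a two-sided, two-sample z-test comparing test
    vs. control stores, treating each store-week as one observation per
    arm. This independence assumption is a simplification for illustration."""
    nd = NormalDist()
    n = n_stores * n_weeks                 # observations per arm
    z_crit = nd.inv_cdf(1 - alpha / 2)     # two-sided critical value
    shift = effect_size * (n / 2) ** 0.5   # mean of test statistic under H1
    return nd.cdf(shift - z_crit)

# Populate a stores-by-weeks grid with the hypothetical effect size
for stores in (5, 10, 20, 40):
    row = {weeks: round(approx_power(0.25, stores, weeks), 2)
           for weeks in (4, 8, 12)}
    print(stores, "stores:", row)
```

A grid like this is one way to express the "as low as X stores for Y weeks" constraint: the smallest cell whose power clears a chosen threshold (e.g. 0.8) bounds the minimum viable design.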


Services Required

  1. Evaluate the statistical power of the existing experiment results.
  2. Using the existing experiment results, extrapolate/refine the methodology to ensure statistical power in future experiments (a possible deliverable is a table of number of store locations by number of weeks in the marketplace, populated with a reliability measure such as estimated statistical power at a 95% confidence level). Develop ideas for improving the design and measurement of controlled experiments, and assess the impact of the new approaches on the statistical power/reliability/repeatability of experiment results.
  3. Using the existing experiment results, create an MVP predictive model for evaluating product success at checkpoints while the experiment is in progress (e.g. 4 to 6 weeks into a 12-week experiment) and at the end of the experiment.
  4. Develop and populate a scorecard for each UPC tested. Potentially use the scorecard to develop success prediction/early assessment criteria.
  5. Participate in checkpoints during the project to confirm direction and timing are on track and to answer/discuss any questions that come up once you are involved in the data.
  6. After gaining an understanding of the data (i.e. near the end of the project), participate in a 2-3 hour work session with the marketing team to contribute to formulating a client-facing product innovation evaluation paradigm (the team will provide examples of the desired end state prior to the session).
  7. At the conclusion of the project, submit a plan for continuing to build out the MVP model created in item 3 above (identify opportunities and recommend changes to the data collection or cleaning process, changes to the test- and control-store assignment methodology, and additional advanced analytical opportunities that become available once a certain amount of additional data is acquired).
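The in-flight checkpoint read in item 3 could start very simply before any formal model exists. The sketch below scores a running test's cumulative units-per-store velocity against the midpoint between historical winners and losers; the field names (`velocity_by_week`, `success_score`) and the 4-5/1-2 score cutoffs are hypothetical placeholders, not the actual schema or criteria:

```python
from statistics import mean

def early_read(current_velocity: float, week: int, history: list) -> str:
    """Crude centroid-style in-flight read: compare a test's cumulative
    units-per-store velocity at `week` against historical tests that ended
    with a high (>= 4) vs. low (<= 2) qualitative success score."""
    wins = [h["velocity_by_week"][week] for h in history if h["success_score"] >= 4]
    losses = [h["velocity_by_week"][week] for h in history if h["success_score"] <= 2]
    midpoint = (mean(wins) + mean(losses)) / 2.0
    return "trending success" if current_velocity >= midpoint else "trending below"

# Synthetic historical record, for illustration only
history = [
    {"velocity_by_week": {6: 12.0}, "success_score": 5},
    {"velocity_by_week": {6: 10.0}, "success_score": 4},
    {"velocity_by_week": {6: 3.0},  "success_score": 1},
]
print(early_read(8.0, 6, history))  # -> trending success (8.0 >= midpoint 7.0)
```

A real version would add uncertainty bands around the midpoint, and could graduate to a logistic regression once enough completed tests accumulate.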

About the Data

  • Housed in one Azure cloud database (SQL)
  • The total database is about 425 GB
  • Contains 15 discrete tests in various product categories; some contain related product lines in the same category, most contain multiple UPCs, and some have an A/B test component for price, size, or packaging
  • Each test dataset contains 50 fields and ranges from 91,000 to 545,000 records
  • Contains a qualitative success measure (ranked from 1 to 5) for each UPC tested
  • The number of retail locations and weeks of sales varies. The field ‘test week’ enables comparison across tests by normalizing the disparate calendar dates.
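The ‘test week’ field is what makes cross-test pooling possible: records from different calendar periods line up on a shared relative axis. A minimal sketch of that alignment, with illustrative field names rather than the actual 50-field schema:

```python
from collections import defaultdict

# Records from two tests run in different calendar periods, keyed to a
# relative test week so they can be compared on a common axis.
records = [
    {"test_id": "A", "test_week": 1, "units": 40},
    {"test_id": "B", "test_week": 1, "units": 55},
    {"test_id": "A", "test_week": 2, "units": 48},
    {"test_id": "B", "test_week": 2, "units": 61},
]

by_week = defaultdict(list)
for r in records:
    by_week[r["test_week"]].append(r["units"])

# Average units per test week, pooled across tests
avg_by_week = {week: sum(u) / len(u) for week, u in sorted(by_week.items())}
print(avg_by_week)  # {1: 47.5, 2: 54.5}
```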


Proven Expertise

  • Advanced degree (MS, PhD, or equivalent) in statistics or a closely related field, along with 5+ years of practical experience in the statistical and methodological design and evaluation of experiments, preferably in a consumer-facing industry
  • An understanding of the key concepts of consumer behavior
  • Experience with A/B testing and control group concepts
  • Proven expertise with data handling, processing, statistical and analytical skills
  • Proven proficiency in applying advanced statistical methods
  • Experience using modern data analysis tools (such as R, Python, Hive/Presto)
  • Familiarity with consumer products analytics, the effect of promotions on sales, and CPG industry evaluation terms like $ share and unit share.
  • Ability to analyze data from a wide variety of research designs
  • Ability to think creatively and provide thoughtful, agile insights
  • Ability to deal creatively with ambiguity, draw conclusions and make recommendations in the face of unknowns
  • Project management skills to complete project phases on time and within budget according to a written plan of work.

Tags: A/B testing Agile Azure Data analysis KPIs Machine Learning Mathematics PhD Python R Research SQL Statistics Testing

Perks/benefits: Career development

Regions: Remote/Anywhere North America
Country: United States
