Head of Data Science and Analytics

San Mateo, CA - Remote

Applications have closed

Plato Systems

The Spatial Intelligence Platform for Manufacturing Operations


We are a Series A startup building perception systems for autonomy. We are based in the San Francisco Bay Area, funded by NEA, and our core team includes faculty entrepreneurs (Stanford) and industry veterans (Uber, Apple, Amazon Lab126, Rohde & Schwarz) who have shepherded signal processing and machine learning innovations into large-scale software for location improvement and safety at Uber, led the development of state-of-the-art computer vision technologies that shipped on millions of Amazon devices, and delivered zero-to-one product experiences at Uber and Box. Our core product grew out of 5+ years of university R&D by our co-founders. You can find out more about us by visiting our website and our Notion page.

Our mission and team expertise span beyond software to advanced sensor systems, algorithms, embedded systems, signal processing, and machine learning. Our team builds and deploys edge software and cloud services for real-time, customer-facing products as well as internal big data tools. We look for people with deep expertise and experience in one of these areas, and with the intellectual curiosity to interact with, learn from, and teach world-class experts in areas outside their own.

We currently have a full-time opportunity as Head of Data Science and Analytics at Plato Systems. In this role, you will sit at the intersection of data science, engineering, and product, and work collaboratively with different teams to transform data generated by our fleet of edge devices into data products that are consumable by our end customers.

Responsibilities

  • Manage a team of software, data science, and algorithm engineers to generate and deliver data products that provide actionable insights to our industrial customers
  • Work with data labeling and data ops teams to ensure QA/QC of data products from our deployed units.
  • Work with our infrastructure team to ensure the required tools, APIs, etc. are in place to support reliable, scalable deployment of insight-generation modules
  • Deliver insightful data products under tight deadlines.
  • Be hands-on in devising requirements for tooling and software development and in performing code reviews
  • Work in a data-driven environment, drive process improvement, and work with stakeholders to translate high-level business goals into working software solutions and customer-facing outputs.
  • Take end-to-end ownership of definition, development, evaluation, integration, testing, documentation, and the scalability and availability of deployed services.
  • Develop and improve the current data architecture, data quality, monitoring, and data availability.
  • Prepare technical roadmaps

Required Qualifications

  • Bachelor's, Master's, or PhD in Computer Science, applied science, or a related engineering field.
  • 6+ years of demonstrated experience working on data products and/or managing technical software teams
  • End-to-end ownership experience and prior experience building tools and platforms from zero to one
  • Self-starter: a methodical, motivated, responsible, innovative, and technology-driven person who performs well both solo and as a team member
  • A proactive problem solver with strong communication and project management skills to relay findings and solutions to both technical and non-technical audiences

Preferred Qualifications

  • Experience with one or more of time-series analysis, statistical analysis, anomaly detection, and computer vision fundamentals
  • Demonstrated prior exposure to data warehouse architectures, infrastructure components, ETL/ELT, and reporting/analytics/dashboard tools.
  • Strong skills in Git, Docker, Airflow, and real-time ETL pipelines.
  • Experience working with data infrastructure (AWS services, etc.) and providing frameworks to rationalize and simplify both real-time and batch data pipelines.
  • Prior experience with modern data technology stacks such as Databricks, Airflow, or similar solutions.
  • Prior experience with data visualization tools and packages.
  • Comfort in working with business teams to gather requirements and gain a deep understanding of varied datasets

Perks/benefits: Startup environment

Regions: Remote/Anywhere North America
Country: United States