Senior Software Engineer (ML/Data Ops)
Remote - Austin, Texas, United States
The sudden rise of new remote work models has triggered an increase in the adoption of collaboration tools, an acceleration of digital transformations, and the need for visibility into how work gets done. Legacy metrics for assessing workplace productivity are no longer sufficient for making the right business decisions in today's reality. Companies must redefine their tech stacks and data-driven dashboards to gain insight into how to inspire employees, boost team efficiency, and promote positive outcomes, all while building a culture based on trust and transparency. Redefining how performance is measured is one of the most important steps an organization can take to modernize its workplace. And at ActivTrak, we're taking on this challenge.
ActivTrak is a product-led, innovative software company that introduced its award-winning workforce analytics platform in 2015. Our cloud-based platform provides productivity insights into how teams work, improving the employee and customer experience while also enabling better business outcomes. At ActivTrak, we recognize the powerful link between these two concepts, and we're on a mission to understand it better every day. Alongside 8,500+ paying customers, our team uses the platform internally to assess team and organizational performance, hone product features, develop best practices, streamline processes, invest in new innovations, and promote a culture of immediate feedback and transparency.
We are a fast-growing, agile company with a forward-thinking, inclusive culture. Our teams are encouraged to collaborate daily to solve challenges, create and champion new ideas, and execute initiatives that help global customers and their modern workforces succeed by working better together. We've grown our team to over 100 people, raised over $70M from leading venture capital firms such as Sapphire Ventures and Elsewhere Partners, and have experienced triple-digit growth with ARR in eight figures. Customers love our product, with an average rating of 4.7/5 stars across 800+ online reviews.
What you will do:
As a senior engineer on our Data Science team, you will be responsible for building and managing the data and feature pipelines for our data science/ML initiatives. This entails sourcing data, converting data into features, and managing those features and the models that employ them as part of our feature store infrastructure (feature engineering). You will have a background in building data pipelines that collect, cleanse, and transform data for use in the initiatives that bring unique insights to our users. As part of the larger ActivTrak team, you will work in a close-knit group that is expected to write code that scales to hundreds of millions of events per day, leverage our petabytes of existing data, and support our 150,000+ users as we disrupt the productivity analytics industry. You will collaborate with teams across engineering and the business to help provide insights and answers to our customers' most pressing questions.
You are pragmatic. Working in a start-up environment requires that you make the right trade-offs to move projects forward while balancing constraints and input. We're looking for a candidate who is pragmatic and makes decisions based on data.
You care deeply about your own productivity. By understanding your productivity and the productivity of those around you, you gain better insight into our customers' challenges. We're looking for someone who wants to talk openly about their own productivity successes to drive innovation and elevate performance.
You focus on clear and open communication. Coordinating with so many teams means you'll be responsible for a lot of moving parts. You'll need to be proactive with status updates that have the appropriate level of detail for the audience.
What you'll bring:
- Experience building ETL pipelines in Python or R
- Experience bringing Machine Learning to production at scale
- Experience with batch and stream data processing
- Demonstrably strong SQL skills
- Proficient with pandas or dplyr
- Experience with parallel dataframes (e.g., Dask or Spark)
- Experience with feature engineering, standardization, versioning, and storage
- API design/implementation (e.g., microservices, REST)
- Experience in cloud environments (e.g. Google Cloud Platform, AWS)
- Docker/Containers, Kubernetes
- Serverless deployment scenarios
- Agile development
- Knowledge of SDLC and best practices
- Experience working with CI/CD systems
- Experience with a source control system, such as Git
- Position is remote within US
- Minimal travel
- Limited physical demands
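To make the day-to-day concrete, here is a minimal sketch of the kind of pipeline work this role involves: cleansing raw event data and transforming it into per-user features with pandas. The schema and column names are hypothetical examples, not ActivTrak's actual data model.

```python
import pandas as pd

# Hypothetical raw activity-event log (illustrative schema only).
events = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 2],
    "app": ["editor", "browser", "editor", "chat", "chat"],
    "duration_sec": [1200, 300, 600, 150, 450],
})

# Cleanse: drop records with non-positive durations (a simple data-quality rule).
events = events[events["duration_sec"] > 0]

# Transform: aggregate raw events into per-user features suitable for an ML model.
features = (
    events.groupby("user_id")
    .agg(total_sec=("duration_sec", "sum"),
         distinct_apps=("app", "nunique"))
    .reset_index()
)
```

In production, a pipeline like this would also handle batch or stream ingestion and register the resulting features in a feature store for versioning and reuse across models.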
We have seven foundational values that are core to who we are and how we work:
- Customer-focused: Our customers are the lifeblood of the business
- Respectful: Treat everyone with respect, decency and kindness
- Innovative: Be bold; experiment and learn/fail fast
- Data-driven: Measure what matters most
- Open and direct: Engage in open and direct dialogue across teams
- Accountable: Be accountable to each other, our customers, our partners and yourself
- Execution-oriented: We value the spirit of debate, new ideas and fast decision-making
This is an incredible opportunity to embark on an exciting journey with an early-stage, dynamic VC-backed company. If you have a positive attitude towards the urgency, risk, and challenges that come with working in a startup environment, then you will be a great fit! To see the many faces of ActivTrak, visit https://activtrak.com/our-team/.
ActivTrak is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. ActivTrak does not discriminate in employment on the basis of race, color, religion, sex, national origin, political affiliation, sexual orientation, marital status, disability, age, protected veteran status, gender identity or any other factor protected by applicable federal, state or local laws.