Software Engineer III, Big Data

San Francisco, California, United States

Applications have closed

6sense is a Predictive Intelligence Engine that is reimagining how B2B companies do sales and marketing. It uses big data at scale, advanced machine learning, and predictive modeling to find buyers and predict what they will purchase, when, and how much.

6sense helps B2B marketing and sales organizations fully understand the complex ABM (account-based marketing) buyer journey. By combining intent signals from every channel with the industry’s most advanced AI predictive capabilities, it is finally possible to predict account demand and optimize demand generation in an ABM world. Equipped with the power of AI and the 6sense Demand Platform™, marketing and sales professionals can uncover, prioritize, and engage buyers to drive more revenue.

https://techcrunch.com/2021/03/30/6sense-raises-125m-at-a-2-1b-valuation-for-its-id-graph-an-ai-based-predictive-sales-and-marketing-platform/  

A Core Data Software Engineer at 6sense will have the opportunity to:

  • Apply distributed computing and map-reduce knowledge to compute, dedupe, and derive insights from billions of records daily (a minimal sketch of this style of job follows this list)
  • Invent noise-tolerant algorithms that improve data quality and coverage
  • Work on scaling issues to 10x our data-handling capability over the coming year
  • Write custom UDFs, UDAFs, and UDTFs to simplify complex operations
  • Design and implement tools to scrape data, validate data, and semi-automate human feedback
  • Significantly strengthen your existing skills in statistics, data analysis, and programming
  • Contribute intellectually to the software, the data, our processes, and the growth of other team members
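
For candidates who want a concrete picture, here is a minimal sketch of the kind of map-reduce dedupe work described above, written in Java against the Hadoop MapReduce API. The RecordDedupe class name, the tab-separated record format, and the choice of the first field as the dedupe key are illustrative assumptions, not 6sense's actual pipeline.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * Dedupe tab-separated records by their first field (an assumed record key).
 * The mapper emits (key, full record); the reducer keeps one record per key.
 */
public class RecordDedupe {

    public static class KeyMapper extends Mapper<Object, Text, Text, Text> {
        private final Text outKey = new Text();

        @Override
        protected void map(Object offset, Text line, Context context)
                throws IOException, InterruptedException {
            // Assumption: the first tab-separated column uniquely identifies a record.
            String[] fields = line.toString().split("\t", 2);
            outKey.set(fields[0]);
            context.write(outKey, line);
        }
    }

    public static class FirstSeenReducer extends Reducer<Text, Text, Text, NullWritable> {
        @Override
        protected void reduce(Text key, Iterable<Text> records, Context context)
                throws IOException, InterruptedException {
            // All duplicates for a key arrive at the same reducer; keep one, drop the rest.
            for (Text record : records) {
                context.write(record, NullWritable.get());
                break;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "record-dedupe");
        job.setJarByClass(RecordDedupe.class);
        job.setMapperClass(KeyMapper.class);
        job.setReducerClass(FirstSeenReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

A job like this would be packaged into a jar and launched with `hadoop jar dedupe.jar RecordDedupe <input dir> <output dir>`; in practice the same shuffle-by-key idea is often expressed in Hive or Spark instead.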

Required qualifications and must-have skills

  • 3+ years of professional, recent coding experience in Java (ready to code in week 1) 
  • 1+ years of professional coding experience in Python 
  • Experience with SQL, including joins, GROUP BY, and analytic (window) functions (ready to analyze data in week 1)
  • Comfortable with Unix / Linux command line 
  • Analytical and problem-solving skills 
  • Familiarity with basic statistics and histograms (a small histogram sketch follows this list)
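
To illustrate the statistics and histograms bullet, here is a small, self-contained Java sketch that buckets a batch of values into fixed-width bins and prints the counts; the sample scores and the bin width of 1.0 are made up for illustration.

```java
import java.util.Map;
import java.util.TreeMap;

/** Bucket a batch of numeric values into fixed-width bins and print the counts. */
public class HistogramSketch {

    /** Returns a map from the lower edge of each bin to the number of values in it. */
    static Map<Double, Long> histogram(double[] values, double binWidth) {
        Map<Double, Long> bins = new TreeMap<>();
        for (double v : values) {
            double lowerEdge = Math.floor(v / binWidth) * binWidth;
            bins.merge(lowerEdge, 1L, Long::sum);
        }
        return bins;
    }

    public static void main(String[] args) {
        // Made-up values standing in for, say, per-account activity scores.
        double[] scores = {0.3, 1.2, 1.7, 2.1, 2.4, 2.6, 5.0, 5.1, 9.8};
        histogram(scores, 1.0).forEach((edge, count) ->
                System.out.printf("[%.1f, %.1f): %d%n", edge, edge + 1.0, count));
    }
}
```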

Nice-to-have skills

  • Understanding of the map-reduce paradigm 
  • Familiarity with Django, Django admin, Django REST Framework 
  • Experience with Big Data Platforms like Hadoop / Hive / Spark 
  • Experience writing Hive / Presto UDFs in Java (see the UDF sketch after this list)
  • Advanced SQL-on-Hadoop concepts like partitioning, clustering, and skewed joins
  • Familiarity with Docker and container platforms like Mesos and Kubernetes
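
To make the Hive / Presto UDF bullet concrete, here is a minimal sketch of a classic (non-generic) Hive UDF in Java that normalizes a free-form domain string by lower-casing it and stripping a leading "www.". The class name, the normalization rule, and the registration snippet below are illustrative assumptions rather than an actual 6sense UDF; Presto uses a separate plugin API not shown here.

```java
import org.apache.hadoop.hive.ql.exec.Description;
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

/** Hive UDF: normalize a free-form domain string so duplicate accounts line up. */
@Description(name = "normalize_domain",
             value = "_FUNC_(str) - lower-cases a domain and strips a leading 'www.'")
public class NormalizeDomainUDF extends UDF {

    public Text evaluate(Text input) {
        if (input == null) {
            return null;                      // pass NULLs through unchanged
        }
        String domain = input.toString().trim().toLowerCase();
        if (domain.startsWith("www.")) {
            domain = domain.substring(4);     // "www.Example.COM" -> "example.com"
        }
        return new Text(domain);
    }
}
```

Once packaged into a jar, it could be registered from a Hive session with `ADD JAR normalize_domain.jar;` followed by `CREATE TEMPORARY FUNCTION normalize_domain AS 'NormalizeDomainUDF';`, then called like any built-in function in a SELECT (jar, function, and table names here are hypothetical).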

Interpersonal Attributes 

  • You can work independently but coordinate effectively with your team 
  • You take ownership of projects and drive them to conclusion  
  • You’re a good communicator and are capable of not just doing the work, but teaching others and explaining the “why” behind complicated technical decisions  
  • You aren’t afraid to roll up your sleeves: this role will evolve over time, and we’ll want you to evolve with it!

Tags: Big Data Data analysis Django Docker Hadoop Kubernetes Linux Machine Learning Predictive modeling Python Spark SQL Statistics

Perks/benefits: Career development

Region: North America
Country: United States
