Lead Data Engineer

United States, Remote

Very Good Security, Inc.

Posted 1 month ago

At Very Good Security (“VGS”) we are on a mission to protect the world’s sensitive data, and we’d love to have you along for this journey.
VGS was founded by highly successful repeat entrepreneurs and is backed by world-class investors like Goldman Sachs, Andreessen Horowitz, and Visa. We are building an amazing global team spread across four cities. As a young and growing company, we are laser-focused on delighting our customers and hiring talented, entrepreneurial-minded individuals.

What you get to do:

  • Develop and automate large scale, high-performance data processing systems to drive VGS’s business growth and improve the product experience
  • Define data engineering architecture, tooling, and standards across the company
  • Understand and influence logging to support our data flow, architecting logging best practices where needed
  • Engage in all phases of the software lifecycle: design, implement, test, deploy, and support services in production
  • Maintain a culture of code quality through rigorous testing, automation, and code reviews
  • Be proactive and innovative: we rely on your feedback to build a world-class product
  • Be a part of a team that believes in the core values of transparency, collaboration, grit, and humility; in going above and beyond what is required in order to do the right thing for our customers and the company; and in having fun while doing all this!

What you bring to the role:

  • BS or equivalent, and at least 5 years of software development experience, ideally at a SaaS/product development company
  • Experience architecting, building, and supporting scalable, fault-tolerant batch, real-time, and/or near-real-time data pipelines
  • Experience with big data ecosystem tools such as Kafka, Protobuf/Thrift, and Spark/Flink/Storm
  • Experience with data-flow programming tools such as Apache NiFi, Apache Beam, etc.
  • Strong data modeling experience with both relational and NoSQL databases, and hands-on experience with data warehouses, preferably AWS Redshift
  • Expert knowledge of SQL and Python
  • We follow modern DevOps practices, so experience with Docker, Kubernetes, Ansible, and Terraform is a plus
  • An understanding of how to build for scale, paired with the ability to make pragmatic choices and move quickly; a strong bias for action
  • Comfortable working in roles that at times may be customer-facing
  • Excellent written and verbal communication skills
  • Experience successfully working with distributed teams across time zones is a plus
Job tags: AWS Big Data Engineering Kafka Kubernetes NoSQL Python Redshift Security Spark SQL