Data Engineer

Lviv, Ukraine

Very Good Security, Inc.

Posted 1 week ago

At Very Good Security (“VGS”) we are on a mission to protect the world’s sensitive data - and we’d love to have you along for this journey. VGS was founded by highly successful repeat entrepreneurs and is backed by world-class investors like Goldman Sachs, Andreessen Horowitz, and Visa. We are building an amazing global team spread across four cities. As a young and growing company, we are laser-focused on delighting our customers and hiring talented and entrepreneurial-minded individuals.
We’re looking for a Senior Data Engineer with an equal flair for creative problem solving, enthusiasm for new technologies, and a desire to contribute to product development.


Requirements:

  • 5+ years of software development experience, ideally at a product company.
  • 3+ years of experience architecting, building, and supporting scalable, fault-tolerant batch, real-time, and/or near-real-time data pipelines.
  • 3+ years of experience working with big data ecosystem tools such as Kafka, Protobuf/Thrift, and Spark/Flink/Storm 2.0.
  • 3+ years of experience with data-flow programming tools such as Apache NiFi and Apache Beam.
  • Strong data modeling experience with both relational and NoSQL databases.
  • Hands-on experience with data warehouses, preferably AWS Redshift.
  • Expert knowledge of SQL and Python. 
  • Knowledge and practical experience with Docker, Terraform/CloudFormation, and the AWS stack: EC2, Kinesis, Lambda, etc.
  • Ability to work independently to deliver well-designed, high-quality, and testable code on time.
  • English: upper-intermediate/advanced.

Would be a plus:

  • Java or Golang experience
  • Experience working with CI/CD tools (e.g., CircleCI, Jenkins).
  • Understanding and hands-on experience with Kubernetes
  • Open source projects on GitHub


Responsibilities:

  • Create and maintain an optimal data pipeline architecture.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Maintain a strong code culture of rigorous testing and automation. Improve test coverage of code that you do not own.
  • Be proactive and innovative. We rely on your feedback to build product expertise.
  • Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.

What’s in it for you:

  • Silicon Valley experience;
  • 3 weeks of paid vacation and 2 weeks of days off, plus sick leave;
  • Hackers’ days;
  • Corporate retreats;
  • Paid lunches and parking;
  • Coverage of professional learning: conferences, training, and other events;
  • Compensation for sports activities;
  • English Speaking Club with native speakers;
  • Medical insurance;
  • VGS stock options.