Forward Deployed Data Engineering Contractor - TS/SCI Full Scope Polygraph

Washington, DC

Sayari

Get instant access to public records, financial intelligence and structured business information on over 455 million companies worldwide.


About Sayari: Sayari is the counterparty and supply chain risk intelligence provider trusted by government agencies, multinational corporations, and financial institutions. Its intuitive network analysis platform surfaces hidden risk through integrated corporate ownership, supply chain, trade transaction and risk intelligence data from over 250 jurisdictions. Sayari is headquartered in Washington, D.C., and its solutions are used by thousands of frontline analysts in over 35 countries.
Our company culture is defined by a dedication to our mission of using open data to enhance visibility into global commercial and financial networks, a passion for finding novel approaches to complex problems, and an understanding that diverse perspectives create optimal outcomes. We embrace cross-team collaboration, encourage training and learning opportunities, and reward initiative and innovation. If you like working with supportive, high-performing, and curious teams, Sayari is the place for you.
JOB RESPONSIBILITIES
  • Working with government customers in the DC area to help them ETL their data into a format usable by Sayari’s on-premise offering
  • Working with customers pre-sales to help them design solutions focused on Sayari’s product offerings for Entity Resolution and bulk data
  • Working with customers post-sale to ensure that they are getting value from Sayari’s bulk data product
  • Managing the process of producing customized bulk data products for customers and bulk data samples for prospective customers

Required Skills & Experience:

  • Holds a TS/SCI Full Scope Polygraph clearance and has experience working in classified environments
  • Professional experience with Python and a JVM language (e.g., Scala) 
  • 4+ years of experience designing and maintaining ETL pipelines 
  • Experience using Apache Spark
  • Experience with SQL (e.g., Postgres) and NoSQL (e.g., Cassandra, Elasticsearch) databases
  • Experience working on a cloud platform like GCP, AWS, or Azure 
  • Experience working collaboratively with git

Desired Skills & Experience:

  • Understanding of Docker/Kubernetes 
  • Understanding of or interest in knowledge graphs
  • Experience supporting and working with internal teams and customers in a dynamic environment
  • Passionate about open source development and innovative technology 


Tags: AWS Azure Cassandra Docker Elasticsearch Engineering ETL GCP Git Kubernetes NoSQL Open Source Pipelines PostgreSQL Python Scala Spark SQL

Perks/benefits: Career development

Region: North America
Country: United States
