Data Engineer (Python)
US - Remote
Applications have closed
Nayya
Interacting with benefits is confusing. Transform open enrollment. Personalize decisions. Welcome to the benefit experience your employees deserve.
About Nayya
At Nayya, we believe there’s a better way to choose benefits: a more transparent, less confusing way for employees to take control of their health and financial potential. Powered by billions of data points and machine learning, our benefits experience platform delivers personalized decision support and guidance during open enrollment, new employee onboarding, qualifying life events, and in the moments that matter all year round. Choosing benefits is one of the most stressful and challenging situations consumers face – and we see that as an opportunity to build an innovative response that can help millions of Americans gain the control and understanding they deserve.
Your Role
Work directly with the Nayya health data infrastructure team to power a rich user experience by extracting data from external sources via APIs, bulk file ingestion, and web scraping. In addition to maintaining and optimizing data extraction integrations, you will help improve our overall architecture as we scale storage and data access for a growing user base.
You will also help improve the usability of our tools by documenting technology research and solution designs, and by writing implementation plans for releasing features into test and production.
Responsibilities
- Work with a small team of engineers to maintain existing integrations and build new integrations to third-party data sources.
- Maintain and improve an existing Python tech stack with a focus on security and scalability.
- Work with the data science team to build and maintain data and deployment pipelines.
- Recommend monitoring and analytics tools to automate common data needs and visibility.
Requirements
- Three or more years of Python experience
- Experience with AWS cloud services
- Experience building and maintaining APIs (e.g., Flask, FastAPI)
- SQL knowledge, including intermediate to advanced querying techniques
- Ability to work remotely and communicate effectively in a distributed environment
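As a flavor of the intermediate-to-advanced SQL called for above, a query combining a CTE with window functions might look like the following sketch, run against an in-memory SQLite database. The table, columns, and data are invented for illustration; they are not Nayya's actual schema.

```python
# Hypothetical example: rank each employee's benefit elections by premium
# and compute their total monthly cost, using a CTE plus window functions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE enrollments (employee TEXT, plan TEXT, premium REAL);
INSERT INTO enrollments VALUES
  ('ana', 'PPO', 320.0),
  ('ana', 'Dental', 35.0),
  ('bo',  'HDHP', 180.0);
""")

rows = conn.execute("""
WITH ranked AS (
  SELECT employee, plan, premium,
         RANK() OVER (PARTITION BY employee ORDER BY premium DESC) AS rk,
         SUM(premium)  OVER (PARTITION BY employee) AS total
  FROM enrollments
)
SELECT employee, plan, rk, total FROM ranked ORDER BY employee, rk
""").fetchall()

for row in rows:
    print(row)
```

Window functions like `RANK() OVER (PARTITION BY ...)` avoid the self-joins that the same per-group ranking would otherwise require.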
Nice to haves:
- Familiarity with browser-automation tools such as Selenium or Puppeteer
- ETL development/maintenance experience
- Experience with Amazon RDS and big data storage
- Experience with document stores (cloud storage, document databases, etc)
- Experience with other languages such as Go, Ruby, and JavaScript
- Experience with Terraform
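To illustrate the kind of ETL development listed above, a toy extract-transform-load pass over a CSV feed into SQLite might be sketched as follows. The feed contents, field names, and table are all invented for the example.

```python
# Toy ETL sketch: extract rows from a CSV feed, transform them (normalize
# names, parse amounts), and load them into SQLite. All names are invented.
import csv
import io
import sqlite3

RAW_FEED = "name,amount\n Ana ,12.50\nBO,7\n"

def extract(text):
    """Parse the raw CSV text into a list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Strip/lowercase names and coerce amounts to floats."""
    return [(r["name"].strip().lower(), float(r["amount"])) for r in rows]

def load(rows, conn):
    """Insert the cleaned rows into the target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS feed (name TEXT, amount REAL)")
    conn.executemany("INSERT INTO feed VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_FEED)), conn)
print(conn.execute("SELECT * FROM feed ORDER BY name").fetchall())
# → [('ana', 12.5), ('bo', 7.0)]
```

Real pipelines would add schema validation, idempotent loads, and orchestration, but the extract/transform/load separation is the core pattern.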
COVID-19 Policy: We require proof of vaccination and booster for COVID-19, except those with medical or religious exemptions.
Why Join Nayya?
- Be an early employee of a quickly growing, VC-backed startup - grow with us!
- Have a meaningful impact on a platform that is scaling very rapidly
- Contribute to a values-based culture with an emphasis on empowerment and autonomy
- Work in a highly collaborative, cross-functional environment
- Benefits include competitive pay, employer-paid healthcare, and stock options
Nayya is proud to be an Equal Employment Opportunity employer. We do not discriminate based upon race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.