Sr. Staff Engineer, Data
Taipei, Taiwan
Today, there are more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one built in the cloud that follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security.
Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events (pre and hopefully post-Covid) and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive, and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.
About the role
Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience.
The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization, and more. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products.
We are looking for skilled engineers experienced in building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing, and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems; our customers depend on us to provide accurate, real-time, and fault-tolerant solutions to their ever-growing data needs. This is a hands-on, impactful role that will help lead the development, validation, publishing, and maintenance of logical and physical data models supporting those environments.
What's in it for you
- You will be part of a growing team of renowned industry experts in the exciting space of data and cloud analytics
- Your contributions will have a major impact on our global customer base and across the industry through our market-leading products
- You will solve complex, interesting challenges and improve the depth and breadth of your technical and business skills
What you will be doing
- Designing and implementing planet-scale distributed data platforms, services, and frameworks, including solutions for high-volume, complex data collection, processing, transformation, and analytical reporting
- Working with the application development team to implement data strategies, build data flows and develop conceptual data models
- Understanding and translating business requirements into data models supporting long-term solutions
- Analyzing data system integration challenges and proposing optimized solutions
- Researching to identify effective data designs, new tools and methodologies for data analysis
- Providing guidance and expertise to the development community in effective implementation of data models and building high throughput data access services
- Providing technical leadership in all phases of a project from discovery and planning through implementation and delivery
Required skills and experience
- 10+ years of hands-on experience in architecture, design or development of enterprise data solutions, applications, and integrations
- Ability to conceptualize and articulate ideas clearly and concisely
- Excellent algorithm, data structure, and coding skills, with programming experience in Java, Python, or Scala
- Proficiency in SQL
- Experience building products using at least one technology from each of the following categories:
- Relational stores (e.g., Postgres, MySQL, or Oracle)
- Columnar or NoSQL stores (e.g., BigQuery, ClickHouse, or Redis)
- Distributed processing engines (e.g., Apache Spark, Apache Flink, or Celery)
- Distributed queues (e.g., Apache Kafka, AWS Kinesis, or GCP Pub/Sub)
- Experience with software engineering standard methodologies (e.g., unit testing, code reviews, design documents)
- Experience working with GCP, Azure, AWS, or similar cloud platform technologies is a plus
- Excellent written and verbal communication skills
- Bonus points for contributions to the open source community
Education
- BSCS or equivalent required, MSCS or equivalent strongly preferred
#LI-SC3
Netskope is committed to implementing equal employment opportunities for all employees and applicants for employment. Netskope does not discriminate in employment opportunities or practices based on religion, race, color, sex, marital or veteran status, age, national origin, ancestry, physical or mental disability, medical condition, sexual orientation, gender identity/expression, genetic information, pregnancy (including childbirth, lactation and related medical conditions), or any other characteristic protected by the laws or regulations of any jurisdiction in which we operate.
Netskope respects your privacy and is committed to protecting the personal information you share with us, please refer to Netskope's Privacy Policy for more details.