DataOps Engineer
San Pedro Garza García, Mexico
Applications have closed
Spectrum Effect
Spectrum-NET helps mobile operators drive RF interference mitigation, improve network performance, and maximize spectrum value.
Company Description
Are you a detail-oriented self-starter with a high level of technical curiosity? Are you driven to become an expert in the design and implementation of data pipelines? Are you passionate about ensuring the optimal software deployment for our customers’ needs? Do you want to be part of an exciting scale-up with massive upside potential? Come join us at Spectrum Effect!
Spectrum Effect’s mission is to solve the most challenging and costly problems in the wireless industry through innovation and automation. Our team is passionate about creating disruptive technologies, developing solutions with engineering excellence, and delivering substantial value to our customers. Protected by 30 patents and deployed by leading mobile operators across the globe, our Spectrum-NET software solution performs automated ML-driven analysis of radio access networks. Spectrum-NET is a cloud-native, horizontally scalable solution based on a Kubernetes-orchestrated microservices architecture.
Our 50-person team, located in San Pedro Garza García, México, enjoys ownership in our private company through stock options and very competitive salaries. This is an amazing opportunity to join an emerging leader in the ML-driven automation space and make a profound impact on the mobile industry.
Job Description
As a DataOps Engineer, you will be responsible for developing and maintaining Apache NiFi pipelines and overseeing day-to-day operations of our ETL pipelines. You will audit XML and CSV data and transform Excel, CSV, XML, YAML, and JSON files through scripting. You will also create and define connections to our clients’ endpoints, such as SQL-like databases, data lakes, and other sources.
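The file-format transformations described above can be sketched in Python using only the standard library. This is a minimal illustration of the kind of scripting involved, not Spectrum Effect's actual tooling; the field names and sample data are hypothetical:

```python
import csv
import io
import json
import xml.etree.ElementTree as ET

def csv_to_records(csv_text):
    """Parse CSV text into a list of dicts keyed by the header row."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def xml_to_records(xml_text, row_tag):
    """Flatten each <row_tag> element into a dict of its child tags."""
    root = ET.fromstring(xml_text)
    return [{child.tag: child.text for child in row} for row in root.iter(row_tag)]

# Hypothetical RF measurement data arriving in two source formats.
csv_data = "cell_id,rssi\nA1,-97\nB2,-88\n"
xml_data = "<cells><cell><cell_id>C3</cell_id><rssi>-91</rssi></cell></cells>"

# Normalize both sources into one JSON payload for downstream services.
records = csv_to_records(csv_data) + xml_to_records(xml_data, "cell")
as_json = json.dumps(records)
```

In a production pipeline this normalization step would typically run inside a NiFi processor or a script invoked by one, with the JSON output routed onward via Kafka.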
Responsibilities
- Design and create data pipelines using Apache NiFi, Python, and Apache Kafka.
- Integrate different data sources for extract, transform, and load (e.g., SQL-like databases, data lakes, XML, CSV).
- Monitor data processing steps via Kibana + Elasticsearch and alert team members to data anomalies.
- Maintain and optimize existing data pipelines to reduce inefficiencies, improve throughput and reliability, and optimize hardware resource usage.
- Automate repeated data management tasks to reduce toil.
- Provide feedback and improvement ideas to the software development data pipeline team to continually improve performance and usability.
- Provide initial troubleshooting of data processing errors by reviewing service logs, hardware alarms, DB health, and resource usage.
- Provide data processing status updates and maintain historical records of system performance.
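The anomaly-alerting responsibility above could rest on a check as simple as a z-score test over recent throughput. This is a hypothetical sketch of the idea; in practice the alerting would be driven by Kibana/Elasticsearch dashboards, and the counts and threshold here are invented for illustration:

```python
from statistics import mean, stdev

def is_anomalous(history, latest, z_threshold=3.0):
    """Flag a value deviating more than z_threshold standard
    deviations from the historical mean (a basic z-score test)."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu  # flat history: any change is anomalous
    return abs(latest - mu) / sigma > z_threshold

# Hypothetical hourly record counts from one pipeline stage.
hourly_counts = [1000, 1020, 980, 1010, 995]
```

A sudden drop (say, 300 records against that history) would trip the check and prompt an alert to the team, while normal jitter would pass silently.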
Qualifications
What you need to have:
- Bachelor’s Degree in Computer Science, Engineering, or a related field.
- Apache NiFi, Apache Kafka, and Python experience.
- Kibana and Elasticsearch or other monitoring tool experience.
- Hardware monitoring experience.
- Linux shell and command line scripting experience.
- AWS Cloud experience.
- Basic Kubernetes experience.
Additional Information
Apply now! Nothing ventured, nothing gained.