DataOps Engineer

San Pedro Garza García, Mexico

Applications have closed

Spectrum Effect

Spectrum-NET helps mobile operators drive RF interference mitigation, improve network performance, and maximize spectrum value.


Company Description

Are you a detail-oriented self-starter who possesses a high level of technical curiosity? Are you driven to become an expert in the design and implementation of data pipelines? Are you passionate about ensuring the optimal software deployment for our customers’ needs? Do you want to be part of an exciting scale-up with massive upside potential? Come and join us at Spectrum Effect!

Spectrum Effect’s mission is to solve the most challenging and costly problems in the wireless industry through innovation and automation. Our team is passionate about creating disruptive technologies, developing solutions with engineering excellence, and delivering substantial value to our customers. Protected by 30 patents and deployed by leading mobile operators across the globe, our Spectrum-NET software solution performs automated ML-driven analysis of radio access networks. Spectrum-NET is a cloud-native, horizontally scalable solution based on a Kubernetes-orchestrated microservices architecture.

Our 50-person team, located in San Pedro Garza García, México, enjoys ownership in our private company through stock options and very competitive salaries. This is an amazing opportunity to join an emerging leader in the ML-driven automation space and make a profound impact on the mobile industry.

Job Description

As a DataOps Engineer, you would be responsible for developing and maintaining Apache NiFi pipelines and overseeing the day-to-day operation of ETL pipelines. You would work on XML and CSV data auditing, as well as data transformation of Excel, CSV, XML, YAML, and JSON files through scripting. You would also create and define connections to our clients’ endpoints, such as SQL-like databases, data lakes, and other sources.
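
To make the kind of scripted transformation described above concrete, here is a minimal, purely illustrative Python sketch that converts a CSV export into JSON using only the standard library; the file names and fields are hypothetical and not part of Spectrum-NET.

```python
# Hypothetical example: normalize a CSV export into JSON records.
# File names and columns are illustrative only, not actual Spectrum-NET data.
import csv
import json

def csv_to_json(csv_path: str, json_path: str) -> int:
    """Read a CSV file and write its rows out as a JSON array of objects."""
    with open(csv_path, newline="", encoding="utf-8") as src:
        rows = list(csv.DictReader(src))   # one dict per CSV row
    with open(json_path, "w", encoding="utf-8") as dst:
        json.dump(rows, dst, indent=2)     # pretty-printed JSON array
    return len(rows)

if __name__ == "__main__":
    count = csv_to_json("cell_kpis.csv", "cell_kpis.json")
    print(f"Converted {count} rows")
```

In practice, logic like this would typically run inside a NiFi processor such as ExecuteScript or ExecuteStreamCommand, applied per FlowFile rather than per file on disk.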

Responsibilities

  • Design and create data pipelines using Apache NiFi, Python, and Apache Kafka.
  • Integrate different data sources for extract, transform, and load (e.g., SQL-like databases, data lakes, XML, CSV).
  • Monitor data processing steps via Kibana + Elasticsearch and alert team members to data anomalies (see the sketch after this list).
  • Maintain and optimize existing data pipelines to reduce inefficiencies, improve throughput and reliability, and optimize hardware resource usage.
  • Automate repeated data management tasks to reduce toil.
  • Provide feedback and improvement ideas to the software development data pipeline team to continually improve performance and usability.
  • Provide initial troubleshooting of data processing errors by reviewing service logs, hardware alarms, DB health, and resource usage.
  • Provide data processing status updates and maintain historical records of system performance.
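
As a purely illustrative sketch of the monitoring and alerting responsibility above, the snippet below polls Elasticsearch over its REST API and prints an alert when recent error counts exceed a threshold; the endpoint, index pattern, field names, and threshold are all assumptions, not the actual Spectrum-NET configuration.

```python
# Hypothetical example: query Elasticsearch for recent pipeline errors per
# service and flag anomalies. Endpoint, index, fields, and threshold are
# assumptions for illustration only.
import requests

ES_URL = "http://localhost:9200"   # assumed Elasticsearch endpoint
INDEX = "pipeline-logs-*"          # assumed index pattern for pipeline logs
ERROR_THRESHOLD = 50               # assumed alerting threshold

query = {
    "size": 0,
    "query": {
        "bool": {
            "filter": [
                {"term": {"level": "ERROR"}},
                {"range": {"@timestamp": {"gte": "now-15m"}}},
            ]
        }
    },
    "aggs": {"by_service": {"terms": {"field": "service.keyword"}}},
}

resp = requests.post(f"{ES_URL}/{INDEX}/_search", json=query, timeout=10)
resp.raise_for_status()
buckets = resp.json()["aggregations"]["by_service"]["buckets"]

for bucket in buckets:
    if bucket["doc_count"] > ERROR_THRESHOLD:
        print(f"ALERT: {bucket['key']} logged {bucket['doc_count']} errors in the last 15 minutes")
```

In day-to-day use, the same aggregation would usually live in a Kibana dashboard or alerting rule; the script form is just a compact way to show the query.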

Qualifications

What you need to have:

  • Bachelor’s Degree in Computer Science, Engineering, or a related field.
  • Apache NiFi, Apache Kafka, and Python experience.
  • Kibana and Elasticsearch or other monitoring tool experience.
  • Hardware monitoring experience.
  • Linux shell and command line scripting experience.
  • AWS Cloud experience.
  • Basic Kubernetes experience.

Additional Information

Thinking about advancing your career to the next level? Do you have what it takes to successfully lead a software organization?

Apply now! Nothing ventured, nothing gained.


Tags: Architecture AWS Computer Science CSV Data management DataOps Data pipelines Elasticsearch Engineering ETL Excel JSON Kafka Kibana Kubernetes Linux Machine Learning Microservices NiFi Pipelines Python SQL XML

Perks/benefits: Career development Equity

Region: North America
Country: Mexico
Category: Engineering Jobs
