Splunk Data Analytic Subject Matter Expert
Woodlawn, MD, United States
Full Time · Senior-level / Expert · Clearance required · USD 112K - 179K
Peraton
Responsibilities
Peraton Global Health and Financial Solutions sector is seeking a Splunk Data Analytic Subject Matter Expert to join our team of qualified, diverse individuals. This position will be located in Woodlawn, MD. The candidate must reside close enough to the office to work a hybrid schedule when needed.
What you'll do:
The Splunk Data Analytic Subject Matter Expert (SME) will optimize data flow using aggregation, filters, and related techniques. The SME will be involved in the analysis of unstructured and semi-structured data, including latent semantic indexing (LSI), entity identification and tagging, complex event processing (CEP), and the application of analysis algorithms on distributed, clustered, and cloud-based high-performance infrastructures. The SME will exercise creativity in applying non-traditional approaches to large-scale analysis of unstructured data in support of high-value use cases visualized through multi-dimensional interfaces, and will handle processing and index requests against high-volume data collections and high-velocity data streams.
Duties and Responsibilities:
- Create a consolidated, searchable data set of pre-aggregated sensor data sources that conforms to the common information model.
- Develop the capability to aggregate all sensor data results based on two main categories: “tangible assets, namely hardware, software, and data” and “Information Systems, groups of assets with a business purpose.”
- Develop the capability to tag new data so that it falls into the Re-Usable data asset model and can be ingested by the Xacta.IO and CDM dashboards.
- Create a mechanism to translate key-value pairs from any sensor tool into the format needed for consumption.
- Transform clean data into the format needed for ingestion by Xacta.IO and the CDM Elastic file.
- Create data pipelines and connections between data source(s) and the Re-Usable data asset model.
- Create a connection between Splunk and the Re-Usable data asset model.
- Establish Xacta.IO data pipeline connection with the Re-Usable data asset model.
- Establish CDM Elastic data pipeline connection with the Re-Usable data asset model.
- Develop an integrator between Splunk and Xacta.IO and CDM Elastic.
- Build out data warehouses/data models:
- Tag data
- Build out data pipelines in Splunk
- Establish data pipeline connections
- Develop integrators/integrations (between Splunk, DB Connect, and Xacta)
- Aggregate various types of data
- Create key-value pairs
- ETL coding
- Build out dashboards
- Configure notable event actions, action menus and Adaptive Responses.
- Provide data onboarding and data ingestion normalization recommendations.
- Apply strong knowledge of security risk procedures, security patterns, authentication technologies, and security attack pathologies.
- Develop, evaluate, and document specific metrics for management purposes.
- Create dashboards to monitor traffic volumes, response times, errors, and warnings across various data centers.
- Monitor the web portals, log files and databases.
- Design and develop Splunk solutions for routine use.
- Solve complex integration challenges and debug complex configuration issues.
- Consult with stakeholders to establish, maintain and refresh their strategic direction in cloud adoption.
- Become knowledgeable on the CDM technical requirements for the federal government’s CDM program. Understand your role in CDM activities.
- Engage in a wide range of security issues, including architectures, firewalls, electronic data traffic, and network access.
- Design, manage, and maintain enterprise SIEM infrastructure to improve data ingestion processes, including architectural work on data pipelines to ensure optimal flow of data.
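Several of the duties above come down to translating heterogeneous sensor key-value pairs into one common, searchable schema. The sketch below is a minimal illustration of that translation step, assuming per-tool field mappings; the tool names and field names are invented for the example.

```python
# Hypothetical sketch: normalizing key-value pairs from different sensor
# tools into one common (CIM-style) schema, as described in the duties above.
# All tool names, field names, and mappings here are illustrative assumptions.

# Per-tool mappings from native field names to a common schema.
FIELD_MAPS = {
    "tool_a": {"src": "src_ip", "dst": "dest_ip", "sev": "severity"},
    "tool_b": {"source_address": "src_ip", "destination_address": "dest_ip",
               "priority": "severity"},
}

def normalize(tool, event):
    """Translate one sensor event's key-value pairs into the common model."""
    mapping = FIELD_MAPS.get(tool, {})
    # Keep unmapped keys as-is so no data is lost during translation.
    return {mapping.get(k, k): v for k, v in event.items()}

raw = {"source_address": "10.0.0.5", "priority": "high"}
print(normalize("tool_b", raw))  # {'src_ip': '10.0.0.5', 'severity': 'high'}
```

Once events from every tool share the same field names, they can be aggregated together and searched as one data set.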
Qualifications
Basic Qualifications:
- Bachelor’s degree and 8 years of experience, Master's degree and 6 years of experience, or 12 years of experience in lieu of a degree.
- 4 years of experience with customer-focused Splunk data pipelining and a SIEM engineering background
- 4 years’ experience in a senior Splunk role working in a Splunk clustered environment supporting SOC or NOC environments
- A minimum of 4 years of experience with:
- In-depth knowledge of designing, upgrading, maintaining, and implementing network devices across a large-scale enterprise
- Direct experience with Splunk Engineering and data integration
- Prior SIEM data modeling experience on a similar platform at scale (>50 servers)
- Scripting and development skills in Python/Perl with deep comprehension of regular expressions
- Coordination and communication with other remotely deployed team members
- Developing documentation with processes and procedures
- Proposing and implementing automation features in a large enterprise environment
- 3 years of experience with Linux and SQL/ODBC interfaces
- 2 years of experience with data transport and transformation APIs and technologies such as JSON, XML, XSLT, JDBC, SOAP and REST.
- Hold an active Splunk Core certification at the Splunk Architect level or higher
- A minimum of 3 years of experience in developing and tailoring reporting from network security tools.
- Must be able to obtain and maintain a US Public Trust clearance.
- US Citizenship is required
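As a rough illustration of the scripting, regular-expression, and JSON/XML qualifications above, the sketch below parses a plain-text log line with a regex and flattens a simple XML event into key-value pairs. The log format and XML layout are invented for the example.

```python
import json
import re
import xml.etree.ElementTree as ET

# Illustrative sketch only: the log format and XML layout are assumptions
# made up to demonstrate the regex and JSON/XML skills listed above.

LOG_PATTERN = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}) "
    r"(?P<level>[A-Z]+) (?P<msg>.*)"
)

def log_line_to_json(line):
    """Extract fields from a plain-text log line and emit them as JSON."""
    m = LOG_PATTERN.match(line)
    return json.dumps(m.groupdict()) if m else None

def xml_event_to_dict(xml_text):
    """Flatten a simple, one-level XML event into key-value pairs."""
    root = ET.fromstring(xml_text)
    return {child.tag: child.text for child in root}

print(log_line_to_json("2024-05-01T12:00:00 ERROR disk full"))
print(xml_event_to_dict("<event><host>web01</host><code>500</code></event>"))
```

The same pattern scales up to feeding JSON-formatted events into downstream consumers over REST.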
Preferred Qualifications:
- Experience with the Splunk Common Information Model (CIM) and Enterprise Analytics.
- Strong problem-solving abilities with an analytic and qualitative eye for reasoning under pressure.
- Self-starter with the ability to independently prioritize and complete multiple tasks with little to no supervision.
- Knowledge of cloud services such as AWS, Azure, and Office 365.
- Ability to script in one or more of the following languages: Python, Bash, Visual Basic, or PowerShell.
- Experience in automating Splunk Deployments and orchestration within a Cloud environment.
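As one small illustration of automating Splunk deployments, the sketch below renders a minimal outputs.conf stanza pointing forwarders at a set of indexers; the host names and group name are assumptions for the example.

```python
# Hypothetical sketch of one piece of Splunk deployment automation:
# generating an outputs.conf for a forwarder. Host names and the group
# name are invented for the example; the stanza format follows outputs.conf.

def render_outputs_conf(group, indexers, port=9997):
    """Render a minimal Splunk outputs.conf stanza for a forwarder."""
    servers = ", ".join(f"{host}:{port}" for host in indexers)
    return (
        f"[tcpout]\n"
        f"defaultGroup = {group}\n"
        f"\n"
        f"[tcpout:{group}]\n"
        f"server = {servers}\n"
    )

print(render_outputs_conf("primary_indexers",
                          ["idx1.example.com", "idx2.example.com"]))
```

In practice, a template like this would be rendered per host and pushed out by an orchestration tool rather than edited by hand.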
Peraton Overview
Peraton is a next-generation national security company that drives missions of consequence spanning the globe and extending to the farthest reaches of the galaxy. As the world’s leading mission capability integrator and transformative enterprise IT provider, we deliver trusted, highly differentiated solutions and technologies to protect our nation and allies. Peraton operates at the critical nexus between traditional and nontraditional threats across all domains: land, sea, space, air, and cyberspace. The company serves as a valued partner to essential government agencies and supports every branch of the U.S. armed forces. Each day, our employees do the can’t be done by solving the most daunting challenges facing our customers. Visit peraton.com to learn how we’re keeping people around the world safe and secure.
Target Salary Range
$112,000 - $179,000. This represents the typical salary range for this position based on experience and other factors.