Principal Member Technical Staff (Platform - Data Analytics)
Hyderabad
Model N
Model N's leading cloud-based revenue management solutions for the high tech and life sciences industries allow companies to impact their top line. Maximize every revenue moment.

Responsibilities
- Analyze user needs and develop technical software solutions for the middle tier and information integration layer of the product, including requirements gathering, design, modeling, development, testing, deployment, and documentation.
- Take ownership of the design and development of enterprise-scale data pipelines within a modern data management framework, collaborating with other stakeholders.
- Determine operational feasibility by evaluating analysis, problem definition, requirements, solution development, and proposed solutions.
- Develop deep understanding of various platform modules, including business domain knowledge. Demonstrate the end-to-end scenarios/use-cases for these platform modules.
- Push the boundaries of our platform in technology architecture, ease of developing features/products, and extensibility.
- Take initiative to study, analyze and recommend innovative technology components that help differentiate our products.
- Provide technical leadership and be hands-on to design and implement new technology solutions to integrate existing/new data assets or solve business problems in our products in a scalable manner.
- Collaborate with the team to design development standards and methodologies.
- Ensure the engineering process is followed for each release, supported by epic/story grooming, estimation, design specs, unit/integration tests, code reviews, etc.
- Work with management and technical support to swiftly address any high priority issues and release fixes.
- Build team strength by knowledge sharing and providing challenging opportunities to improve/extend skills.
Qualifications
- 9+ years of relevant software development experience.
- 5+ years of hands-on experience with development based on microservice architecture using Java and Spring Boot.
- 2-3 years of hands-on experience in modeling and designing schemas for data lakes.
- Strong hands-on experience with Apache Spark programming and other big data technologies in the Hadoop ecosystem, such as Presto and Hive.
- Strong understanding of distributed data processing concepts such as batch/incremental processing, data partitioning, bucketing, distributed joins and aggregation, Map/Reduce, and file formats.
- Experience with streaming frameworks such as Kafka and Spark Streaming.
- Experience with search engines such as Apache Solr and Elasticsearch.
- Good understanding of scaling Big Data applications, workload management in multi-tenant environments and building fault-tolerant systems.
- Familiarity with AWS services.
- Nice to have:
- Experience with Agile methodologies.
- Containerization technologies like Docker and Kubernetes (K8s).
- Familiarity with other cloud vendors' services, such as Azure and GCP.
- Good understanding of enterprise software product development and the SDLC.
- A quick learner and self-motivator with the ability to work in a team environment (offshore and onshore).
- Ability to work on aggressive schedules.
- Strong problem-solving acumen.
- Good communication skills.
- BE/BTech or ME/MTech degree.
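As a rough illustration of the distributed data processing concepts named in the qualifications (hash partitioning and Map/Reduce-style aggregation), here is a minimal single-process Python sketch. The record data and partition count are hypothetical; in practice these steps would run distributed across executors in Spark or Hadoop.

```python
from collections import defaultdict

# Hypothetical (key, value) records, e.g. (customer, revenue amount).
records = [("acme", 120), ("globex", 80), ("acme", 45), ("initech", 30)]

NUM_PARTITIONS = 2  # assumed for illustration; real clusters use many more

# Partition step: route each record to a partition by hashing its key,
# so all records for a key land in the same partition.
partitions = defaultdict(list)
for key, value in records:
    partitions[hash(key) % NUM_PARTITIONS].append((key, value))

# Map/reduce step: aggregate values per key within each partition
# (the "map-side combine"), then merge per-partition results.
def aggregate(part):
    totals = defaultdict(int)
    for key, value in part:
        totals[key] += value
    return dict(totals)

merged = {}
for part in partitions.values():
    for key, total in aggregate(part).items():
        merged[key] = merged.get(key, 0) + total
```

Because each key hashes to exactly one partition, the merged totals equal a direct per-key sum over all records; this is the same guarantee a distributed `reduceByKey` relies on.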
We’re constantly growing and may have something for you later on if this is not the right opportunity for you. Check out our career site to learn more about Model N or view other jobs: https://www.modeln.com/company/careers/