Sr Quality Engineer (Manual)

Pune, MH, IN

Houghton Mifflin Harcourt

An education technology company, HMH is a leading provider of K–12 core, supplemental, intervention, and professional learning solutions that unlock students’ potential.

View all jobs at Houghton Mifflin Harcourt

Apply now Apply later

Who We Are

At HMH we are a learning company. Our learning platform and solutions help millions of learners to dream big and explore their potential. When you work at HMH, you know that what you do truly has a transformative lifelong impact on people. Over fifty-three million students and teachers use our learning platforms.

 

What we are looking for
We are seeking experienced and motivated candidates for the AI Team at HMH. Candidates must have a genuine interest in honing their Quality craftsmanship and skills, building great automated testing suites, and expanding and contributing to our quality culture. Candidates should be eager to learn AI along with Manual testing responsibilities.

 

Required Qualifications:

  • 5-8 years of experience in manual testing in a product-based environment
  • Strong experience testing data solutions
  • Strong experience writing SQL writing queries.
  • Strong understanding of the software development life cycle (SDLC) and testing life cycle (STLC)
  • Proficiency in designing and executing test plans, test cases, and test scenarios.
  • Solid experience in Jira
  • Solid understanding SQL and database testing
  • Experience with automated testing tools and frameworks.
  • Excellent analytical and problem-solving skills, with the ability to debug and troubleshoot issues.
  • Effective communication and collaboration skills, with the ability to work effectively in a cross-functional team environment and communicate complex technical concepts to non-technical stakeholders. 
  • Self-motivated and proactive, with a passion for quality and continuous improvement in testing methodologies and approaches. 

 

Education & Experience  

Bachelor’s degree in computer science, Information Technology, or a related field

 

Primary Responsibilities

  • Develop and execute test plans and test cases to validate the functionality, accuracy, and performance of large language models (LLMs) applications across different use cases and scenarios.
  • Employ a variety of testing techniques including input fuzzing, adversarial testing, and bias detection to ensure robustness and inclusivity.
  • Design and conduct comparative analyses to evaluate different LLM prompt template designs, prompt strategies, and user interactions, ensuring the most effective and user-friendly implementations are identified.
  • Work closely with cross-functional teams, including developers, data scientists, and product managers, to understand requirements and provide input on design, implementation, and testing strategies.
  • Lead AI safety and responsibility initiatives, ensuring all LLM applications comply with ethical AI principles, data privacy standards, and regulatory requirements.
  • Serve as an internal advocate, educating and advising the team on best practices for ethical AI development and deployment.
  • Develop and maintain metrics and monitoring systems to assess and continuously improve the quality and safety of LLM applications in real-world scenarios.
  • Perform extensive regression, load, and performance testing using Gatling, ensuring the application’s scalability, responsiveness, and optimal function, particularly for LLM components.
  • Identify, document, and prioritize issues, bugs, and inconsistencies in LLM behavior.
  • Leverage debugging tools and techniques to troubleshoot issues related to model predictions and data processing, ensuring timely resolution and mitigation.
  • Ensure the integrity, accuracy, and efficiency of data processing from ingestion to model inference, addressing potential data quality and pipeline scalability issues.
  • Keep abreast of the latest developments in AI ethics, safety research, NLP, ML research, technologies, and quality assurance methodologies.
  • Integrate cutting-edge practices and knowledge into QA processes to enhance testing methodologies and approaches.
  • Communicate test results, findings, and recommendations effectively to stakeholders, providing clear and actionable feedback for improving LLM application quality and performance.
  • Conduct user acceptance testing (UAT) sessions with stakeholders and end-users when required.
  • Communicate effectively with cross-functional teams to clarify requirements and resolve issues.

 

Apply now Apply later
  • Share this job via
  • or

* Salary range is an estimate based on our AI, ML, Data Science Salary Index 💰

Job stats:  2  1  0
Category: Engineering Jobs

Tags: Computer Science Data quality Jira LLMs Machine Learning Model inference NLP Privacy Research SDLC SQL Testing

Perks/benefits: Career development

Region: Asia/Pacific
Country: India

More jobs like this

Explore more AI, ML, Data Science career opportunities

Find even more open roles in Artificial Intelligence (AI), Machine Learning (ML), Natural Language Processing (NLP), Computer Vision (CV), Data Engineering, Data Analytics, Big Data, and Data Science in general - ordered by popularity of job title or skills, toolset and products used - below.