Red Teaming and AI Assurance Expert
We are seeking highly analytical professionals with experience in Red Teaming, prompt evaluation, and Quality Assurance of AI/LLM systems.
Key Responsibilities:
- Conduct rigorous red-teaming exercises to identify vulnerabilities and risks in AI-generated content, ensuring accuracy and mitigating bias and toxicity.
- Evaluate and stress-test prompts across multiple domains to uncover potential failure modes, collaborating with data scientists and safety researchers.
- Develop test cases to assess the performance, security, and reliability of AI-generated responses.
Requirements:
- Proven expertise in AI red teaming, LLM safety testing, or adversarial prompt design.
- Familiarity with prompt engineering, NLP tasks, and ethical considerations in generative AI.
- Strong background in Quality Assurance, content review, or test case development for AI/ML systems.
Our ideal candidate will work collaboratively with cross-functional teams to ensure the highest quality and integrity of AI/LLM systems. If you have a passion for critical thinking and problem-solving, we encourage you to apply for this exciting opportunity.