Key Responsibilities
1. Product Strategy & Vision :
Define the product vision, roadmap, and success metrics for :
- An AI Red Teaming platform for adversarial testing, bias detection, and vulnerability simulation in LLMs and ML models
- An AI GRC platform that maps regulations (e.g., EU AI Act, NIST AI RMF) to standards, policies, and controls
- Monitor AI and cybersecurity trends, regulatory developments, and emerging standards to ensure alignment and differentiation
2. Market Research & Competitive Intelligence :
Conduct deep market research on competing platforms in red teaming, AI security testing, and AI GRC (e.g., HiddenLayer, Lakera, Credo AI, Holistic AI, etc.)Analyze analyst reports (Gartner, Forrester, etc.) and monitor industry trends, regulations, and emerging threatsGather competitive intelligence through demo participation, webinars, community forums, and product teardownMaintain a landscape matrix of competing features, positioning, and gaps3. Execution & Delivery :
Prioritize features and manage the backlog in collaboration with engineeringTranslate technical and regulatory requirements into intuitive product flowsLead release planning and post-launch iteration with cross-functional teams4. Customer Discovery & Adoption :
Conduct user research and customer interviews to gather product feedbackLead design partner engagements and early adopter pilotsPartner with sales and customer success to support adoption and with :AI / ML teams on risk evaluators and model testing pipelinesSecurity teams to define red teaming test coverage and response workflowsLegal / compliance stakeholders to align GRC features with NIST, ISO, and EU AI Act requirementsCore Requirements :
36 years of product management experience in B2B SaaS, AI / ML, cybersecurity, or compliance domainsExperience launching and scaling technical products with cross-functional engineering teamsProven ability to write PRDs, user stories, and translate customer needs into solutionsStrong written and verbal communication; comfort working with legal, security, and AI research functionsStrong understanding of :
AI / ML systems, model lifecycle risks, GenAI use casesGovernance frameworks (NIST AI RMF, ISO / IEC 42001, EU AI Act)Red teaming methods (bias probing, prompt injections, jailbreaks, output risk evaluation)Preferred (Bonus) Skills :
Experience with AI red teaming or adversarial testing tools (e.g., DeepEval, TruLens, Ragas)Familiarity with LLMOps and open-source GenAI frameworks (e.g., LangChain, AutoGen)Background in SaaS architecture (control plane vs. application plane, multi-tenancy)Understanding of threat modeling, AI explainability, and compliance toolingref : hirist.tech)