About the Role:
We are urgently seeking a highly experienced and multifaceted AI Architect with a strong background in Security Testing and Red Teaming. This critical role is responsible for embedding security-by-design principles throughout our AI and Machine Learning (ML) development lifecycle, from initial design through to deployment, with a particular focus on complex, agentic AI systems.
The ideal candidate will be a technical leader who can provide consultative guidance, define robust security guardrails, and ensure our AI roadmap aligns with evolving security standards and regulatory requirements.
Job Location: Bangalore, India
Years of Experience: 8-15 years
Key Responsibilities:
- AI Security Architecture & Strategy: Provide expert consultative guidance and technical leadership during the design, planning, and development stages of new AI products, features, and services.
- Security-by-Design: Embed robust security-by-design principles, methodologies, and secure coding practices directly into the AI/ML development lifecycle (MLSecOps).
- Secure Deployment: Advise on and design secure deployment patterns for AI/ML models and infrastructure, with a critical focus on the unique security challenges posed by agentic AI (AI agents capable of independent action).
- Risk Mitigation: Recommend, define, and implement technical guardrails and control mechanisms to mitigate AI-specific risks, including prompt injection, data poisoning, model evasion, and unintended or harmful model behavior.
- Roadmap Alignment: Support the long-term AI roadmap by aligning security practices, technologies, and governance with evolving industry standards and with domestic and international security and privacy regulations.
- Security Testing & Red Teaming Leadership: Lead and execute specialized security testing and vulnerability assessments on AI/ML models, pipelines, and underlying infrastructure. Spearhead AI Red Teaming exercises to proactively identify and exploit vulnerabilities unique to LLMs and deep learning systems, simulating real-world attacker scenarios.
- Develop custom tools and methodologies for testing the resilience and robustness of AI models against adversarial attacks.
- Document and present findings from security assessments and red team operations to technical and executive stakeholders, offering clear, actionable remediation strategies.
- Cross-Functional Expertise & Collaboration: Serve as the security liaison between the Data Science, Engineering, Product Management, and IT Security teams. Leverage prior data science experience to deeply understand model functionality, data flow, and training processes, ensuring security controls are contextually relevant and effective. Mentor and educate engineering teams on emerging AI security threats and defense strategies.
Required Qualifications & Experience:
- Total Years of Experience: 8-15 years in software architecture, security engineering, and/or data science.
- AI Architecture: Proven experience designing, securing, and deploying enterprise-scale AI/ML systems, particularly in deep learning or large language model (LLM) contexts.
- Data Science Exposure: Strong foundational understanding of data science principles, model training, validation, feature engineering, and MLOps practices.
- Security Testing: Extensive experience with penetration testing, vulnerability analysis, and securing cloud-native environments (AWS, Azure, GCP) used for AI workloads.
- Red Teaming: Hands-on, demonstrable experience leading or significantly contributing to offensive security operations, with a strong preference for experience targeting AI/ML systems.
- Threat Models & Frameworks: Deep knowledge of AI-specific threat models and security frameworks (e.g., OWASP Top 10 for LLM Applications, MITRE ATLAS).
- Programming: Expertise in one or more major programming languages (Python preferred) and relevant security/AI libraries.
(ref: hirist.tech)