Role : AI Agent Security and Governance Engineer
Experience : 6–12 years
Location : Hyderabad
Work Mode : Hybrid (3 days / week in-office)
Domain : Healthcare / Life Sciences
Joining : Immediate
Employment Type : Full-Time / Contract
Role Summary :
We are seeking an AI Agent Security & Governance Engineer with strong cybersecurity expertise and practical understanding of AI / ML systems. In this role, you will secure enterprise AI agents, LLM applications, ML models, and data pipelines used across Healthcare and Life Sciences workflows. You will help define governance, enforce secure-by-design principles, safeguard sensitive data, and ensure responsible, compliant, and safe AI operations.
Key Responsibilities :
- AI / LLM Security Engineering
- Secure AI / ML pipelines, LLM APIs, RAG systems, vector databases, and agentic AI workflows.
- Implement controls against prompt injection, adversarial ML attacks, data poisoning, model inversion, model theft, and harmful agent actions.
- Embed security-by-design into AI development and deployment lifecycles.
- Governance, Compliance & Risk Management
- Perform AI-focused threat modeling, bias risk assessment, and security posture evaluation.
- Develop processes for safe, explainable, auditable, and ethical AI usage.
- Define AI governance controls for Healthcare / Life Sciences, ensuring compliance with HIPAA, GDPR, SOC 2, and internal policies.
- Security Operations & Monitoring
- Monitor AI agents and ML models for drift, anomalies, misuse, hallucinations, and adversarial behavior.
- Investigate and resolve incidents involving AI security breaches or misbehavior.
- Build automated pipelines for red-teaming, adversarial testing, and model robustness validation.
- Collaboration & Cross-Functional Enablement
- Work closely with data scientists, ML engineers, DevSecOps, product owners, and clinical domain teams.
- Develop AI security documentation, runbooks, and governance playbooks.
- Lead training sessions for engineering teams on AI / ML security best practices.
Required Skills & Experience :
- 6–12 years of experience across Cybersecurity, AI / ML Security, Application Security, or Cloud Security.
- Cybersecurity expert with proven hands-on experience implementing security protocols to safeguard AI systems, models, and data workflows.
- Deep understanding of cybersecurity frameworks, methodologies, and industry standards, including NIST, MITRE ATT&CK, OWASP, and ISO 27001.
- Experience with LangChain, LangGraph, Guardrails AI, Bedrock / Gemini / OpenAI integrations.
- Exposure to privacy technologies such as differential privacy, tokenization, and federated learning.
- Track record of staying current with new AI threats, emerging vulnerabilities, and evolving security best practices.
- Experience securing cloud environments (AWS / GCP / Azure) and containerized systems (Kubernetes, Docker).
- Strong technical knowledge of LLM architecture, embeddings, RAG mechanisms, vector stores, and agentic AI frameworks.
- Familiarity with adversarial ML methods: prompt injection, model inversion, membership inference, data poisoning.
- Proficiency in Python or similar languages for automation and testing.
- Understanding of Healthcare / Life Sciences security and compliance (HIPAA, PHI, data sensitivity).
📩 Apply Now!
Send your updated resume to careers@sidinformation.com