Job Title : AI Red Team Engineer
Location : Remote
Duration : 2 Months Contract
Role Overview :
As an AI Red Team Engineer , you'll lead offensive security testing of AI agents, including LLMs that can access connectors (e.g., GDrive, Gmail). Your focus will be to uncover vulnerabilities, prompt-injection pathways, and data-exfiltration risks before adversaries do.
Responsibilities :
- Design and automate multi-turn attacks involving browser, terminal, and API misuse.
- Create prompt-injection and data-exfiltration attack scenarios.
- Script repeatable tests in Python or bash inside VMs.
- Verify policy compliance (e.g., PD5, FA2) and attempt policy bypasses.
- Write clear vulnerability reports (CVE, bug bounty).
Requirements :
2+ years of offensive security or adversarial ML experience.Proficiency in AppSec techniques (XSS, CSRF, SSRF) and LLM issues.Experience with browser automation, terminal commands, and API attacks.Proficient in Python / bash scripting inside VMs.Strong track record in vulnerability reporting (CVE, HackerOne ).Knowledge of privacy & financial-risk policies (GDPR, SOC2).