As a VP of AI Security Research, you will lead, scale, and inspire multi-disciplinary teams focused on red teaming, sandboxing, adversarial testing, red-teaming engine development, and guardrail protections for GenAI models and agentic AI systems.
In this high-profile role, you will work closely with security research, product, engineering, and compliance teams to shape and deliver the next generation of AI cybersecurity solutions, combining strategic vision with execution to safeguard LLMs and agentic AI deployments at scale.
Responsibilities:
Building and scaling global AI security teams, while providing mentorship to senior managers and fostering an innovation-driven security culture.
Overseeing advanced adversarial evaluations for GenAI models, agentic AI, and multi-agent (A2A) systems, and delivering executive-level threat intelligence and risk assessments to inform corporate AI strategy.
Defining and implementing AI red-teaming frameworks.
Partnering with product and engineering to design and deploy enterprise-ready AI guardrails, including policy enforcement layers, monitoring pipelines, and anomaly detection systems, and championing secure deployment practices for GenAI.
Requirements:
12+ years of relevant industry experience in cybersecurity, ML security, or related fields.
Extensive leadership experience managing and scaling security or R&D organizations, with a strong track record of building high-performance teams and driving complex projects to completion.
Deep expertise in cybersecurity and AI, with a proven understanding of AI threats, adversarial machine learning, LLM vulnerabilities, and AI safety frameworks (OWASP Top 10 for LLMs, NIST AI Risk Management Framework, etc.).
Strategic mindset and execution skills, with the ability to set the vision and direction for AI security initiatives while also diving into technical details when needed.
Demonstrated thought leadership in AI security, through publishing research, speaking at industry events, or contributing to AI security standards and open-source projects.
Experience building or deploying AI security products and tools, such as red-teaming automation platforms, guardrail frameworks, or AI monitoring and anomaly detection systems.
Hands-on familiarity with agentic AI frameworks, protocols, and cloud-based AI environments, demonstrating an understanding of how to secure complex AI orchestration workflows.
A background in AI or adversarial ML research, with insight into emerging threats and mitigation techniques for GenAI applications.
This position is open to all candidates.