As our Senior Staff Product Security AI Lead Researcher, you will spearhead research and development efforts focused on advanced guardrails for securing our Generative AI (GenAI) and agentic AI systems. You will lead AI and data science initiatives to pioneer methods for assessing and improving the security and safety of our AI models. Collaborating closely with the engineering team, specialized security researchers, and other AI specialists, you will translate cutting-edge research into actionable security solutions that significantly enhance the integrity of our AI offerings.
What You'll Do:
Lead innovative AI security and safety research, with a particular focus on developing and refining guardrails for GenAI and agentic AI.
Design, implement, and oversee comprehensive testing protocols and evaluations to ensure AI model security and robustness.
Work collaboratively with engineering teams and specialized security researchers to integrate security best practices into the AI development lifecycle.
Provide expert insights on emerging threats and vulnerabilities unique to AI systems and propose effective mitigation strategies.
Communicate research outcomes clearly, influencing technical and strategic decisions throughout the organization.
Actively contribute to thought leadership in the AI security domain, participating in internal and external presentations, research publications, and standards development.
Requirements: To be successful in this role, you have:
8+ years of experience in security research, AI/ML research, data science, or related fields, including direct experience securing AI systems.
Advanced expertise in machine learning frameworks (e.g., TensorFlow, PyTorch) and NLP libraries (e.g., Hugging Face, spaCy).
Demonstrated success in identifying and mitigating security vulnerabilities in AI systems, particularly involving large language models (LLMs) and agentic architectures.
Proven experience developing and validating robust AI security evaluation frameworks and benchmarks.
Strong proficiency in Python, with significant experience in data-driven security assessments.
Excellent communication and collaboration skills, with the ability to clearly articulate complex findings to technical teams, leadership, and external partners.
Preferred Qualifications:
Previous experience leading research teams or cross-functional projects in corporate or academic environments.
Published research or contributions to standards in AI security, safety, or responsible AI practices.
Familiarity with OWASP LLM Top 10 and experience addressing vulnerabilities such as prompt injection, data poisoning, and adversarial attacks.
Advanced degree (PhD or MS) in Computer Science, Data Science, Cybersecurity, or a related field.
This position is open to all candidates.