Our Cloud Security team is seeking a Senior Security Researcher - AI Security to join our highly technical product research team working at the core of our cloud security platform. This is a rare opportunity to define a new discipline. AI security is an emerging field with few established playbooks, and you will help write them. In this role, you will own the research direction for AI security across our company's platform, uncovering novel risks in AI-native systems and translating that knowledge into product capabilities and industry-leading research. You'll be surrounded by experienced researchers and engineers who live and breathe security, with the space and backing to do original work in a rapidly evolving domain.
We're looking for an exceptional security researcher who can navigate ambiguity, think like an attacker, and bring clarity to a space that lacks it. You're curious, technically deep, and energized by the challenge of defining risk in systems that are still being understood.
Your Role:
Be at the forefront of an emerging discipline. Conduct technical analysis of AI frameworks, services, and architectures to discover novel risks, vulnerabilities, and attack vectors before they become industry-wide problems.
Define AI security risk by analyzing how exposure is created and exploited in AI systems. Collaborate with engineering and product teams to translate AI research into product findings.
Evaluate the risk of pre-trained models, vector databases, and orchestration frameworks (e.g., LangChain, LlamaIndex) to define how shadow AI creates organizational exposure.
Author blogs, whitepapers, and technical advisories that set the industry narrative. Present original research at leading conferences and serve as our company's external voice on AI risk topics.
Analyze AI systems from an attacker's perspective to define trust boundaries, map attack techniques, and identify exploitable paths. Translate findings into product features and outbound research.
Investigate and analyze AI infrastructures and services to find zero-day vulnerabilities, weaknesses, and design flaws.
Requirements:
5+ years of experience in security research, vulnerability research, or offensive security.
Familiarity with OWASP Top 10 for Large Language Model Applications (prompt injection, data poisoning, system prompt leakage).
Ability to analyze complex systems from an attacker's perspective, identify weaknesses, and exploit them.
Strong understanding of AI systems, frameworks, and deployment patterns, with proven ability to exploit them.
Proven track record of novel, complex security research in cloud security or application security, with published work (blogs, papers, conference presentations).
Highly motivated, curious, and comfortable navigating unknown territory.
Strong communication skills, written and verbal, with the ability to articulate novel risks and technical findings clearly.
And Ideally:
Experience discovering and disclosing vulnerabilities (CVEs, bug bounty, responsible disclosure).
Experience analyzing systems for data leakage or unintended information exposure.
Solid understanding of cloud platforms (AWS, Azure, GCP) and cloud security concepts.
Experience tracking the evolving AI ecosystem and translating new developments into security research.
This position is open to all candidates.