We're seeking an AI Innovation Security Researcher to serve as the critical link between our AI development team and our security experts. In this role, you will:
Translate real-world security challenges into AI-driven solutions
Shape prompt strategies and model workflows for security use-cases
Contribute to AI system development: help architect, prototype, and iterate on models and pipelines
Design and execute rigorous benchmarks to evaluate the performance of security-focused AI tools
Your work will power capabilities such as automated exploitability checks for SAST/SCA findings, AI-guided remediation of container vulnerabilities (e.g., Dockerfile misconfigurations, unsafe downloads), and detection and analysis of data leaks. You'll also help amplify our thought leadership by authoring blog posts and delivering conference talks on cutting-edge AI-security topics.
Key Responsibilities
Research & Benchmarking
Define evaluation frameworks for AI models tackling security tasks
Build test suites for exploitability analysis (e.g., proof-of-concept generation, severity scoring)
Measure and report on model accuracy, false-positive/negative rates, and robustness
AI Collaboration & Development
Work with ML engineers to craft and refine prompt templates for security scenarios
Contribute to model architecture design, fine-tuning, and deployment workflows
Investigate model behaviors, iterate on training data, and integrate new AI architectures as needed
Security Expertise & Tooling
Apply deep knowledge of static and software composition analysis (SAST/SCA)
Analyze container build pipelines to identify vulnerability origins and remediation paths
Leverage vulnerability databases (CVE, NVD), threat modeling, and risk assessment techniques
Content Creation & Evangelism
Write technical blog posts, whitepapers, and documentation on AI-driven security solutions
Present findings at internal brown-bags and external conferences
Mentor teammates on AI security best practices
Requirements: Bachelor's or Master's degree in Computer Science, Cybersecurity, AI/ML, or a related field
3+ years in security research or application security engineering
Hands-on experience with LLMs and prompt engineering
Proficient in Python
Deep understanding of SAST/SCA tools (e.g., SonarQube, Snyk) and their outputs
Familiarity with container security tooling (Docker, Kubernetes, Trivy)
Strong data analysis skills for evaluating model outputs and security telemetry
Excellent written and verbal communication; ability to distill complex topics for diverse audiences
Collaborative mindset; experience working across research, engineering, and security teams
This position is open to all candidates.