We are looking for a Gen AI Red Team Researcher.
As a Red Team Specialist focused on generative AI models, you will play a critical role in enhancing the security and integrity of our cutting-edge AI technologies.
Your primary responsibility will be to conduct adversarial analysis and testing of our generative AI systems, including but not limited to language models, image-generation models, and related infrastructure.
The goal is to identify vulnerabilities, assess risks, and provide actionable insights to fortify our AI models and guardrails against potential threats.
Key Responsibilities:
Simulated Cyber Attacks: Conduct sophisticated and comprehensive simulated attacks on generative AI models and their operating environments to uncover vulnerabilities.
Vulnerability Assessment: Evaluate the security posture of AI models and infrastructure, identifying weaknesses and potential threats.
Risk Analysis: Perform thorough risk analysis to determine the impact of identified vulnerabilities and prioritize mitigation efforts.
Mitigation Strategies: Collaborate with development and security teams to develop effective strategies to mitigate identified risks and enhance model resilience.
Research and Innovation: Stay abreast of the latest trends and developments in AI security, ethical hacking, and cyber threats. Apply innovative testing methodologies to ensure cutting-edge security practices.
Documentation and Reporting: Maintain detailed documentation of all red team activities, findings, and recommendations. Prepare and present reports to senior management and relevant stakeholders.
Requirements:
Proven track record in AI vulnerability analysis.
Strong understanding of AI technologies and their underlying architectures, especially generative models and frameworks.
At least 5 years of experience in offensive cybersecurity, particularly cloud and API security.
Familiarity with agentic frameworks and hands-on agentic development experience.
Proficiency in Python.
Excellent analytical, problem-solving, and communication skills.
Ability to work in a fast-paced, ever-changing environment.
Nice-to-Have:
Bachelor's or Master's degree in Computer Science, Information Security, or a related field.
Proven record of building production-quality pipelines and automation.
Experience with machine learning development frameworks and environments.
Advanced certifications in offensive cybersecurity (e.g., OSWE, OSCE3, SEC542, SEC522).
Certifications or a background in DevOps or ML.
This position is open to all candidates.