As an AI Security Research Intern on the Autonomous Attack Disruption team, you will join the front lines of Defender's mission to stop attacks in near real-time. Under the mentorship of experienced researchers, you will use AI, including agentic pipelines and LLM-based threat analysis, to analyze real-world attacker TTPs and build systems that autonomously detect and disrupt attacks before adversaries reach their goals.
This role requires a blend of applied security research expertise, AI fundamentals, and engineering skills to deliver production-ready protection at a global scale. This is your chance to see your AI-powered research transformed into autonomous defense systems that protect millions of users.
Responsibilities
Investigate real-world advanced attacker TTPs and apply AI techniques (LLMs, agentic workflows) to support the development of high-fidelity, AI-augmented protection logic across complex cross-domain kill-chains.
Combine security expertise with AI-driven methods to analyze massive telemetry sets using big-data query languages (KQL), reasoning over data to identify novel malicious patterns and engineer evidence-based detection rules.
Contribute to the design and implementation of AI-powered capabilities that autonomously disrupt sophisticated threats in near real-time.
Assist in the refinement of protection coverage by analyzing real-world attack telemetry to improve the accuracy and performance of existing detection logic.
Contribute to a strategic feedback loop by documenting findings from attack data analysis to improve overall protection logic and system-wide security posture.
Partner with engineering, product, and other research teams to translate research insights into production-ready AI systems, helping to validate protection concepts, from prompt engineering to model evaluation, and ship them at a global scale.
Explore and prototype with emerging AI tools and frameworks to accelerate security research workflows and build reusable AI-driven research tooling.
Required Qualifications
Must have at least three additional semesters before graduation (graduation date of Summer 2027 or later).
Available to work 3 days a week.
Proven hands-on experience in security research, threat hunting, or detection engineering roles (e.g., from specialized military service, previous internships, or a significant portfolio of independent research/investigation).
Proficiency in Python or similar languages, with a focus on writing clean, functional, and scalable code.
Hands-on experience with AI technologies, whether through building ML models, working with LLMs and prompt engineering, experimenting with agentic frameworks, or applying AI to academic or personal projects, along with a genuine passion for using AI to solve real-world problems.
Preferred Qualifications
Currently pursuing a Bachelor's or Master's degree in Statistics, Mathematics, Computer Science, Data Science, AI/Machine Learning, or a related field.
Deep understanding of the modern threat landscape, including hands-on familiarity with lateral movement techniques, credential theft, or cloud-native attack vectors.
Previous experience reasoning over large-scale datasets using big-data query languages (KQL/Kusto, SQL, or similar) to identify novel malicious patterns and drive evidence-based research decisions.
A proven "Hunter" mindset with a track record of identifying novel malicious patterns and converting them into actionable alerts.
Experience with LLMs, prompt engineering, or agentic AI frameworks (e.g., LangChain, Semantic Kernel, AutoGen) - academic projects or personal exploration count.
Interest in the intersection of AI and adversarial behavior - building autonomous, high-stakes decision systems for detection, analysis, and disruption.
This position is open to all candidates.