The Security Models Training team builds and operates the large‑scale AI training and adaptation engines that power Security products, turning cutting‑edge research into reliable, production‑ready capabilities.
As Lead Applied Scientist, you will own end‑to‑end model development for security scenarios and set technical strategy across multiple model efforts and teams, including new model architectures, continual pre‑training, task‑focused fine‑tuning, reinforcement learning, and objective, benchmark‑driven evaluation.
You will drive training efficiency and reliability on distributed GPU systems, deepen model reasoning and tool‑use capabilities, and embed Responsible AI, privacy, and compliance into every stage of the workflow. The role is hands‑on and impact‑focused: you will partner closely with engineering and product to translate innovations into shipped, measurable outcomes, define quality gates and readiness criteria across teams, and mentor senior scientists and engineers to scale results across globally distributed teams.
You will combine strong coding, experimentation, and debugging skills with a systems mindset to accelerate iteration cycles, improve throughput and cost‑effectiveness, and help shape the next generation of secure, trustworthy AI for our customers.
Responsibilities:
You'll work as part of an Applied Science team on high-impact, technically ambitious AI projects that directly shape the future of AI in cybersecurity, with ownership for taking advanced research through to production impact.
Technical Leadership & Ownership: set technical direction for major security domain initiatives and align roadmaps across multiple teams; lead security model programs spanning pre‑training, task tuning, reinforcement learning, and evaluation; translate cutting‑edge research into production‑ready capabilities. This role influences portfolio‑level technical tradeoffs, investment prioritization, and long‑term architecture decisions for security models.
Advanced Model Design - Build and customize deep learning model architectures (e.g., modifying transformer blocks or attention/memory modules) at the SLM/LLM scale; make principled architectural tradeoffs to improve reliability, robustness, and security‑specific behavior.
Advanced Model Training - Apply deep expertise in pre-training, post-training, and reinforcement learning (RL) across language and other modalities, including time series.
Design & Evaluate Datasets - Build high-quality datasets and benchmarks; define objective evaluation frameworks and quality gates; run ablation studies to measure impact and optimize data and training effectiveness to support confident product decisions.
Develop Data Infrastructure - Create and maintain scalable pipelines for ingestion, preprocessing, filtering, and annotation of large, complex datasets, with attention to privacy, governance, and long‑term reuse across security scenarios.
Research & Innovation - Collaborate with cross-functional teams to push research and product boundaries, delivering models that make a real-world impact.
Requirements:
M.Sc. / Ph.D. in Computer Science, Information Systems, Electrical or Computer Engineering, or Data Science (Ph.D. strongly preferred). Candidates with an M.Sc. / Ph.D. in related fields and proven industry experience or a strong publication record in LLMs, Information Retrieval, Machine Learning, Natural Language Processing, Time Series Forecasting, or Deep Learning will also be considered.
At least 8 years of proven hands-on experience (including post-graduate work) building and deploying Machine Learning products. Key areas of expertise include Natural Language Processing and Large Language Models, along with an understanding of related concepts.
This position is intended for women and men alike.