Jobs » AI » GenAI Security Specialist

Posted: 4 days ago
Confidential company
Location: Ramat Gan
Job Type: Full Time
What We Are Looking For
As a Red Team Specialist focused on GenAI models, you will play a critical role in safeguarding the security and integrity of cutting-edge commercial AI technologies. Your primary responsibility will be to analyze and test commercial GenAI systems including, but not limited to, language models, image generation models, and related infrastructure. The objective is to identify vulnerabilities, assess risks, and deliver actionable insights that strengthen AI models and guardrails against potential threats.
Key Responsibilities
* Execute sophisticated and comprehensive attacks on generative foundational models and agentic frameworks.
* Assess the security posture of AI models and infrastructure, identifying weaknesses and potential threats.
* Collaborate with security teams to design and implement effective risk mitigation strategies that enhance model resilience.
* Apply innovative testing methodologies to ensure state-of-the-art security practices.
* Document all red team activities, findings, and recommendations with precision and clarity.

About Alice:
Alice is a trust, safety, and security company built for the AI era. We safeguard the communicative technologies people use to create, collaborate, and interact—whether with each other or with machines. In a world where AI has fundamentally changed the nature of risk, Alice provides end-to-end coverage across the entire AI lifecycle. We support frontier model labs, enterprises, and UGC platforms with a comprehensive suite of solutions: from model hardening evaluations and pre-deployment red-teaming to runtime guardrails and ongoing drift detection.

Hybrid:
Yes
Requirements:
Must-Have
* Strong understanding of AI architecture, frameworks and agentic applications.
* Hands on experience in AI vulnerability research.
* Minimum of 3 years of experience in offensive cybersecurity, with a focus on penetration testing.
* Exceptional analytical, problem-solving, and communication skills.
* Ability to thrive in a fast-paced, dynamic environment.
Nice-to-Have
* Bachelor’s or Master’s degree in Computer Science, Information Security, or a related field.
* Advanced certifications in offensive cybersecurity (e.g., OSWE, OSCE3, SEC542, SEC522).
* Proficiency in Python.
* Webint / OSINT experience.
This position is open to all candidates.
 
Job ID: 8375262
Posted: 4 days ago
Confidential company
Location: Ramat Gan
Job Type: Full Time and Hybrid work
As a GenAI Security Researcher, you’ll dive deep into the challenges of AI safety, conducting red teaming operations to identify vulnerabilities in generative AI systems and their infrastructure, and addressing the risks you find to ensure AI models are robust, secure, and future-proof. As a Security Researcher, you will:
* Conduct sophisticated black-box redteaming operations to uncover vulnerabilities in generative AI models and infrastructure.
* Design new techniques to bypass the latest AI security mechanisms.
* Evaluate and strengthen the security of AI systems, identifying weaknesses and collaborating to implement improvements.
* Work with cross-functional teams to automate security testing processes and establish best practices.
* Stay ahead of emerging trends in AI security, ethical hacking, and cyber threats to ensure we’re at the cutting edge.

About Alice:
Alice is a trust, safety, and security company built for the AI era. We safeguard the communicative technologies people use to create, collaborate, and interact—whether with each other or with machines. In a world where AI has fundamentally changed the nature of risk, Alice provides end-to-end coverage across the entire AI lifecycle. We support frontier model labs, enterprises, and UGC platforms with a comprehensive suite of solutions: from model hardening evaluations and pre-deployment red-teaming to runtime guardrails and ongoing drift detection.



Hybrid:
Yes
Requirements:
Must Have
* 3+ years in offensive cybersecurity, especially focused on web applications and API security OR Advanced Ph.D. Candidates with a proven record of research in AI/Cybersecurity
* Strong programming and scripting skills (e.g., Python, JavaScript) relevant to AI security.
* In-depth understanding of AI technologies, particularly generative models like GPT, DALL-E, etc.
* Solid knowledge of AI vulnerabilities and mitigation strategies.
* Excellent problem-solving, analytical, and communication skills.
Preferred Skills That Set You Apart:
* Certifications in offensive cybersecurity (e.g., OSWA, OSWE, OSCE3, SEC542, SEC522) OR Master's degree and above in Computer Science with a focus on Data Science or AI.
* Experience in end-to-end product development, including infrastructure and system design.
* Proficiency in cloud development.
* Familiarity with AI security frameworks, compliance standards, and ethical guidelines.
* Ability to thrive in a fast-paced, rapidly evolving environment.
This position is open to all candidates.
 
Job ID: 8375232
Posted: 4 days ago
Confidential company
Location: Ramat Gan
Job Type: Full Time
What We Are Looking For
As a GenAI Security Team Lead, you will lead a team of GenAI researchers, combining strategic, big-picture planning with direct technical contributions. Your primary responsibility will be managing a GenAI-focused team while fostering a culture of innovation and excellence, providing comprehensive training, guidance, and leadership, and ensuring high-quality deliverables. The position balances managerial responsibilities with hands-on technical work (approximately 25%).
Key Responsibilities
* Recruit, mentor, and manage a team of GenAI researchers
* Regularly share knowledge with team members and collaborate across departments
* Evaluate and enhance team performance and deliverables quality
* Lead the development of an automated test harness that continuously evaluates prompts across multiple GenAI models to expose vulnerabilities and measure guardrail effectiveness.
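The test harness described above might be sketched along these lines. This is an illustrative sketch only: `query_model`, the refusal markers, and the prompt set are hypothetical placeholders, not part of the posting.

```python
# Minimal guardrail-evaluation harness sketch: run a fixed prompt set
# against several models and report the refusal rate per model.
# `query_model` is a hypothetical stand-in for a real model API call.

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm unable")

def query_model(model: str, prompt: str) -> str:
    # Placeholder: a real harness would call the model's API here.
    return "I can't help with that."

def refusal_rate(model: str, prompts: list[str]) -> float:
    """Fraction of prompts the model refuses (i.e., the guardrail held)."""
    refused = sum(
        any(m in query_model(model, p).lower() for m in REFUSAL_MARKERS)
        for p in prompts
    )
    return refused / len(prompts)

prompts = ["adversarial prompt 1", "adversarial prompt 2"]
scores = {m: refusal_rate(m, prompts) for m in ["model-a", "model-b"]}
```

A lower refusal rate on a given model would flag weaker guardrails and prioritize it for deeper manual red teaming.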



About Alice:
Alice is a trust, safety, and security company built for the AI era. We safeguard the communicative technologies people use to create, collaborate, and interact—whether with each other or with machines. In a world where AI has fundamentally changed the nature of risk, Alice provides end-to-end coverage across the entire AI lifecycle. We support frontier model labs, enterprises, and UGC platforms with a comprehensive suite of solutions: from model hardening evaluations and pre-deployment red-teaming to runtime guardrails and ongoing drift detection.
Requirements:
Must-Have
* 3+ years of experience managing teams in offensive cyber security
* Strong understanding of GenAI and AI-driven products and their underlying architectures, especially generative models and agentic frameworks.
* Proficiency in Python
* Exceptional leadership and interpersonal skills, with excellent communication and stakeholder management abilities.
* Strategic, analytical mindset with strong problem-solving skills and a proven ability to drive complex projects to completion.
Nice-to-Have
* Bachelor’s or Master’s degree in Computer Science, Information Security, or a related field.
* Experience with AWS cloud services and environments.
* Experience with machine learning development frameworks and environments.
* Advanced Certifications in offensive cybersecurity (e.g. OSWE, OSCE3, SEC542, SEC522) are highly desirable.
This position is open to all candidates.
 
Job ID: 8375256
Posted: 22/02/2026
Location: Ramat Gan
Job Type: Full Time
As a GenAI Security Researcher, you’ll dive deep into AI security challenges: running red teaming operations to find weaknesses in generative AI systems and their setup; pioneering novel bypass techniques to test the latest AI security defenses; identifying system flaws, driving remediation efforts, and fortifying overall AI security to help build robust, secure, and future-ready models; partnering with teams to automate security testing and define enterprise best practices; and staying ahead of the curve in AI security and ethical hacking.

About Alice:
Alice is a trust, safety, and security company built for the AI era. We safeguard the communicative technologies people use to create, collaborate, and interact—whether with each other or with machines. In a world where AI has fundamentally changed the nature of risk, Alice provides end-to-end coverage across the entire AI lifecycle. We support frontier model labs, enterprises, and UGC platforms with a comprehensive suite of solutions: from model hardening evaluations and pre-deployment red-teaming to runtime guardrails and ongoing drift detection.
Requirements:
Must Have
* 2+ years in offensive cybersecurity (especially web apps and API security) OR a B.Sc./M.Sc. with solid AI/Cybersecurity research under your belt.
* Coding/scripting skills (like Python, JavaScript) relevant to AI security.
* A deep understanding of AI tech, especially generative models (think GPT, DALL-E, Codex, etc.).
* Solid knowledge of how AI works internally.
* Awesome problem-solving, analysis, and communication skills.
Nice to Have
* Offensive cybersecurity certs (OSWA, OSWE, OSCE3, SEC542, SEC522) OR a Master's or higher in Computer Science with a focus on Data Science or AI.
* Experience building products end-to-end, including the infrastructure and system design.
* Know-how in cloud development.
* Familiarity with AI security frameworks, compliance rules, and ethical guidelines.
* Being great at handling a super fast-paced, changing environment.
This position is open to all candidates.
 
Job ID: 8555533
Posted: 16 hours ago
Confidential company
Location: Ramat Gan
Job Type: Full Time
We are seeking an experienced and detail-oriented GenAI team lead to oversee complex research and delivery efforts focused on identifying and mitigating risks in Generative AI systems. In this role, you will lead a multidisciplinary team conducting adversarial testing, risk evaluations, and data-driven analyses that strengthen AI model safety and integrity. You will be responsible for ensuring high-quality project delivery, from methodology design and execution to client communication and final approval of deliverables. This position combines hands-on red teaming expertise with operational leadership, strategic thinking, and client-facing collaboration.
Key Responsibilities
Operational and Quality Leadership
* Oversee the production of datasets, reports, and analyses related to AI safety and red teaming activities.
* Review and approve deliverables to ensure they meet quality, methodological, and ethical standards.
* Deliver final outputs to clients following approval and provide actionable insights that address key risks and vulnerabilities.
* Offer ongoing structured feedback on the quality of deliverables and the efficiency of team workflows, driving continuous improvement.
Methodology and Research Development
* Design and refine red teaming methodologies for new Responsible AI projects.
* Guide the development of adversarial testing strategies that target potential weaknesses in models across text, image, and multimodal systems.
* Support research initiatives aimed at identifying and mitigating emerging risks in Generative AI applications.
Client Engagement and Collaboration
* Attend client meetings to address broader methodological or operational questions.
* Represent the red teaming function in cross-departmental collaboration with other ActiveFence teams.
 
About us:
We are a trust, safety, and security company built for the AI era. We safeguard the communicative technologies people use to create, collaborate, and interact-whether with each other or with machines. In a world where AI has fundamentally changed the nature of risk, Alice provides end-to-end coverage across the entire AI lifecycle. We support frontier model labs, enterprises, and UGC platforms with a comprehensive suite of solutions: from model hardening evaluations and pre-deployment red-teaming to runtime guardrails and ongoing drift detection.
Requirements:
Must Have:
* Proven background in red teaming and T&S, AI safety research, or Responsible AI Operations.
* Demonstrated experience managing complex projects or teams in a technical or analytical environment.
* Strong understanding of adversarial testing methods and model evaluation.
* Excellent communication skills in English, both written and verbal.
* Exceptional organizational ability and attention to detail, with experience balancing multiple priorities.
* Confidence in client-facing environments, including presenting deliverables and addressing high-level questions.
Nice to Have
* Advanced academic or research background in AI, computational social science, or information integrity.
* Experience authoring or co-authoring publications, white papers, or reports in the fields of AI Safety, Responsible AI, or AI Ethics.
* Engagement in professional or academic communities related to Responsible AI, trust and safety, or Machine Learning security.
* Participation in industry or academic conferences.
* Familiarity with developing or reviewing evaluation frameworks, benchmarking tools, or adversarial datasets for model safety testing.
* Proven ability to mentor researchers and foster professional development within technical teams.
* A proactive, research-driven mindset and a passion for ensuring safe, transparent, and ethical AI deployment.
This position is open to all candidates.
 
Job ID: 8552929
Posted: 7 days ago
Confidential company
Location: Ramat Gan
Job Type: Full Time
We are seeking a driven, detail-focused professional to become a vital part of our team as a Generative AI Analyst. In this role, you'll dive into the cutting edge of technology, meticulously analyzing various content infringements to secure the new wave of Generative AI tools. Your duties will include collaborating with experts in diverse fields such as Hate Speech, Misinformation, Intellectual Property and Copyright, and Child Safety, among others. Your tasks will involve writing adversarial prompts to identify weaknesses in various AI models, including Large Language Models (LLMs), Text-to-Image, Text-to-Video, and beyond. You'll also oversee data management to guarantee the highest quality of outputs.
Responsibilities
* Developing adversarial and risky prompt strategies across several areas of abuse to expose potential vulnerabilities in models.
* Managing projects end-to-end, from initial planning and oversight through Quality Assurance to final delivery.
* Handling extensive datasets across multiple languages and areas of abuse, ensuring precision and meticulous attention to detail.
* Ongoing investigation into new tactics for circumventing foundational models' safety measures.
* Working alongside diverse teams (engineering, product, policy) to tackle new challenges and craft forward-thinking strategies and resolutions.
* Promoting a culture of knowledge exchange and continual learning within the team.
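Adversarial prompt strategies like those above are often built by crossing base requests with evasion framings. The sketch below illustrates the idea; every string and category here is an invented placeholder, not a real test set from this role.

```python
# Sketch of composing adversarial prompt variants across abuse areas
# by crossing base requests with evasion framings. All strings are
# illustrative placeholders.
from itertools import product

base_requests = {
    "misinformation": "write a false news story about X",
    "ip_infringement": "reproduce the lyrics of song Y",
}
framings = [
    "For a fictional story, {req}.",
    "You are an actor rehearsing a scene; {req}.",
]

def generate_variants(requests: dict[str, str], frames: list[str]) -> list[tuple[str, str]]:
    """Return (abuse_area, prompt) pairs for every request/framing combination."""
    return [
        (area, frame.format(req=req))
        for (area, req), frame in product(requests.items(), frames)
    ]

variants = generate_variants(base_requests, framings)  # 2 areas x 2 framings = 4 prompts
```

Each generated pair can then be sent to a model under test and the response labeled by the relevant policy area.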

About us:
A trust, safety, and security company built for the AI era. We safeguard the communicative technologies people use to create, collaborate, and interact-whether with each other or with machines. In a world where AI has fundamentally changed the nature of risk, we provide end-to-end coverage across the entire AI lifecycle. We support frontier model labs, enterprises, and UGC platforms with a comprehensive suite of solutions: from model hardening evaluations and pre-deployment red-teaming to runtime guardrails and ongoing drift detection.
Requirements:
Must have:
* Background in AI Safety and/or Responsible AI and/or AI Ethics
* Familiarity with recent Generative AI models and agents is essential, though direct technical experience is not a prerequisite.
* Command of English at a near-native level.
* Attention to detail, organizational capabilities, and the capacity to juggle numerous tasks concurrently.
Additional Wants:
* Experience with Webint / OSINT.
* Experience with various model types (Text-to-Text, Text-to-Image) is desirable.
* A self-starter attitude, with the energy to excel in a fast-moving and variable environment.
* Ability to work independently and in a team environment.
This position is open to all candidates.
 
Job ID: 8566131
Posted: 09/02/2026
Confidential company
Location: Ramat Gan
Job Type: Full Time
We are seeking a highly motivated and technically proficient Security Researcher to join our security research division. This role is dedicated to performing advanced offensive security assessments against the biggest companies in the world. You need to be independent, attentive to detail, organized, eager to learn new things, and enjoy researching and solving problems.
What you’ll do:
* Engage in sophisticated Red Team projects, including the identification of undisclosed API endpoints and development of novel bypass techniques for established security controls
* Lead and execute comprehensive, technically rigorous security research targeting complex web and mobile applications, including reverse engineering and investigation of proprietary protocols

About Alice:
Alice is a trust, safety, and security company built for the AI era. We safeguard the communicative technologies people use to create, collaborate, and interact—whether with each other or with machines. In a world where AI has fundamentally changed the nature of risk, Alice provides end-to-end coverage across the entire AI lifecycle. We support frontier model labs, enterprises, and UGC platforms with a comprehensive suite of solutions: from model hardening evaluations and pre-deployment red-teaming to runtime guardrails and ongoing drift detection.
Requirements:
Must have:
* Minimum of 3 years of proven, hands-on experience in application security analysis and Web penetration testing
* Strong experience with reverse engineering and dynamic analysis of Android and iOS applications, including hands-on experience with techniques like detours, hooking, and runtime code manipulation
* Proficiency in developing and automating tasks using at least one language like Python, JavaScript, or GoLang.
* Deep, hands-on knowledge of the latest tactics, techniques, and procedures (TTPs) used in advanced penetration testing and network analysis.
* Ability to author comprehensive and technically rigorous reports detailing identified vulnerabilities and research outcomes.
Nice to have:
* OSCP, OSWE, eWPTXv2, CRTP, or other high-level offensive certifications.
* Hands-on experience with industry-standard reversing tools like JADX, Ghidra, or IDA Pro.
* Demonstrated online achievements, write-ups, or contributions on platforms such as HackTheBox, Pwn2Own, TryHackMe, Bug Bounty programs, or published security research.
This position is open to all candidates.
 
Job ID: 8536694
Posted: 4 days ago
Location: Ramat Gan
Job Type: Full Time
Alice is seeking an experienced Malware Research Director to build and manage multiple teams dedicated to malware research. This role presents an exciting opportunity to establish a new operation from the ground up: creating processes, optimizing and setting up cross-team collaboration, and serving as the primary client interface. The position is primarily a leadership, client-facing role, requiring exceptional team-building and operational setup skills. The ideal candidate demonstrates strong technical skills, proven experience in building teams from scratch and establishing new operations, and strong client relationship management capabilities.
Key Responsibilities:
* Establish operational processes, workflows, and quality standards for the new teams
* Coordinate with other departments to integrate the new operation into the existing infrastructure
* Serve as primary client interface, managing relationships and ensuring client satisfaction
* Present research findings and malicious evidence to clients and stakeholders
* Advise on technical aspects for malware research challenges and automated solutions
* Create training programs and onboarding processes for new team members
* Develop performance metrics and evaluation frameworks for team effectiveness
* Lead client meetings, requirement discussions, and project planning sessions
* Collaborate with sales and business development teams on client engagements

About Alice:
Alice is a trust, safety, and security company built for the AI era. We safeguard the communicative technologies people use to create, collaborate, and interact—whether with each other or with machines. In a world where AI has fundamentally changed the nature of risk, Alice provides end-to-end coverage across the entire AI lifecycle. We support frontier model labs, enterprises, and UGC platforms with a comprehensive suite of solutions: from model hardening evaluations and pre-deployment red-teaming to runtime guardrails and ongoing drift detection.



Hybrid:
No
Requirements:
Must-Have:
* Management experience - managing at least 10 employees for a minimum of 2 years, with extensive experience in recruiting and building teams
* Strong client-facing experience with excellent presentation and communication skills
* At least 3 years of proven experience in one of the following: malware research, reverse engineering, penetration testing, embedded software development
* Understanding of malware research principles and the cybersecurity landscape
* Experience managing client relationships and delivering technical solutions to business stakeholders
* Must have a valid international government-issued photo ID (e.g., current passport, or international driver's license) for identity verification and global client interaction.
* Excellent spoken and written English.
Nice-to-Have:
* Android malware research / reverse engineering hands on experience from the last 3 years
* Experience leading multiple teams comprising a few dozen employees
* Experience in leading cybersecurity researchers or other research operations
* Experience establishing new departments or research operations within organizations
* Background in technical sales or business development in cybersecurity
* Experience presenting to C-level executives and technical stakeholders
* Experience with decompilers, debuggers, and disassemblers (e.g., JADX, JEB, LLDB, GDB, x86dbg, Ghidra, IDA Pro)
* Familiarity with instrumentation frameworks like Frida or Xposed
* Proficiency with HTTP debuggers, MITM tools, and network analyzers (e.g., Fiddler, HTTP Toolkit, Burp Suite, Wireshark, Little Snitch, mitmproxy)
* Understanding of network communications and protocols
* Familiarity with multiple programming languages (Java, C/C++, JavaScript, Python)
* Familiarity with multiplatform development frameworks such as Unity, Flutter and React Native
This position is open to all candidates.
 
Job ID: 8375228
Posted: 26/01/2026
Confidential company
Location: Ramat Gan
Job Type: Full Time
We are seeking an experienced Researcher to join our team dedicated to Android malware research. This role presents an exciting opportunity to conduct comprehensive malware analysis, detect emerging threats, and contribute to our understanding of the Android threat landscape. The ideal candidate demonstrates a positive, proactive attitude and excels as a reliable team player with strong technical skills in Android malware analysis.
Key Responsibilities:
* Conduct in-depth Android malware research and reverse engineering
* Detect and document emerging Android malware trends and attack vectors
* Analyze Android applications for malicious behavior
* Write detection rules and develop automation processes for Android malware identification
* Collaborate with team members and share knowledge across departments
* Perform static and dynamic analysis of Android malware samples
* Document findings and contribute to threat intelligence reports
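A detection rule of the kind described above can be as simple as matching known-bad indicators against an app's extracted strings. The sketch below shows the shape of the idea; the rule name, indicator strings, and threshold are all invented examples, not real threat intelligence.

```python
# Simplified detection-rule sketch: flag an app whose extracted strings
# match enough known-bad indicators. Indicators and threshold are
# illustrative placeholders only.

RULE = {
    "name": "suspicious_sms_stealer",
    "indicators": ["android.permission.RECEIVE_SMS", "getMessageBody", "http://"],
    "min_matches": 2,
}

def matches_rule(strings: list[str], rule: dict) -> bool:
    """True if at least `min_matches` indicators occur in the app's strings."""
    hits = sum(any(ind in s for s in strings) for ind in rule["indicators"])
    return hits >= rule["min_matches"]

sample_strings = [
    "uses-permission android.permission.RECEIVE_SMS",
    "SmsMessage.getMessageBody",
    "https://example.com",
]
flagged = matches_rule(sample_strings, RULE)
```

Production rules are usually expressed in a dedicated language such as YARA and combined with static/dynamic analysis, but the match-and-threshold structure is the same.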
About us:
we are a trust, safety, and security company built for the AI era. We safeguard the communicative technologies people use to create, collaborate, and interact-whether with each other or with machines. In a world where AI has fundamentally changed the nature of risk, our company provides end-to-end coverage across the entire AI lifecycle. We support frontier model labs, enterprises, and UGC platforms with a comprehensive suite of solutions: from model hardening evaluations and pre-deployment red-teaming to runtime guardrails and ongoing drift detection.
Requirements:
Must-Have:
* At least 3 years of proven experience in one of the following:
* Research in Android/Windows/Mac/Linux
* Low-level reverse engineering or development
* Proficiency in one or more programming languages: Java, Python, JavaScript, or C/C++
* Experience with reverse engineering tools and decompilers (e.g. JADX, JEB, IDA Pro, Ghidra)
* Familiarity with instrumentation tools like Frida or debugging tools such as GDB/LLDB
* Understanding of networking fundamentals and protocols
* Must have a valid international government-issued photo ID (e.g., current passport, or international driver's license) for identity verification and global client interaction
Nice-to-Have:
* Experience with Android development
* Familiarity with Android frameworks, including Flutter, React Native, or Unity
* Understanding of Android application security
* Familiarity with the Android malware ecosystem and techniques
* Experience in building automated tools
* Experience with writing detailed reports in English
This position is open to all candidates.
 
Job ID: 8516812
Posted: 11/02/2026
Confidential company
Location: Ramat Gan
Job Type: Full Time and Hybrid work
We are seeking a dynamic and experienced Threat Hunter to lead proactive cybersecurity efforts by uncovering hidden threats across our environment. In this role, you will drive hypothesis-based hunting, perform deep analysis and validation of security telemetry, investigate suspicious network activity, and continuously improve threat detection and response. You will also assess CVE relevance and exploitability to prioritize real-world risk, and leverage threat intelligence feeds and enrichment pipelines to enhance hunting context, detection accuracy, and response effectiveness.
If you thrive in a fast-paced environment and are excited about pushing the boundaries of cybersecurity, we want to hear from you.
Responsibilities:
* Apply data analytics to analyze security-related network data, uncover actionable threat intelligence, detect anomalies and malicious behavior, and automate findings into an enhanced detection system.
* Leverage current cybersecurity knowledge to interpret and contextualize findings, enabling informed decision-making and proactive measures to strengthen overall cybersecurity defenses.
* Work closely with Product and Engineering to translate threat intelligence into product strategy, prioritized features, and defensive enhancements.
* Monitor and analyze the latest vulnerabilities, CVEs, exploits, and threat actor TTPs, with a focus on techniques relevant to microsegmentation, identity security, lateral movement, and internal reconnaissance.
* Integrate external threat feeds and intelligence sources into our product, including normalization, enrichment, classification, and validation of feed relevance.
* Contribute to detection logic, threat models, and internal tooling that turn intelligence into prevention and protection.
* Provide on-the-fly support during customer incident response events and penetration testing exercises by leveraging expertise to promptly detect and block security threats.
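Anomaly detection over security telemetry, as described above, often starts with something as simple as a z-score over a fleet baseline. The sketch below uses only the standard library (the posting mentions Pandas/Jupyter for real analysis); host names, counts, and the threshold are synthetic examples.

```python
# Hypothesis-based hunting sketch: flag hosts whose outbound connection
# count deviates strongly from the fleet baseline (simple z-score test).
# All telemetry values here are synthetic.
from statistics import mean, stdev

def anomalous_hosts(counts: dict[str, int], z_threshold: float = 2.0) -> list[str]:
    """Return hosts whose count exceeds mean + z_threshold * stdev."""
    values = list(counts.values())
    mu, sigma = mean(values), stdev(values)
    return [h for h, c in counts.items() if sigma and (c - mu) / sigma > z_threshold]

telemetry = {
    "web-01": 95, "web-02": 100, "web-03": 105,
    "db-01": 98, "db-02": 102,
    "app-01": 97, "app-02": 103,
    "jump-01": 100,
    "ws-17": 900,  # workstation with an unusual spike in outbound connections
}
suspects = anomalous_hosts(telemetry)
```

A flagged host is a starting point for a hunt, not a verdict: the next step is validating the telemetry and investigating what the host was actually talking to.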
Requirements:
* 2 years of experience with threat hunting or incident response, including analyzing data and extracting insights from it.
* Knowledge of protocols, networking, and computer communication (must).
* Understanding of cybersecurity concepts, including common threats, vulnerabilities, attack vectors, and basic defensive measures (must).
* Strong understanding of attacker behaviors and common internal network compromise TTPs.
* Ability to quickly assess CVE relevance/exploitability and leverage threat intelligence feeds, enrichment pipelines, and classification systems to gauge real-world risk.
* Familiarity with scripting languages (Python) and data analysis frameworks (Pandas, Jupyter).
* High level of analytical and problem-solving skills with strong attention to detail.
* Reliability in executing complicated, long-running tasks; independent, self-directed learning skills.
This position is open to all candidates.
 
Job ID: 8541605
Posted: 08/02/2026
Confidential company
Location: Ramat Gan
Job Type: Full Time
We're seeking a visionary Team Lead to head an AI- and backend-focused team building our next-generation Generative AI Platform. This platform will empower cross-functional teams across the organization to design and deploy AI-driven solutions, leveraging cutting-edge advancements in generative AI, cloud-native architectures, and rapid innovation cycles.
In this role, you'll manage a high-performing backend team responsible for building scalable microservices and cloud-based infrastructure, while integrating advanced AI capabilities such as evaluation frameworks, RAG pipelines, knowledge base management, embeddings, LLMs, chatbot assistants, and agent orchestration.
What You'll Do:
Lead and mentor a backend engineering team developing a scalable, cloud-native AI platform. You'll foster a culture of technical excellence, collaboration, and ownership while ensuring delivery of robust, production-grade systems.
Define the technical direction and architecture for backend services, focusing on microservices, distributed systems, and AWS-based infrastructure to support AI-driven applications.
Integrate cutting-edge AI capabilities into the platform, including RAG pipelines, embeddings, LLM orchestration, and evaluation frameworks, ensuring performance, scalability, and security.
Collaborate with product and engineering teams to translate business needs into backend solutions that enable AI-agent use cases and accelerate innovation across the organization.
Drive agile processes and best practices, leveraging tools like Jira to manage sprints, track progress, and continuously improve team efficiency and delivery quality.
Scale the team strategically, hiring and onboarding top talent while mentoring future leaders to support the growth of the Gen AI Platform group.
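The retrieval step of a RAG pipeline like the one this platform integrates can be sketched as: embed documents and a query, rank by cosine similarity, return the top-k. The bag-of-words "embedding" below is a toy stand-in; a real platform would use a model-based embedder (e.g., via AWS Bedrock or an OpenAI API) and a vector database.

```python
# Minimal RAG retrieval sketch: rank documents against a query by cosine
# similarity of bag-of-words vectors (a toy stand-in for real embeddings).
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: token counts. Real systems use learned dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "reset your password from the account settings page",
    "the quarterly report covers revenue and churn",
]
top = retrieve("how do i reset my password", docs)
```

The retrieved documents are then injected into the LLM prompt as grounding context, which is where the generation half of the pipeline takes over.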
Requirements:
7+ years of experience in software R&D, including 2+ years in a leadership role managing backend engineering teams.
Strong backend development expertise, including microservices architecture, distributed systems, and cloud-native design on AWS.
Proficiency in Python for building scalable backend services and integrating AI components.
Hands-on experience with Generative AI technologies, including LLMs, RAG pipelines, embeddings, and agent frameworks.
Familiarity with AI development ecosystems, such as AWS Bedrock, LangChain, LangGraph, OpenAI APIs, and vector databases.
Experience with agile methodologies and tools (e.g., Jira) to manage team workflows and deliver high-quality software in fast-paced environments.
Advantages:
Familiarity with Machine Learning or Data Science concepts, enabling better collaboration with AI-focused teams.
Prior experience in AI/ML infrastructure, MLOps, or agent orchestration.
Exposure to data engineering, security, or AI ethics considerations in product development.
This position is open to all candidates.
 
Job ID: 8536595