משרות על המפה
 
בדיקת קורות חיים
VIP
הפוך ללקוח VIP
רגע, משהו חסר!
נשאר לך להשלים רק עוד פרט אחד:
 
שירות זה פתוח ללקוחות VIP בלבד
AllJObs VIP

חברות מובילות
כל החברות
כל המידע למציאת עבודה
כל מה שרציתם לדעת על מבחני המיון ולא העזתם לשאול
זומנתם למבחני מיון ואין לכם מושג לקראת מה אתם ה...
קרא עוד >
לימודים
עומדים לרשותכם
מיין לפי: מיין לפי:
הכי חדש
הכי מתאים
הכי קרוב
טוען
סגור
לפי איזה ישוב תרצה שנמיין את התוצאות?
Geo Location Icon

משרות בלוח החם
סגור
דיווח על תוכן לא הולם או מפלה
מה השם שלך?
תיאור
שליחה
סגור
v נשלח
תודה על שיתוף הפעולה
מודים לך שלקחת חלק בשיפור התוכן שלנו :)
לפני 9 שעות
דרושים בNOW
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
This is a high?impact, hands-on role within a small, elite team responsible for building and deploying advanced ML-driven trading signals and data?powered insights used across the companys core products.
You will work closely with a global manager and collaborate with two additional ML engineers, contributing directly to the next generation of data, modeling, and signal?generation infrastructure.
This position is ideal for someone who combines strong ML engineering fundamentals with curiosity for market data, experimentation, and production?grade deployment.

Build and maintain complex SQL-based pipelines for modeling datasets
Design and execute large?scale experiments with proper cross?validation, leakage control, calibration, and reproducible backtesting
Develop ML models using state?of?the?art techniques and deploy them into production environments
Requirements:
Python + production experience:
Pandas/Polars, SQL/BigQuery, FastAPI, Docker, CI/CD
Modeling expertise:
XGBoost / PyTorch, calibration, class imbalance handling, walk?forward CV, leakage control
Experimentation at scale:
MLflow / DVC, reproducible backtests, GCP/Vertex jobs, orchestration frameworks
Capital markets:
Strong interest is required; understanding of equities/portfolio basics is important
(Algo?trading experience is an advantage but not mandatory)
Familiarity with quirks of financial data and portfolio construction (typically the longest to master)
Senior individual contributor
No people management
Full ownership across the ML lifecycle
Works closely with the ML/Quant team and global leadership
This position is open to all candidates.
 
Show more...
הגשת מועמדות
עדכון קורות החיים לפני שליחה
8433141
סגור
שירות זה פתוח ללקוחות VIP בלבד
סגור
דיווח על תוכן לא הולם או מפלה
מה השם שלך?
תיאור
שליחה
סגור
v נשלח
תודה על שיתוף הפעולה
מודים לך שלקחת חלק בשיפור התוכן שלנו :)
 
משרה בלעדית
4 ימים
מיקום המשרה: פתח תקווה
סוג משרה: משרה מלאה
לחברה המתמחה בפיתוח ואספקה של מוצרים ויישומים עתירי ידע בתחומי ה- Computer Vision וה-AI בפתח תקוה.
דרוש/ה
מפתח/ת אלגוריתם ו-AI
העבודה כוללת מחקר ופיתוח אלגוריתמים מתקדמים בתחום ה-Computer Vision, המיפוי, GIS וה-Visual Generative A.I..
דרישות:
דרישות חובה:
רקע מתמטי/תיאורטי מוצק ידע מעמיק באלגוריתמים ושיטותDeep Learning
ניסיון של שנתיים לפחות בעיצוב, פיתוח ושינוי ארכיטקטורות של רשתות AI לראייה ממוחשבת
מעורבות ואחריות מקצה לקצה בתהליך הפיתוח: תכנון, פיתוח ו-deployment בחומרת קצה
ניסיון בעבודה עם מידע 3D בנושאים הבאים: סגמנטציה, פירוק, היתוך, שיפור, השלמה

יתרון משמעותי:
ניסיון בפיתוח ואינטגרציה ב++ C להטמעה על חומרות קצה שונות.

יתרון נוסף:
Generative 3D
נסיון עם אלגוריתמים קלאסיים של ראייה ממוחשבת בתחומי:
o SLAM
o Image Registration
o 3D Reconstruction המשרה מיועדת לנשים ולגברים כאחד.
 
עוד...
הגשת מועמדות
עדכון קורות החיים לפני שליחה
8389365
סגור
שירות זה פתוח ללקוחות VIP בלבד
לוח ללקוחות VIP בלבד
סגור
דיווח על תוכן לא הולם או מפלה
מה השם שלך?
תיאור
שליחה
סגור
v נשלח
תודה על שיתוף הפעולה
מודים לך שלקחת חלק בשיפור התוכן שלנו :)
04/12/2025
Location: Ramat Gan
Job Type: Full Time
What We Are Looking For As a Red Team Specialist focused on GenAI models, you will play a critical role in safeguarding the security and integrity of commercial cutting-edge AI technologies. Your primary responsibility will be to analyze and test commercial GenAI systems including, but not limited to, language models, image generation models, and related infrastructure. The objective is to identify vulnerabilities, assess risks, and deliver actionable insights that strengthen AI models and guardrails against potential threats. Key Responsibilities
* Execute sophisticated and comprehensive attacks on generative foundational models and agentic frameworks.
* Assess the security posture of AI models and infrastructure, identifying weaknesses and potential threats.
* Collaborate with security teams to design and implement effective risk mitigation strategies that enhance model resilience.
* Apply innovative testing methodologies to ensure state-of-the-art security practices.
* Document all red team activities, findings, and recommendations with precision and clarity.

About ActiveFence:
ActiveFence is the leading provider of security and safety solutions for online experiences, safeguarding more than 3 billion users, top foundation models, and the world’s largest enterprises and tech platforms every day. As a trusted ally to major technology firms and Fortune 500 brands that build user-generated and GenAI products, ActiveFence empowers security, AI, and policy teams with low-latency Real-Time Guardrails and a continuous Red Teaming program that pressure-tests systems with adversarial prompts and emerging threat techniques. Powered by deep threat intelligence, unmatched harmful-content detection, and coverage of 117+ languages, ActiveFence enables organizations to deliver engaging and trustworthy experiences at global scale while operating safely and responsibly across all threat landscapes.

Hybrid:
Yes
Requirements:
Must-Have
* Strong understanding of AI architecture, frameworks and agentic applications.
* Hands on experience in AI vulnerability research.
* Minimum of 3 years of experience in offensive cybersecurity, with a focus on penetration testing.
* Exceptional analytical, problem-solving, and communication skills.
* Ability to thrive in a fast-paced, dynamic environment. Nice-to-Have
* Bachelor’s or Master’s degree in Computer Science, Information Security, or a related field.
* Advanced certifications in offensive cybersecurity (e.g., OSWE, OSCE3, SEC542, SEC522).
* Proficiency in Python.
* Webint / OSINT experience.
This position is open to all candidates.
 
Show more...
הגשת מועמדותהגש מועמדות
עדכון קורות החיים לפני שליחה
עדכון קורות החיים לפני שליחה
8375262
סגור
שירות זה פתוח ללקוחות VIP בלבד
סגור
דיווח על תוכן לא הולם או מפלה
מה השם שלך?
תיאור
שליחה
סגור
v נשלח
תודה על שיתוף הפעולה
מודים לך שלקחת חלק בשיפור התוכן שלנו :)
13/11/2025
Location: Ramat Gan
Job Type: Full Time
About the position As a Red Team Specialist focused on Generative AI Models, you will play a critical role in enhancing the security and integrity of our cutting-edge AI technologies. Your primary responsibility will be to conduct analysis and testing of our generative AI systems, including but not limited to language models, image generation models, and any related infrastructure. Your objective is to help clients secure their AI models and frameworks by identifying weaknesses, assessing risks, and providing clear steps for improvement.
Key Responsibilities:
* Simulated Cyber Attacks: Conduct sophisticated and comprehensive simulated attacks on generative AI models and their operating environments to uncover vulnerabilities.
* Vulnerability Assessment: Evaluate the security posture of AI models and infrastructure, identifying weaknesses and potential threats.
* Risk Analysis: Perform thorough risk analysis to determine the impact of identified vulnerabilities and prioritize mitigation efforts.
* Mitigation Strategies: Collaborate with development and security teams to develop effective strategies to mitigate identified risks and enhance model resilience.
* Research and Innovation: Stay abreast of the latest trends and developments in AI security, ethical hacking, and cyber threats. Apply innovative testing methodologies to ensure cutting-edge security practices.
* Documentation and Reporting: Maintain detailed documentation of all red team activities, findings, and recommendations. Prepare and present reports to senior management and relevant stakeholders.
Requirements:
Must-Have
* Proven experience in AI vulnerabilities analysis
* Strong understanding of AI technologies and their underlying architectures, especially generative models and agentic frameworks.
* At Least 5 years of experience in Web Penetration testing.
* Excellent analytical, problem-solving, and communication skills.
* Ability to work in a fast-paced, ever-changing environment. Nice-to-Have
* Proficiency in Python or NodeJS
* Advanced Certifications in offensive cybersecurity (e.g. OSWE, OSCE3, SEC542, SEC522) are highly desirable.
* Familiarity with agentic frameworks and agentic development experience
* Bachelor’s or Master’s degree in Computer Science, Information Security, or a related field.
* Proven records for vulnerability disclosure, such as CVE
This position is open to all candidates.
 
Show more...
הגשת מועמדותהגש מועמדות
עדכון קורות החיים לפני שליחה
עדכון קורות החיים לפני שליחה
8412440
סגור
שירות זה פתוח ללקוחות VIP בלבד
סגור
דיווח על תוכן לא הולם או מפלה
מה השם שלך?
תיאור
שליחה
סגור
v נשלח
תודה על שיתוף הפעולה
מודים לך שלקחת חלק בשיפור התוכן שלנו :)
Location: Europe
Job Type: Full Time
We are seeking an experienced and detail-oriented Red Teaming team lead to oversee complex research and delivery efforts focused on identifying and mitigating risks in Generative AI systems. In this role, you will lead a multidisciplinary team conducting adversarial testing, risk evaluations, and data -driven analyses that strengthen AI model safety and integrity. You will be responsible for ensuring high-quality project delivery, from methodology design and execution to client communication and final approval of deliverables. This position combines hands-on red teaming expertise with operational leadership, strategic thinking, and client-facing collaboration. Key Responsibilities Operational and Quality Leadership
* Oversee the production of datasets, reports, and analyses related to AI safety and red teaming activities.
* Review and approve deliverables to ensure they meet quality, methodological, and ethical standards.
* Deliver final outputs to clients following approval and provide actionable insights that address key risks and vulnerabilities.
* Offer ongoing structured feedback on the quality of deliverables and the efficiency of team workflows, driving continuous improvement. Methodology and Research Development
* Design and refine red teaming methodologies for new Responsible AI projects.
* Guide the development of adversarial testing strategies that target potential weaknesses in models across text, image, and multimodal systems.
* Support research initiatives aimed at identifying and mitigating emerging risks in Generative AI applications. Client Engagement and Collaboration
* Attend client meetings to address broader methodological or operational questions.
* Represent the red teaming function in cross-departmental collaboration with other teams.
About us:
We are the leading provider of security and safety solutions for online experiences, safeguarding more than 3 billion users, top foundation models, and the worlds largest enterprises and tech platforms every day. As a trusted ally to major technology firms and Fortune 500 brands that build user-generated and GenAI products, we empower security, AI, and policy teams with low-latency Real-Time Guardrails and a continuous Red Teaming program that pressure-tests systems with adversarial prompts and emerging threat techniques. Powered by deep threat intelligence, unmatched harmful-content detection, and coverage of 117+ languages, we enable organizations to deliver engaging and trustworthy experiences at global scale while operating safely and responsibly across all threat landscapes.
Hybrid:
Yes
Requirements:
Must Have Proven background in red teaming, AI safety research, or Responsible AI Operations.
* Demonstrated experience managing complex projects or teams in a technical or analytical environment.
* Strong understanding of adversarial testing methods and model evaluation.
* Excellent communication skills in English, both written and verbal.
* Exceptional organizational ability and attention to detail, with experience balancing multiple priorities.
* Confidence in client-facing environments, including presenting deliverables and addressing high-level questions. Nice to Have
* Advanced academic or research background in AI, computational social science, or information integrity.
* Experience authoring or co-authoring publications, white papers, or reports in the fields of AI Safety, Responsible AI, or AI Ethics.
* Engagement in professional or academic communities related to Responsible AI, trust and safety, or Machine Learning security.
* Participation in industry or academic conferences.
* Familiarity with developing or reviewing evaluation frameworks, benchmarking tools, or adversarial datasets for model safety testing.
 
This position is open to all candidates.
 
Show more...
הגשת מועמדותהגש מועמדות
עדכון קורות החיים לפני שליחה
עדכון קורות החיים לפני שליחה
8390988
סגור
שירות זה פתוח ללקוחות VIP בלבד
סגור
דיווח על תוכן לא הולם או מפלה
מה השם שלך?
תיאור
שליחה
סגור
v נשלח
תודה על שיתוף הפעולה
מודים לך שלקחת חלק בשיפור התוכן שלנו :)
29/10/2025
Location: Ramat Gan
Job Type: Full Time and Hybrid work
ActiveFence is seeking an experienced and detail-oriented AI Safety Team Lead to oversee complex research and delivery efforts focused on identifying and mitigating risks in Generative AI systems. In this role, you will lead a multidisciplinary team conducting adversarial testing, risk evaluations, and data-driven analyses that strengthen AI model safety and integrity. You will be responsible for ensuring high-quality project delivery, from methodology design and execution to client communication and final approval of deliverables. This position combines hands-on red teaming expertise with operational leadership, strategic thinking, and client-facing collaboration. Key Responsibilities Operational and Quality Leadership
* Oversee the production of datasets, reports, and analyses related to AI safety and red teaming activities.
* Review and approve deliverables to ensure they meet quality, methodological, and ethical standards.
* Deliver final outputs to clients following approval and provide actionable insights that address key risks and vulnerabilities.
* Offer ongoing structured feedback on the quality of deliverables and the efficiency of team workflows, driving continuous improvement. Methodology and Research Development
* Design and refine red teaming methodologies for new Responsible AI projects.
* Guide the development of adversarial testing strategies that target potential weaknesses in models across text, image, and multimodal systems.
* Support research initiatives aimed at identifying and mitigating emerging risks in Generative AI applications. Client Engagement and Collaboration
* Attend client meetings to address broader methodological or operational questions.
* Represent the red teaming function in cross-departmental collaboration with other ActiveFence teams.

About ActiveFence:
ActiveFence is the leading provider of security and safety solutions for online experiences, safeguarding more than 3 billion users, top foundation models, and the world’s largest enterprises and tech platforms every day. As a trusted ally to major technology firms and Fortune 500 brands that build user-generated and GenAI products, ActiveFence empowers security, AI, and policy teams with low-latency Real-Time Guardrails and a continuous Red Teaming program that pressure-tests systems with adversarial prompts and emerging threat techniques. Powered by deep threat intelligence, unmatched harmful-content detection, and coverage of 117+ languages, ActiveFence enables organizations to deliver engaging and trustworthy experiences at global scale while operating safely and responsibly across all threat landscapes.

Hybrid:
Yes
Requirements:
Must Have Proven background in red teaming , AI safety research, or Responsible AI operations or content moderation
* Demonstrated experience managing complex projects or teams in a technical or analytical environment.
* Strong understanding of adversarial testing methods and model evaluation.
* Excellent communication skills in English, both written and verbal.
* Exceptional organizational ability and attention to detail, with experience balancing multiple priorities.
* Confidence in client-facing environments, including presenting deliverables and addressing high-level questions. Nice to Have
* Advanced academic or research background in AI, computational social science, or information integrity.
* Experience authoring or co-authoring publications, white papers, or reports in the fields of AI Safety, Responsible AI, or AI Ethics.
* Engagement in professional or academic communities related to Responsible AI, trust and safety, or machine learning security.
* Participation in industry or academic conferences.
* Familiarity with developing or reviewing evaluation frameworks, benchmarking tools, or adversarial datasets for model safety testing.
* Proven ability to mentor researchers and foster professional development w
This position is open to all candidates.
 
Show more...
הגשת מועמדותהגש מועמדות
עדכון קורות החיים לפני שליחה
עדכון קורות החיים לפני שליחה
8390985
סגור
שירות זה פתוח ללקוחות VIP בלבד
משרות שנמחקו