רובוט
היי א אי
stars

תגידו שלום לתפקיד הבא שלכם

לראשונה בישראל:
המלצות מבוססות AI שישפרו
את הסיכוי שלך למצוא עבודה

חוקר בינה מלאכותית

מסמך
מילות מפתח בקורות חיים
סימן שאלה
שאלות הכנה לראיון עבודה
עדכון משתמש
מבחני קבלה לתפקיד
שרת
שכר
משרות על המפה
 
בדיקת קורות חיים
VIP
הפוך ללקוח VIP
רגע, משהו חסר!
נשאר לך להשלים רק עוד פרט אחד:
 
שירות זה פתוח ללקוחות VIP בלבד
AllJObs VIP

חברות מובילות
כל החברות
לימודים
עומדים לרשותכם
מיין לפי: מיין לפי:
הכי חדש
הכי מתאים
הכי קרוב
טוען
סגור
לפי איזה ישוב תרצה שנמיין את התוצאות?
Geo Location Icon

לוח ללקוחות VIP בלבד
סגור
דיווח על תוכן לא הולם או מפלה
מה השם שלך?
תיאור
שליחה
סגור
v נשלח
תודה על שיתוף הפעולה
מודים לך שלקחת חלק בשיפור התוכן שלנו :)
29/10/2025
Location: Ramat Gan
Job Type: Full Time and Hybrid work
ActiveFence is seeking an experienced and detail-oriented AI Safety Team Lead to oversee complex research and delivery efforts focused on identifying and mitigating risks in Generative AI systems. In this role, you will lead a multidisciplinary team conducting adversarial testing, risk evaluations, and data-driven analyses that strengthen AI model safety and integrity. You will be responsible for ensuring high-quality project delivery, from methodology design and execution to client communication and final approval of deliverables. This position combines hands-on red teaming expertise with operational leadership, strategic thinking, and client-facing collaboration. Key Responsibilities Operational and Quality Leadership
* Oversee the production of datasets, reports, and analyses related to AI safety and red teaming activities.
* Review and approve deliverables to ensure they meet quality, methodological, and ethical standards.
* Deliver final outputs to clients following approval and provide actionable insights that address key risks and vulnerabilities.
* Offer ongoing structured feedback on the quality of deliverables and the efficiency of team workflows, driving continuous improvement. Methodology and Research Development
* Design and refine red teaming methodologies for new Responsible AI projects.
* Guide the development of adversarial testing strategies that target potential weaknesses in models across text, image, and multimodal systems.
* Support research initiatives aimed at identifying and mitigating emerging risks in Generative AI applications. Client Engagement and Collaboration
* Attend client meetings to address broader methodological or operational questions.
* Represent the red teaming function in cross-departmental collaboration with other ActiveFence teams.

About ActiveFence:
ActiveFence is the leading provider of security and safety solutions for online experiences, safeguarding more than 3 billion users, top foundation models, and the world’s largest enterprises and tech platforms every day. As a trusted ally to major technology firms and Fortune 500 brands that build user-generated and GenAI products, ActiveFence empowers security, AI, and policy teams with low-latency Real-Time Guardrails and a continuous Red Teaming program that pressure-tests systems with adversarial prompts and emerging threat techniques. Powered by deep threat intelligence, unmatched harmful-content detection, and coverage of 117+ languages, ActiveFence enables organizations to deliver engaging and trustworthy experiences at global scale while operating safely and responsibly across all threat landscapes.

Hybrid:
Yes
Requirements:
Must Have Proven background in red teaming , AI safety research, or Responsible AI operations or content moderation
* Demonstrated experience managing complex projects or teams in a technical or analytical environment.
* Strong understanding of adversarial testing methods and model evaluation.
* Excellent communication skills in English, both written and verbal.
* Exceptional organizational ability and attention to detail, with experience balancing multiple priorities.
* Confidence in client-facing environments, including presenting deliverables and addressing high-level questions. Nice to Have
* Advanced academic or research background in AI, computational social science, or information integrity.
* Experience authoring or co-authoring publications, white papers, or reports in the fields of AI Safety, Responsible AI, or AI Ethics.
* Engagement in professional or academic communities related to Responsible AI, trust and safety, or machine learning security.
* Participation in industry or academic conferences.
* Familiarity with developing or reviewing evaluation frameworks, benchmarking tools, or adversarial datasets for model safety testing.
* Proven ability to mentor researchers and foster professional development w
This position is open to all candidates.
 
Show more...
הגשת מועמדותהגש מועמדות
עדכון קורות החיים לפני שליחה
עדכון קורות החיים לפני שליחה
8390985
סגור
שירות זה פתוח ללקוחות VIP בלבד
משרות שנמחקו