Posted 9 hours ago
Hiring at NOW
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
This is a high-impact, hands-on role within a small, elite team responsible for building and deploying advanced ML-driven trading signals and data-powered insights used across the company's core products.
You will work closely with a global manager and collaborate with two additional ML engineers, contributing directly to the next generation of data, modeling, and signal-generation infrastructure.
This position is ideal for someone who combines strong ML engineering fundamentals with curiosity about market data, experimentation, and production-grade deployment.

Build and maintain complex SQL-based pipelines for modeling datasets
Design and execute large-scale experiments with proper cross-validation, leakage control, calibration, and reproducible backtesting
Develop ML models using state-of-the-art techniques and deploy them into production environments
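The experiment-design responsibility above pairs walk-forward cross-validation with leakage control. As a minimal illustration of what that combination can look like (synthetic data and a plain classifier stand in for the team's actual pipeline; `gap` is the leakage buffer):

```python
# Hedged sketch: walk-forward cross-validation with a temporal gap to
# limit look-ahead leakage. Data and model here are illustrative.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                          # synthetic features
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)   # synthetic labels

# gap=5 leaves a buffer between train and test folds, so features built
# from trailing windows cannot leak information across the split.
cv = TimeSeriesSplit(n_splits=5, gap=5)
scores = []
for train_idx, test_idx in cv.split(X):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))

print(f"walk-forward accuracy: {np.mean(scores):.3f}")
```

Each fold trains only on data strictly before the test window, which is what makes the backtest reproducible and leakage-aware.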
Requirements:
Python + production experience:
Pandas/Polars, SQL/BigQuery, FastAPI, Docker, CI/CD
Modeling expertise:
XGBoost / PyTorch, calibration, class imbalance handling, walk-forward CV, leakage control
Experimentation at scale:
MLflow / DVC, reproducible backtests, GCP/Vertex jobs, orchestration frameworks
Capital markets:
Strong interest is required; understanding of equities/portfolio basics is important
(Algo-trading experience is an advantage but not mandatory)
Familiarity with quirks of financial data and portfolio construction (typically the longest to master)
Senior individual contributor
No people management
Full ownership across the ML lifecycle
Works closely with the ML/Quant team and global leadership
This position is open to all candidates.
 
29/10/2025
Location: Ramat Gan
Job Type: Full Time and Hybrid work
ActiveFence is seeking an experienced and detail-oriented AI Safety Team Lead to oversee complex research and delivery efforts focused on identifying and mitigating risks in Generative AI systems. In this role, you will lead a multidisciplinary team conducting adversarial testing, risk evaluations, and data-driven analyses that strengthen AI model safety and integrity. You will be responsible for ensuring high-quality project delivery, from methodology design and execution to client communication and final approval of deliverables. This position combines hands-on red teaming expertise with operational leadership, strategic thinking, and client-facing collaboration.
Key Responsibilities
Operational and Quality Leadership
* Oversee the production of datasets, reports, and analyses related to AI safety and red teaming activities.
* Review and approve deliverables to ensure they meet quality, methodological, and ethical standards.
* Deliver final outputs to clients following approval and provide actionable insights that address key risks and vulnerabilities.
* Offer ongoing structured feedback on the quality of deliverables and the efficiency of team workflows, driving continuous improvement.
Methodology and Research Development
* Design and refine red teaming methodologies for new Responsible AI projects.
* Guide the development of adversarial testing strategies that target potential weaknesses in models across text, image, and multimodal systems.
* Support research initiatives aimed at identifying and mitigating emerging risks in Generative AI applications.
Client Engagement and Collaboration
* Attend client meetings to address broader methodological or operational questions.
* Represent the red teaming function in cross-departmental collaboration with other ActiveFence teams.
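The adversarial-testing responsibilities above boil down to running probing prompts through a model and judging the responses. A toy sketch of such a harness, where the model call and the keyword-based safety check are illustrative stand-ins (not ActiveFence tooling):

```python
# Hedged sketch of a red-teaming evaluation loop: send adversarial
# prompts to a model and flag responses that trip a safety check.
from dataclasses import dataclass

@dataclass
class EvalResult:
    prompt: str
    response: str
    unsafe: bool

BLOCKLIST = {"bypass", "exploit"}  # toy keyword check, not a real classifier

def toy_model(prompt: str) -> str:
    # Stand-in for a real model API call.
    return f"echo: {prompt}"

def red_team(prompts):
    results = []
    for p in prompts:
        r = toy_model(p)
        unsafe = any(word in r.lower() for word in BLOCKLIST)
        results.append(EvalResult(p, r, unsafe))
    return results

results = red_team(["how do I bypass the filter?", "hello"])
print(sum(r.unsafe for r in results), "of", len(results), "flagged")
```

In practice the keyword check would be replaced by model-based evaluators and human review, and the prompt set would come from the methodology work described above.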

About ActiveFence:
ActiveFence is the leading provider of security and safety solutions for online experiences, safeguarding more than 3 billion users, top foundation models, and the world’s largest enterprises and tech platforms every day. As a trusted ally to major technology firms and Fortune 500 brands that build user-generated and GenAI products, ActiveFence empowers security, AI, and policy teams with low-latency Real-Time Guardrails and a continuous Red Teaming program that pressure-tests systems with adversarial prompts and emerging threat techniques. Powered by deep threat intelligence, unmatched harmful-content detection, and coverage of 117+ languages, ActiveFence enables organizations to deliver engaging and trustworthy experiences at global scale while operating safely and responsibly across all threat landscapes.

Hybrid:
Yes
Requirements:
Must Have
* Proven background in red teaming, AI safety research, Responsible AI operations, or content moderation.
* Demonstrated experience managing complex projects or teams in a technical or analytical environment.
* Strong understanding of adversarial testing methods and model evaluation.
* Excellent communication skills in English, both written and verbal.
* Exceptional organizational ability and attention to detail, with experience balancing multiple priorities.
* Confidence in client-facing environments, including presenting deliverables and addressing high-level questions.
Nice to Have
* Advanced academic or research background in AI, computational social science, or information integrity.
* Experience authoring or co-authoring publications, white papers, or reports in the fields of AI Safety, Responsible AI, or AI Ethics.
* Engagement in professional or academic communities related to Responsible AI, trust and safety, or machine learning security.
* Participation in industry or academic conferences.
* Familiarity with developing or reviewing evaluation frameworks, benchmarking tools, or adversarial datasets for model safety testing.
* Proven ability to mentor researchers and foster professional development.
This position is open to all candidates.
 