Posted 3 hours ago
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior ML Engineer - Applied AI Engineering Group
The Dream Job
It starts with you - an engineer driven to build the ML platform that turns research into reliable, production-grade intelligence. You care about reproducibility, low-friction experimentation, and infrastructure that earns the trust of the scientists and researchers who depend on it daily. You'll architect and ship our ML platform - training pipelines, model serving, feature stores, experiment tracking, and compute orchestration - turning models into production capabilities across cloud and on-prem, including air-gapped deployments. A significant part of the platform supports large language models, with unique challenges across training, evaluation, and inference in mission-critical environments.
If you want to make a meaningful impact, join our mission and build the ML platform that drives Sovereign AI products - this role is for you.
The Dream-Maker Responsibilities
Build and operate ML training infrastructure - distributed training pipelines, compute scheduling, and reproducible experiment workflows that data scientists rely on daily.
Own model serving and inference systems - packaging, deployment, autoscaling, A/B testing, canary rollouts, and latency/cost optimization for production models.
Run feature stores, model registries, and dataset versioning - enabling self-serve feature engineering, model lineage, and reproducible experiments across teams.
Build experiment tracking and evaluation infrastructure - automated evals, comparison dashboards, drift detection, and monitoring that give teams visibility into model behavior and performance.
Build and maintain production pipelines for training, fine-tuning workflows, and serving domain models - owning reliability, reproducibility, and scale.
Build and maintain the monitoring and observability layer - model performance tracking, data and prediction drift detection, data quality validation, and alerting.
Improve performance and cost across the ML stack - training throughput, inference latency, batch vs. real-time tradeoffs, and compute cost management.
Ship shared tooling - libraries, templates, CI/CD for models, IaC, and runbooks - while collaborating across Data Platform, AI, Data Science, Engineering, and DevOps. Own architecture, documentation, and operations end-to-end.
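The canary rollouts and A/B testing mentioned above are commonly implemented as deterministic traffic splitting at the serving layer. A minimal sketch, assuming a hash-bucket scheme (the function name and threshold are illustrative, not part of this listing):

```python
import hashlib

def route_model(request_id: str, canary_fraction: float = 0.05) -> str:
    """Deterministically route a request to the canary or stable model.

    Hashing the request id (rather than random sampling) keeps each
    caller pinned to one variant, which simplifies A/B comparison.
    """
    digest = hashlib.sha256(request_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "canary" if bucket < canary_fraction else "stable"
```

Raising `canary_fraction` gradually (5% → 25% → 100%) while watching latency and error metrics is the usual rollout progression.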
Requirements:
5+ years in software engineering, with 2+ years focused on ML infrastructure, MLOps, or data-intensive systems
Engineering craft - Strong Python, distributed systems design, testing, secure coding, API design, CI/CD discipline, and production ownership.
ML platform & serving - Model serving frameworks (e.g., Triton, TorchServe, vLLM, Ray Serve); model packaging, deployment pipelines, and inference optimization
Training infrastructure - Distributed training pipelines (e.g., PyTorch, JAX); experiment orchestration and reproducibility
ML lifecycle tooling - Feature stores, model registries, experiment tracking (e.g., MLflow, Weights & Biases); dataset versioning and lineage
Data pipelines - Building training and inference data pipelines; familiarity with tools like Spark, Airflow/Dagster, and streaming ingestion
Comfortable with AI coding tools like Cursor, Claude Code, or Copilot
Nice to Have:
Experience operating in constrained environments - on-premise, private cloud, or air-gapped deployments
Hands-on experience with simulation environments, synthetic data generation, or reinforcement learning workflows
Platform & infra - Kubernetes, AWS, Terraform or similar IaC, CI/CD, observability, incident response
Hands-on data science or applied ML experience.
This position is open to all candidates.
 
Listing ID: 8603632
Posted 3 hours ago
Location: Tel Aviv-Yafo
Job Type: Full Time
Required ML Engineering Team Lead - Applied AI Engineering Group
The Dream Job
It starts with you - a technical leader driven to build both the ML platform and the engineering team behind it. You care about reliable infrastructure, great developer experience, and growing engineers through real ownership. You'll set the technical direction for our ML platform - training pipelines, model serving, feature stores, experiment tracking, and compute orchestration - shaping how models reach production across cloud and on-prem, including air-gapped deployments. A significant part of the platform supports large language models, with unique challenges across training, evaluation, and inference in mission-critical environments. You stay close enough to the codebase to debug production issues, unblock your engineers, and make sound architecture calls.
If you want to make a meaningful impact, join our mission and lead the team that builds the ML platform driving Sovereign AI products - this role is for you.
The Dream-Maker Responsibilities
Set technical direction for the ML platform - training pipelines, model serving, feature stores, experiment tracking, and compute orchestration - through RFCs, prototypes, design reviews, and build-vs-buy decisions
Lead and grow a team of ML Engineers - hire, mentor, pair on hard problems, and raise the bar through code and design reviews
Contribute to critical systems, debug production issues, and maintain deep context on the codebase to inform technical decisions
Own operational excellence for model serving - set and enforce SLAs, run capacity planning, and keep compute costs predictable
Establish ML engineering standards - reproducible experiments, automated evals, model packaging, CI/CD for models, and observability
Support the full lifecycle of our models - from training on domain-specific data to low-latency inference powering production systems
Work closely with Data Platform, AI, Data Science, and Product teams - translate business priorities into engineering work and manage cross-team dependencies
Measure and improve developer experience - deploy friction, onboarding time, CI turnaround - as seriously as model performance.
Requirements:
6+ years in software engineering, ML engineering, or platform engineering, with hands-on experience building and operating ML infrastructure at scale.
2+ years leading an engineering team - hiring, mentoring, conducting design reviews, and shipping alongside your team
Engineering craft - Strong Python, distributed systems design, testing, secure coding, API design, CI/CD discipline, and production ownership.
ML platform & serving - Model serving frameworks (e.g., Triton, TorchServe, vLLM, Ray Serve); model packaging, deployment pipelines, and inference optimization
Training infrastructure - Distributed training pipelines (e.g., PyTorch, JAX); experiment orchestration and reproducibility
ML lifecycle tooling - Feature stores, model registries, experiment tracking (e.g., MLflow, Weights & Biases); dataset versioning and lineage
Data pipelines - Building training and inference data pipelines; familiarity with tools like Spark, Airflow/Dagster, and streaming ingestion
Comfortable with AI coding tools like Cursor, Claude Code, or Copilot
Nice to Have:
Experience operating in constrained environments - on-premise, private cloud, or air-gapped deployments
Hands-on experience with simulation environments, synthetic data generation, or reinforcement learning workflows
Platform & infra - Kubernetes, AWS, Terraform or similar IaC, CI/CD, observability, incident response
Hands-on data science or applied ML experience.
This position is open to all candidates.
 
Listing ID: 8603603
Posted 3 hours ago
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior AI Engineer - Applied AI Engineering Group
The Dream Job
It starts with you - an engineer driven to build the agentic AI platform that turns LLMs into reliable, production-grade capabilities. You care about clean APIs, well-defined service boundaries, and systems that teams can build on with confidence. Dream is AI-first across the board - every team builds and operates agents. You'll architect and ship the platform that makes this possible: agent orchestration frameworks, LLM gateways, evaluation pipelines, tool-calling infrastructure, and retrieval systems. Without this platform, agents don't ship - you own the layer that turns AI research into Sovereign AI products, deployed across cloud and on-prem environments.
If you want to make a meaningful impact, join our mission and build the agentic AI platform that drives Sovereign AI products - this role is for you.
The Dream-Maker Responsibilities
Design and build agentic systems - single and multi-agent workflows with planning, memory, context engineering, and tool use - for both internal automation and product-facing autonomous capabilities operating over long time horizons.
Build and operate the AI platform layer - LLM gateways, prompt management, structured output handling, tool-calling infrastructure, and cost/latency optimization - deployed on Kubernetes, consumed by every team for their agentic work.
Own the agent framework layer - orchestration primitives, execution environments, state management, and sandboxed tool execution - giving every team the building blocks to create and operate their own agents.
Build evaluation infrastructure that gives teams confidence in agent behavior - automated LLM and agent evals for quality, correctness, safety, latency, cost, and regressions, including human-in-the-loop oversight for mission-critical workflows.
Productionize and harden backend services (APIs, gRPC, async workers) that integrate LLMs - with proper error handling, retries, circuit breakers, and high-availability patterns.
Own RAG pipelines and retrieval systems - indexing, chunking, embedding, vector database management, filtering, and relevance tuning for production retrieval.
Optimize performance and cost across the AI stack - model routing, caching, batching, and inference cost management.
Ship shared tooling - libraries, SDKs, agent templates, and documentation - while working closely with ML Platform, Data Platform, DevOps, and other teams across the Applied AI Engineering group. Own architecture, documentation, and operations end-to-end.
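The reliability patterns named above (retries, circuit breakers) can be sketched in a few lines of pure Python. This is a minimal illustration of the circuit-breaker idea, with assumed thresholds, not an implementation from the role:

```python
import time

class CircuitBreaker:
    """Open the circuit after `max_failures` consecutive errors,
    refusing calls for `reset_after` seconds before allowing a retry."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

Wrapping LLM gateway calls in a breaker like this keeps a flapping upstream model from stalling every agent that depends on it.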
Requirements:
5+ years in backend or distributed systems engineering, with 2+ years focused on production systems that integrate AI/ML models or LLMs.
Engineering craft - Strong Python, Go, or Java, system architecture, API design, testing, and secure coding practices.
Agentic systems - Experience designing and building agent orchestration, tool-use systems, and autonomous workflows; familiarity with frameworks like LangGraph or similar, or having built equivalent from scratch
Backend engineering - Experience building production APIs and services (FastAPI or similar); async programming, service architecture, high-availability, and reliability patterns (retries, circuit breakers, backpressure)
LLM integration - Hands-on experience integrating LLMs via SDKs and APIs; context engineering, structured outputs, tool calling, and model routing
RAG & retrieval - Experience with embedding pipelines, vector databases (e.g., Milvus, Qdrant, Pinecone), chunking strategies, and relevance tuning
Evaluation & observability - Experience designing LLM and agent evals, monitoring AI system quality, and building observability for non-deterministic systems
Nice to Have:
Platform & infra - Kubernetes, AWS, Terraform or similar IaC, CI/CD, container orchestration, deploying and operating production services
Experience with MCP or similar tool-use protocols for agent-to-service communication
Hands-on ML experience.
This position is open to all candidates.
 
Listing ID: 8603620
30/03/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Required ML Data Engineer
Israel: Tel Aviv / Hybrid
R&D | Full Time | Job Id: 24792
Your Impact & Responsibilities:
As a Data Engineer - AI Technologies, you will be responsible for building and operating the data foundation that enables our LLM and ML research: from ingestion and augmentation, through labeling and quality control, to efficient data delivery for training and evaluation.
You will:
Own data pipelines for LLM training and evaluation
Design, build and maintain scalable pipelines to ingest, transform and serve large-scale text, log, code and semi-structured data from multiple products and internal systems.
Drive data augmentation and synthetic data generation
Implement and operate pipelines for data augmentation (e.g., prompt-based generation, paraphrasing, negative sampling, multi-positive pairs) in close collaboration with ML Research Engineers.
Build tagging, labeling and annotation workflows
Support human-in-the-loop labeling, active learning loops and semi-automated tagging. Work with domain experts to implement tools, schemas and processes for consistent, high-quality annotations.
Ensure data quality, observability and governance
Define and monitor data quality checks (coverage, drift, anomalies, duplicates, PII), manage dataset versions, and maintain clear documentation and lineage for training and evaluation datasets.
Optimize training data flows for efficiency and cost
Design storage layouts and access patterns that reduce training time and cost (e.g., sharding, caching, streaming). Work with ML engineers to make sure the right data arrives at the right place, in the right format.
Build and maintain data infrastructure for LLM workloads
Work with cloud and platform teams to develop robust, production-grade infrastructure: data lakes / warehouses, feature stores, vector stores, and high-throughput data services used by training jobs and offline evaluation.
Collaborate closely with ML Research Engineers and security experts
Translate modeling and security requirements into concrete data tasks: dataset design, splits, sampling strategies, and evaluation data construction for specific security use.
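The data quality checks described above (duplicates, PII) often start as simple scans over the corpus. A minimal sketch, where the email regex and report fields are simplified assumptions rather than this team's actual tooling:

```python
import hashlib
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def quality_report(records: list[str]) -> dict:
    """Flag exact duplicates and records containing email-like PII."""
    seen, duplicates, pii = set(), 0, 0
    for text in records:
        key = hashlib.sha1(text.encode()).hexdigest()
        if key in seen:
            duplicates += 1
        seen.add(key)
        if EMAIL_RE.search(text):
            pii += 1
    return {"total": len(records), "duplicates": duplicates, "pii": pii}
```

Production pipelines extend this with near-duplicate detection (e.g., MinHash), drift statistics, and dataset-version-aware lineage, but the shape of the check is the same.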
Requirements:
3+ years of hands-on experience as a Data Engineer or ML/Data Engineer, ideally in a product or platform team.
Strong programming skills in Python and experience with at least one additional language commonly used for data / backend (e.g., SQL, Scala, or Java).
Solid experience building ETL / ELT pipelines and batch/stream processing using tools such as Spark, Beam, Flink, Kafka, Airflow, Argo, or similar.
Experience working with cloud data platforms (e.g., AWS, GCP, Azure) and modern data storage technologies (object stores, data warehouses, data lakes).
Good understanding of data modeling, schema design, partitioning strategies and performance optimization for large datasets.
Familiarity with ML / LLM workflows: train/validation/test splits, dataset versioning, and the basics of model training and evaluation (you don't need to be the primary model researcher, but you understand what the models need from the data).
Strong software engineering practices: version control, code review, testing, CI/CD, and documentation.

Ability to work independently and in collaboration with ML engineers, researchers and security experts, and to translate high-level requirements into concrete data engineering tasks. 
Nice to Have 
Experience supporting LLM or NLP workloads, including dataset construction for pre-training / fine-tuning, or retrieval-augmented generation (RAG) pipelines. 
Familiarity with ML tooling such as experiment tracking (e.g., Weights & Biases, MLflow) and ML-focused data tooling (feature stores, vector databases). 
Background in security / cyber domains (logs, alerts, incidents, SOC workflows) or other high-volume, high-variance data environments. 
This position is open to all candidates.
 
Listing ID: 8597480
29/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We are always looking for exceptional talent to join us on the journey!
Your Mission:
As an MLOps Engineer, your mission is to design, build, and operate the platforms that power our machine learning and generative AI products, spanning real-time use cases such as large-scale fraud scoring and support for MCP and agentic workflows. You'll create reliable CI/CD for models and agents, robust data/feature pipelines, secure model serving, and comprehensive observability. You will also support our agentic AI ecosystem and Model Context Protocol (MCP) services so that models can safely use tools, data, and actions across systems.
You will partner closely with Data Scientists, Data/Platform Engineers, Product, and SRE to ensure every model - from classic ML to LLM/RAG agents - moves from prototype to production with strong reliability, governance, cost efficiency, and measurable business impact.
Responsibilities:
Operate & Develop ML/LLM platforms on Kubernetes + cloud (Azure; AWS/GCP ok) with Docker, Terraform, and other relevant tools
Manage object storage, GPUs, and autoscaling for training & low-latency model serving
Manage cloud environment, networking, service mesh, secrets, and policies to meet PCI-DSS and data-residency requirements
Build end-to-end CI/CD for models/agents/MCP tooling (versioning, tests, approvals)
Deliver real-time fraud/risk scoring & agent signals under strict latency SLOs.
Maintain MCP servers/clients: tool/resource definitions, versioning, quotas, isolation, access controls
Integrate agents with microservices, event streams, and rule engines; provide SLAs, tracing, and on-call runbooks
Measure operational metrics of ML/LLM (latency, throughput, cost, tokens, tool success, safety events)
Enforce governance: RBAC/ABAC, row-level security, encryption, PII/secrets management, audit trails.
Partner with DS on packaging (wheels/conda/containers), feature contracts, and reproducible experiments.
Lead incident response and post-mortems.
Drive FinOps: right-sizing, GPU utilization, batching/caching, budget alerts.
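The latency SLOs mentioned above are typically tracked as tail percentiles over a window of request samples. A minimal sketch using the standard library (function and field names are illustrative):

```python
import statistics

def slo_report(latencies_ms: list[float], slo_p95_ms: float) -> dict:
    """Compute p50/p95/p99 latency and flag an SLO breach on p95."""
    # quantiles(..., n=100) returns the 99 cut points p1..p99
    qs = statistics.quantiles(latencies_ms, n=100, method="inclusive")
    p50, p95, p99 = qs[49], qs[94], qs[98]
    return {"p50": p50, "p95": p95, "p99": p99, "breach": p95 > slo_p95_ms}
```

In practice the same computation usually lives in Prometheus histogram queries rather than application code, with alerting wired to the breach condition.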
Requirements:
4+ years in DevOps/MLOps/Platform roles building and operating production ML systems (batch and real-time)
Strong hands-on with Kubernetes, Docker, Terraform/IaC, and CI/CD
Practical experience with Spark/Databricks and scalable data processing
Proficiency in Python & Bash
Ability to operationalize data science code and optimize its runtime performance.
Experience with model registries (MLflow or similar), experiment tracking, and artifact management.
Production model serving using FastAPI/Ray Serve/Triton/TorchServe, including autoscaling and rollout strategies
Monitoring and tracing with Prometheus/Grafana/OpenTelemetry; alerting tied to SLOs/SLAs
Solid understanding of PCI-DSS/GDPR considerations for data and ML systems
Experience with the Azure cloud environment is a big plus
Operating LLM/agent workloads in production (prompt/config versioning, tool execution reliability, fallback/retry policies)
Building/maintaining RAG stacks (indexing pipelines, vector DBs, retrieval evaluation, hybrid search)
Implementing guardrails (policy checks, content filters, allow/deny lists) and human-in-the-loop workflows
Experience with feature stores - Qwak Feature Store, Feast
A/B testing for models and agents, offline/online evaluation frameworks
Payments/fraud/risk domain experience; integrating ML outputs with rule engines and operational systems - Advantage
Familiarity with Databricks Unity Catalog, dbt, or similar tooling.
This position is open to all candidates.
 
Listing ID: 8595031
30/03/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Required Senior ML Research Engineer
Israel: Tel Aviv/ Hybrid
R&D | Full Time | Job Id: 24793
Your Impact & Responsibilities:
As a Senior ML Research Engineer, you will be responsible for the end-to-end lifecycle of large language models: from data definition and curation, through training and evaluation, to providing robust models that can be consumed by product and platform teams.
Own training and fine-tuning of LLMs / seq2seq models: Design and execute training pipelines for transformer-based models (encoder-decoder, decoder-only, retrieval-augmented, etc.), and fine-tune open-source LLMs on domain-specific data (security content, logs, incidents, customer interactions).
Apply advanced LLM training techniques such as instruction tuning, preference / contrastive learning, LoRA / PEFT, continual pre-training, and domain adaptation where appropriate.
Work deeply with data: define data strategies with product, research and domain experts; build and maintain data pipelines for collecting, cleaning, de-duplicating and labeling large-scale text, code and semi-structured data; and design synthetic data generation and augmentation pipelines.
Build robust evaluation and experimentation frameworks: define offline metrics for LLM quality (task-specific accuracy, calibration, hallucination rate, safety, latency and cost); implement automated evaluation suites (benchmarks, regression tests, red-teaming scenarios); and track model performance over time.
Scale training and inference: use distributed training frameworks (e.g. DeepSpeed, FSDP, tensor/pipeline parallelism) to efficiently train models on multi-GPU / multi-node clusters, and optimize inference performance and cost with techniques such as quantization, distillation and caching.
Collaborate closely with security researchers and data engineers to turn domain knowledge and threat intelligence into high-value training and evaluation data, and to expose your models through well-defined interfaces to downstream product and platform teams.
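Quantization, listed above as an inference optimization, trades weight precision for memory and throughput. A toy symmetric int8 sketch in pure Python, purely illustrative (real training stacks use framework-level kernels for this):

```python
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Symmetric int8 quantization: q = round(w / scale), scale = max|w| / 127."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid zero scale
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights; error is bounded by scale / 2."""
    return [x * scale for x in q]
```

The same round-trip error bound is what makes post-training quantization viable: per-tensor (or per-channel) scales keep the reconstruction error small relative to the weight distribution.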
Requirements:
5+ years of hands-on work in machine learning / deep learning, including 3+ years focused on NLP / language models.
Proven track record of training and fine-tuning transformer-based models (BERT-style, encoder-decoder, or LLMs), not just consuming hosted APIs.
Strong programming skills in Python and at least one major deep learning framework (PyTorch preferred; TensorFlow acceptable).
Solid understanding of transformer architectures, attention mechanisms, tokenization, positional encodings, and modern training techniques.
Experience building data pipelines and tools for large-scale text / log / code processing (e.g. Spark, Beam, Dask, or equivalent frameworks).
Practical experience with ML infrastructure, such as experiment tracking (Weights & Biases, MLflow or similar), job orchestration (Airflow, Argo, Kubeflow, SageMaker, etc.), and distributed training on multi-GPU systems.
Strong software engineering practices: version control, code review, testing, CI/CD, and documentation.
Ability to own research and engineering projects end-to-end: from idea, through prototype and controlled experiments, to models ready for integration by product and platform teams.
Good communication skills and the ability to work closely with non-ML stakeholders (security experts, product managers, engineers).
Nice to have:
Experience with RLHF / preference optimization, safety alignment, or other human-feedback-in-the-loop approaches to training LLMs.
Experience with retrieval-augmented generation (RAG), dense retrieval, vector databases, and embedding training.
Background in security / cyber domains such as threat detection, malware analysis, logs, or SOC tools.
Experience with multilingual models (e.g., Hebrew + English) and cross-lingual training.
Experience in a product environment where models must meet reliability, scale, and cost constraints.
This position is open to all candidates.
 
Listing ID: 8597461
23/03/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Today, more people than ever are speaking publicly about their mental health. Whether it's ourselves, our friends and family or even public figures, taking care of your behavioral health is no longer a taboo, it's vital, and it's only human.

We are on a mission to help deliver the world's most effective behavioral care through data, measurement, and personalization. Or simply put, we want to give clinicians the support they need to do the important work only they can do.

What is this opportunity?
At our company, we build a behavioral health CareOps automation platform that transforms therapy conversations into structured insights and clinical documentation. Our system uses advanced ML and LLM technologies to improve care quality, support therapists' daily workflows, and reduce documentation time by over 50%.

As a Senior ML Infrastructure Engineer, you will design and build the infrastructure that powers our ML and LLM systems in production. You will develop scalable pipelines, systems, and tools that enable data scientists and AI teams to efficiently develop, test, and deploy models.
Working closely with data scientists, engineers, and product teams, you will ensure our ML capabilities are reliable, scalable, and production-ready, helping bring cutting-edge AI to improve mental health care.
This is a unique opportunity to join a startup with a real impact on thousands of people's wellbeing and mental health, applying cutting-edge AI technologies to solve meaningful human problems.

How will you contribute?
Design and build infrastructure and backend services supporting ML and LLM systems

Develop and maintain ML training and deployment pipelines

Build tooling that enables model experimentation, versioning, and reproducibility

Implement CI/CD pipelines for ML workflows and model deployment

Support LLM deployment, prompt management, and optimization pipelines

Improve reliability, monitoring, and observability of ML systems in production

Collaborate with data scientists to productionize models and research prototypes
Ensure secure and compliant handling of sensitive healthcare data
Requirements:
What qualifications and skills will help you be successful?
5+ years of industry experience in ML Infrastructure, Backend Engineering, or related fields

Strong Python experience with production-grade systems

Experience working with cloud platforms (AWS, GCP, or Azure)

Experience with containerization technologies (Docker, Kubernetes)

Experience building CI/CD pipelines for ML systems or backend services

Experience supporting LLM deployment or ML models in production

Familiarity with model versioning, experiment tracking, and ML tooling

Some nice to haves are:
Experience with prompt engineering and prompt management

Experience with data versioning tools (DVC, Pachyderm, etc.)

Experience with MLOps platforms (MLflow, Kubeflow, etc.)
This position is open to all candidates.
 
Listing ID: 8588701
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior AI Engineer to join our Cybersecurity team in Tel Aviv. You will design, build, and productionize LLM-powered applications, multi-agent systems, and MLOps infrastructure that power our company's next-generation cybersecurity capabilities. This is a high-impact, hands-on role at the intersection of applied AI, agentic systems, and network security.
What You'll Do
Design and develop LLM-powered security features and internal AI tools, including RAG pipelines, multi-agent workflows, and prompt-engineered systems tailored for cybersecurity use cases
Architect and operate multi-agent systems in production - including agent orchestration, inter-agent communication, task delegation, and failure handling at scale
Build robust agent monitoring and observability pipelines: tracing agent execution, detecting drift or failure, alerting on anomalous behavior, and maintaining agent reliability SLAs
Build and maintain scalable MLOps infrastructure: model serving, evaluation frameworks, experiment tracking, and CI/CD for ML models
Work with internal datasets (network telemetry, security logs, threat intelligence) to fine-tune and adapt foundation models for domain-specific detection and response tasks
Partner with the Cybersecurity, R&D, and infrastructure teams to define AI-driven security features and deliver them end-to-end
Establish best practices for model observability, safety, and responsible AI deployment within the organization
Stay current with the fast-moving LLM/GenAI and agentic AI ecosystem and evaluate emerging frameworks, models, and tools for adoption.
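The RAG pipelines mentioned above typically begin with chunking documents before embedding and indexing. A minimal sketch of overlapping fixed-size chunking (sizes and the function name are illustrative assumptions):

```python
def chunk_tokens(tokens: list[str], size: int = 200, overlap: int = 50) -> list[list[str]]:
    """Split a token list into fixed-size chunks that overlap, so that
    facts straddling a chunk boundary still appear whole in some chunk."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [tokens[i:i + size] for i in range(0, len(tokens), step)
            if tokens[i:i + size]]
```

Real pipelines usually chunk on semantic boundaries (sentences, sections, log records) rather than raw token counts, but the overlap principle carries over.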
Requirements:
Must-Have
5-8 years of software engineering experience, with at least 2-3 years focused on AI/ML engineering
Hands-on experience building production-grade LLM applications - RAG, agents, tool use, or fine-tuning
Proven experience designing and running multi-agent systems in production: orchestration patterns, agent state management, retries, and graceful degradation
Experience monitoring and observing AI agents in production - execution tracing, latency tracking, failure detection, and alerting (e.g., LangSmith, Arize, custom observability stacks)
Proficiency with agentic frameworks: LangChain, LangGraph, and/or AWS Bedrock AgentCore
Strong Python skills and comfort working across the full AI application stack
Experience designing and operating MLOps pipelines (model versioning, deployment, monitoring)
Solid understanding of transformer-based models, embeddings, and vector databases (e.g., Pinecone, Weaviate, pgvector)
Comfortable working in cloud environments (AWS, GCP, or Azure) and containerized deployments (Docker, Kubernetes)
Strong problem-solving skills and ability to work autonomously in a fast-paced environment
Nice-to-Have
Background in cybersecurity - threat detection, SIEM, SOC automation, or security data analysis - a significant plus for this role
Familiarity with networking concepts (SDN, cloud-native networking, BGP, telemetry)
Experience with model evaluation and benchmarking (LLM-as-judge, RAGAS, or custom eval harnesses)
Exposure to MCP (Model Context Protocol) for tool-augmented agentic workflows
Prior experience in enterprise SaaS, networking, or telecom domains
Publications, open-source contributions, or projects in the LLM/GenAI or agentic AI space
Our Stack
Python, PyTorch, OpenAI / Anthropic APIs, LangChain, LangGraph, AWS Bedrock AgentCore, LangSmith, Kubernetes, Kafka, Elasticsearch, AWS, PostgreSQL, GitHub, Jira, Confluence.
This position is open to all candidates.
 
8595648
סגור
שירות זה פתוח ללקוחות VIP בלבד
סגור
דיווח על תוכן לא הולם או מפלה
מה השם שלך?
תיאור
שליחה
סגור
v נשלח
תודה על שיתוף הפעולה
מודים לך שלקחת חלק בשיפור התוכן שלנו :)
30/03/2026
Confidential company
Location: Tel Aviv-Yafo and Netanya
Job Type: Full Time
We are looking for a hands-on Tech Lead to join the Core Platform team within ML. Our engineering teams build the foundational systems behind global artifact storage, replication, and distribution - and increasingly power the next generation of AI/ML operations and governance. Our platform is the backbone for ML workloads: managing model binaries, versioning, and scalable runtime environments for ML and AI applications. This role combines deep distributed systems work with modern ML infrastructure challenges such as high-throughput inference, safe model rollouts, and multi-cloud GPU efficiency. You will also help evolve core libraries and developer-facing tools, including logging, observability, and visibility components.
As a senior technical leader, you will influence architecture across squads, lead complex development efforts, and remain heavily hands-on.
As a Tech Lead in Core Platform, you will
Design and evolve components for managing and distributing ML/AI models and artifacts at scale
Extend the platform to support reliable, high-performance inference and training workflows
Lead cross-team technical initiatives and serve as a reference for distributed systems and ML infra design
Write maintainable, high-quality code in performance-critical areas
Mentor engineers and drive strong engineering practices
Collaborate with adjacent teams to ensure seamless end-to-end ML platform behavior
Improve the reliability, efficiency, and observability of core services
Requirements:
7+ years building large-scale backend or distributed systems
Strong foundation in distributed systems (consistency, replication, concurrency, fault tolerance)
Proficiency in Java / Go or similar languages
Hands-on experience with high-performance, scalable, and reliable systems
Ability to lead design discussions and influence technical direction across teams
Curiosity and willingness to work with ML systems and workload patterns
Experience with Kubernetes, container orchestration, or cloud-native infrastructure
Thrive in a collaborative, ownership-driven engineering culture
Bonus Points
Experience with ML model serving, vector DBs, model versioning, or GPU orchestration
Background in secure software supply chain workflows
Strong performance debugging and optimization skills
This position is open to all candidates.
 
23/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Engineer with a data engineering background to join our growing ML Platform team. This is a great opportunity whether you already have ML experience and are looking for an ML-focused product, or you are an experienced Data Engineer looking to enter the world of ML. Together we'll provide tools to develop more effective models, get them into production faster, and ensure that they continue to perform well over time.
ML is central to our work. It enables us to process billions of dollars' worth of e-commerce transactions, make decisions in real time, identify fraud rings, and quickly detect new attack methods. Precision is crucial - bad decisions by our models cost us directly and put money into the pockets of fraudsters.
Our adoption by merchants around the world provides us with billions of fresh data points each day. Our team of data scientists, analysts, and cyber intelligence specialists continually identify new signals, engineer new features, and research new models. But as the volume of data and the number and complexity of models grows, so do the engineering challenges.
If this kind of working environment sounds exciting to you, and you understand that engineering is about building the most effective and elegant solution within a given set of constraints, consider applying for this position.
Why should you join us?
You'll be part of a highly proficient engineering team that is a focal point for all ML engineering activity, striving to constantly bring innovation and leverage ML capabilities across all company teams and products.
This role presents a unique opportunity to enter the ML domain. For those already experienced in ML infrastructure, it offers the chance to grow within a team that specializes in high-scale Big Data and ML systems.
What you will be doing:
Designing, building, and maintaining the ML infrastructure that allows our models to make billions of real-time decisions every year.
Building a platform that enables managing a full ML model lifecycle - from researching to training, deploying, and serving predictions in real-time.
Building distributed data processing pipelines to support model development.
Acting as a consultant to researchers, data scientists, and expert analysts and enabling them to research new models faster and with greater precision by providing cutting-edge tooling.
Expanding our ML infrastructure to make it scalable, quick, and efficient to bring diverse models to production and to monitor their performance and drift over time.
Expanding the pool of internal customers able to use ML - working with them to understand their needs and helping them make the most of the infrastructure that we'll provide.
Acting as an advocate for MLOps, continually improving our processes, and raising our standards.
Requirements:
4+ years of experience with large-scale data processing, ideally with Apache Spark.
5+ years developing complex software projects in at least one general-purpose language (preferably Python, but not a must)
Backend and server-side development experience of complex, highly scalable systems
Experience with machine learning concepts and frameworks.
Motivation to understand the needs of internal users, provide them with great tooling, and teach them how to use it.
Experience working with public clouds (AWS / GCP / Azure)
Fluent in written and spoken English
It'd be really cool if you also:
Are familiar with Databricks or Airflow.
Are comfortable in a containerized environment.
Have experience with maintaining highly available, low latency, real-time services.
This position is open to all candidates.
 
11 hours ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a motivated and experienced Machine Learning Platform Engineer to join our dynamic team.

In this role, you will collaborate closely with data scientists and DevOps professionals to design and build the infrastructure, ecosystem libraries, and pipelines that power our data science initiatives. You will take ownership of model development, monitoring, and maintenance, working hand-in-hand with data scientists on a daily basis.

If you're passionate about AI, machine learning, and writing high-quality code, and are eager to contribute to innovative, impactful "do good" projects in the digital health space, we'd love to hear from you!

What you'll be doing:
Design, develop, and maintain our machine learning ecosystem libraries.
Build and manage data science code, Docker images, and Kubeflow Pipelines (KFP).
Create and maintain CI scripts to ensure seamless integration and delivery.
Conduct thorough code reviews to uphold high-quality standards.
Collaborate closely with data scientists, understanding and addressing their evolving needs.
Work alongside software developers to seamlessly integrate machine learning models into production systems.
Stay current with the latest advancements in machine learning, leveraging innovative techniques to enhance the company's products and services.
Requirements:
What we're looking for:
5+ years in software engineering with experience in backend/platform roles.
5+ years of experience with Python.
Proficiency in another language, such as C++, Rust, Java, or Go, is an advantage.
2+ years of experience working with cloud platforms such as Google Cloud (preferred), Azure, or AWS, including familiarity with ML workflow frameworks like KFP or Vertex Pipelines.
Solid experience in ML/AI development (a must).
Experience with inference optimization (vLLM) and fine-tuning (Axolotl/Huggingface).
Expertise with transformers, PyTorch, CUDA, and other low-level ML libraries.
Familiarity with Docker and Kubernetes.
Excellent problem-solving skills and a proactive attitude, with a strong focus on code quality and optimization.
Collaborative mindset with the ability to work closely with cross-functional teams. Strong communication and teamwork skills are essential.
This position is open to all candidates.
 