Senior Machine Learning Engineer - LLMs & Self-Hosted AI
Posted 17 hours ago
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a highly skilled Senior Machine Learning Engineer to lead our transition from on-demand, third-party LLM APIs to a fully self-hosted, scalable model ecosystem.
Our core product is an advanced, agentic support chatbot capable of complex reasoning, API tool calling, database lookups, and orchestrating specialized Small Language Models (SLMs) for targeted NLP tasks. As we scale, our current deployment infrastructure (AWS SageMaker) is becoming unsustainable. You will be responsible for architecting, deploying, and optimizing an infrastructure capable of supporting 50 to 100 distinct models ranging from 100M to 70B parameters.
What You'll Do:
Inference Optimization: Deploy and manage large-scale models using high-performance inference engines (like vLLM) to ensure low latency and high throughput for our agentic chatbot.
Agentic Workflows: Develop and refine the chatbot's agentic capabilities, ensuring reliable tool-use, routing, and interactions between massive LLMs and specialized SLMs.
Model Fine-Tuning: Design and execute fine-tuning strategies to improve model accuracy on specific domain tasks and tool-calling execution.
Rigorous Evaluation: Build comprehensive offline and online evaluation frameworks to constantly measure model performance and business impact through structured A/B testing.
Requirements:
Core Engineering & AI Frameworks
Strong proficiency in Python and Bash scripting.
Deep experience with PyTorch and the Hugging Face ecosystem.
Experience using AI coding assistants natively in the terminal, specifically Claude Code, to accelerate development workflows.
LLMs, Inference & Agents
Proven experience deploying models using vLLM, TGI, or similar high-performance inference servers.
Strong fundamental understanding of LLM architectures, attention mechanisms, and generation parameters.
Hands-on experience building Agentic systems (ReAct, function/tool calling, RAG).
Expertise in fine-tuning strategies (e.g., SFT, RLHF, DPO) and parameter-efficient techniques (PEFT/LoRA).
Statistics & Model Evaluation
Offline Metrics: Deep understanding of classification/summarization metrics (Precision, Recall, F1, AUC) and retrieval metrics (MRR, NDCG, Precision/Recall @ k).
Online Metrics & A/B Testing: Strong statistical foundation to design and analyze A/B tests safely, including the use of t-tests, Mann-Whitney U tests, and bootstrapping techniques.
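A minimal sketch of the bootstrapping approach mentioned above, for analyzing an online A/B test of two chatbot variants. All metric values and names here are invented for illustration:

```python
import random
import statistics

def bootstrap_diff_ci(control, treatment, n_boot=2000, alpha=0.05, seed=42):
    """Bootstrap confidence interval for the difference in means
    (treatment - control). If the interval excludes 0, the observed
    lift is significant at the given alpha level."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        # Resample each arm with replacement and record the mean difference.
        c = [rng.choice(control) for _ in control]
        t = [rng.choice(treatment) for _ in treatment]
        diffs.append(statistics.mean(t) - statistics.mean(c))
    diffs.sort()
    lo = diffs[int((alpha / 2) * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Illustrative per-session resolution rates for two chatbot variants.
control = [0.61, 0.55, 0.58, 0.60, 0.57, 0.59, 0.62, 0.56]
treatment = [0.66, 0.70, 0.68, 0.64, 0.69, 0.67, 0.71, 0.65]
lo, hi = bootstrap_diff_ci(control, treatment)
print(f"95% CI for lift: [{lo:.3f}, {hi:.3f}]")
```

Bootstrapping makes no normality assumption, which is why it pairs well with t-tests and Mann-Whitney U tests when metric distributions are skewed.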
Bonus Points
Containerization & Orchestration: Experience with Ray for orchestrating large-scale model deployments across multi-GPU clusters.
Model Quantization: Experience with memory optimization techniques like AWQ, GPTQ, GGUF, or FlashAttention to fit 70B models efficiently onto hardware.
API Development: Proficiency in building robust, asynchronous microservices using FastAPI to serve model requests.
Knowledge of Data Engineering principles: dataset collection, cleaning, processing, and scalable storage.
This position is open to all candidates.
 
30/03/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Required Senior ML Research Engineer
Israel: Tel Aviv/ Hybrid
R&D | Full Time | Job Id: 24793
Your Impact & Responsibilities:
As a Senior ML Research Engineer, you will be responsible for the end-to-end lifecycle of large language models: from data definition and curation, through training and evaluation, to providing robust models that can be consumed by product and platform teams.
Own training and fine-tuning of LLMs / seq2seq models: Design and execute training pipelines for transformer-based models (encoder-decoder, decoder-only, retrieval-augmented, etc.), and fine-tune open-source LLMs on domain-specific data (security content, logs, incidents, customer interactions).
Apply advanced LLM training techniques such as instruction tuning, preference / contrastive learning, LoRA / PEFT, continual pre-training, and domain adaptation where appropriate.
Work deeply with data: define data strategies with product, research and domain experts; build and maintain data pipelines for collecting, cleaning, de-duplicating and labeling large-scale text, code and semi-structured data; and design synthetic data generation and augmentation pipelines.
Build robust evaluation and experimentation frameworks: define offline metrics for LLM quality (task-specific accuracy, calibration, hallucination rate, safety, latency and cost); implement automated evaluation suites (benchmarks, regression tests, red-teaming scenarios); and track model performance over time.
Scale training and inference: use distributed training frameworks (e.g. DeepSpeed, FSDP, tensor/pipeline parallelism) to efficiently train models on multi-GPU / multi-node clusters, and optimize inference performance and cost with techniques such as quantization, distillation and caching.
Collaborate closely with security researchers and data engineers to turn domain knowledge and threat intelligence into high-value training and evaluation data, and to expose your models through well-defined interfaces to downstream product and platform teams.
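A minimal sketch of the automated evaluation suites and regression tests described above. The dataset, models, and tolerance are invented stand-ins:

```python
def evaluate(model_fn, dataset):
    """Score a model on (input, expected) pairs with exact-match accuracy."""
    hits = sum(1 for x, expected in dataset if model_fn(x) == expected)
    return hits / len(dataset)

def regression_gate(new_score, baseline_score, tolerance=0.01):
    """Fail the pipeline if the candidate regresses beyond tolerance."""
    return new_score >= baseline_score - tolerance

# Toy eval set; real suites would use task-specific benchmarks.
dataset = [("2+2", "4"), ("3+5", "8"), ("10-7", "3")]
baseline = lambda q: str(eval(q))                          # stand-in for the shipped model
candidate = lambda q: "9" if q == "3+5" else str(eval(q))  # regresses on one case
base_acc = evaluate(baseline, dataset)
cand_acc = evaluate(candidate, dataset)
print(base_acc, round(cand_acc, 2), regression_gate(cand_acc, base_acc))
```

Running this gate in CI before every model promotion is one common way to "track model performance over time" rather than relying on spot checks.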
Requirements:
5+ years of hands-on work in machine learning / deep learning, including 3+ years focused on NLP / language models.
Proven track record of training and fine-tuning transformer-based models (BERT-style, encoder-decoder, or LLMs), not just consuming hosted APIs.
Strong programming skills in Python and at least one major deep learning framework (PyTorch preferred; TensorFlow acceptable).
Solid understanding of transformer architectures, attention mechanisms, tokenization, positional encodings, and modern training techniques.
Experience building data pipelines and tools for large-scale text / log / code processing (e.g. Spark, Beam, Dask, or equivalent frameworks).
Practical experience with ML infrastructure, such as experiment tracking (Weights & Biases, MLflow or similar), job orchestration (Airflow, Argo, Kubeflow, SageMaker, etc.), and distributed training on multi-GPU systems.
Strong software engineering practices: version control, code review, testing, CI/CD, and documentation.
Ability to own research and engineering projects end-to-end: from idea, through prototype and controlled experiments, to models ready for integration by product and platform teams.
Good communication skills and the ability to work closely with non-ML stakeholders (security experts, product managers, engineers).
Nice to have:
Experience with RLHF / preference optimization, safety alignment, or other human-feedback-in-the-loop approaches to training LLMs.
Experience with retrieval-augmented generation (RAG), dense retrieval, vector databases, and embedding training.
Background in security / cyber domains such as threat detection, malware analysis, logs, or SOC tools.
Experience with multilingual models (e.g., Hebrew + English) and cross-lingual training.
Experience in a product environment where models must meet reliability, scale, and cost constraints.
This position is open to all candidates.
 
Posted 17 hours ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
At our company, we aren't building a single, generic chatbot. We are building a Composable AI Microservice Architecture, a swarm of hundreds of hyper-specialized AI services, each meticulously "programmed" to solve small, focused tasks with high precision. This fleet powers Ava, our AI support engine, and a suite of cutting-edge generative tools for travel and expense management.
As a Senior AI Ops / MLOps Engineer, you are the architect of the platform that makes this scale possible. You will move beyond traditional MLOps to manage a "factory" of Language Models. Your challenge is one of orchestration and standardization, ensuring that every service in the swarm meets a rigorous bar for quality, reliability, and cost-efficiency.
What You'll Do
Orchestrate the AI Fleet: Build and own the runtime environment for 100+ specialized AI services. Manage model routing, context versioning, and standardized memory/history stores.
High-Density Inference Optimization: Design and implement SageMaker Multi-Model Endpoints (MME) and Inference Components to serve multiple tuned SLMs per GPU, maximizing hardware utilization while minimizing latency.
Deterministic Service Excellence: Treat reliability as a layered engineering problem. Build deterministic "shells" around probabilistic LM outputs, prioritizing data-layer validation and strict serialization.
Automated Evaluation & Observability: Implement "LLM-as-a-judge" patterns and automated benchmarking to detect semantic drift and hallucinations across the fleet before they impact the user.
Standardize the Workflow: Obsess over building reusable patterns and Terraform-based infrastructure that eliminate "snowflake" configurations, allowing us to deploy new specialized AI tasks in minutes.
Agency Strategy: Partner with AI Researchers to find the "Goldilocks zone" for agentic autonomy: balancing the flexibility of LLM tool-use with the precision required for production stability.
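A minimal sketch of the deterministic "shell" idea above: strict parsing and validation wrapped around a probabilistic model output, with retries at the data layer instead of prompt tweaks. The schema and generator are invented for illustration:

```python
import json

# Hypothetical output contract for one specialized service.
REQUIRED = {"intent": str, "confidence": float}

def validate(raw):
    """Parse and strictly validate a model's JSON output; None on failure."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return None
    for key, typ in REQUIRED.items():
        if not isinstance(obj.get(key), typ):
            return None
    return obj

def call_with_shell(generate, max_attempts=3):
    """Deterministic shell: retry a probabilistic generator until its
    output passes data-layer validation."""
    for _ in range(max_attempts):
        result = validate(generate())
        if result is not None:
            return result
    raise ValueError("model failed validation after retries")

# Simulated flaky generator: fails once, then returns valid JSON.
attempts = iter(['not json', '{"intent": "refund", "confidence": 0.93}'])
print(call_with_shell(lambda: next(attempts)))
```

The key design choice is that failures are detected by serialization and type checks, so the same shell works unchanged across every service in the fleet.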
Requirements:
Experience: 5+ years in SRE, Platform Engineering, or MLOps, with at least 2 years focused on deploying LLMs/SLMs in production environments.
SageMaker Mastery: Deep hands-on expertise with AWS SageMaker, specifically configuring Multi-Model Endpoints (MME), Inference Components, and GPU-backed instances (G5/P4).
SLM Expertise: Proven experience with Small Language Models (e.g., Mistral, Llama 3, Phi) and parameter-efficient fine-tuning (PEFT) deployment strategies like LoRA/QLoRA.
Technical Stack:
Languages: Strong proficiency in Python and Terraform.
Orchestration: Experience with Docker, Kubernetes (EKS), or AWS ECS/Fargate.
Data: Familiarity with Snowflake and Vector Databases.
The "AI Ops" Mindset: You understand that AI at scale is a statistical challenge. You are comfortable debugging issues at the data/serialization layer rather than defaulting to prompt tweaks.
CI/CD & Automation: Experience building robust pipelines (Jenkins, GitHub Actions) for non-deterministic software, including automated "eval" stages.
Education: BS or MS in Computer Science, Engineering, Mathematics, or a related technical field.
Must have
Python, Terraform, SageMaker.
This position is open to all candidates.
 
29/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We are always looking for exceptional talent to join us on the journey!
Your Mission:
As an MLOps Engineer, your mission is to design, build, and operate the platforms that power our machine learning and generative AI products, spanning real-time use cases such as large-scale fraud scoring and support for MCP and agentic workflows. You'll create reliable CI/CD for models and agents, robust data/feature pipelines, secure model serving, and comprehensive observability. You will also support our agentic AI ecosystem and Model Context Protocol (MCP) services so that models can safely use tools, data, and actions.
You will partner closely with Data Scientists, Data/Platform Engineers, Product, and SRE to ensure every model from classic ML to LLM/RAG agents moves from prototype to production with strong reliability, governance, cost efficiency, and measurable business impact.
Responsibilities:
Operate & Develop ML/LLM platforms on Kubernetes + cloud (Azure; AWS/GCP ok) with Docker, Terraform, and other relevant tools
Manage object storage, GPUs, and autoscaling for training & low-latency model serving
Manage cloud environment, networking, service mesh, secrets, and policies to meet PCI-DSS and data-residency requirements
Build end-to-end CI/CD for models/agents/MCP tooling (versioning, tests, approvals)
Deliver real-time fraud/risk scoring & agent signals under strict latency SLOs.
Maintain MCP servers/clients: tool/resource definitions, versioning, quotas, isolation, access controls
Integrate agents with microservices, event streams, and rule engines; provide SLAs, tracing, and on-call runbooks
Measure operational metrics of ML/LLM (latency, throughput, cost, tokens, tool success, safety events)
Enforce governance: RBAC/ABAC, row-level security, encryption, PII/secrets management, audit trails.
Partner with DS on packaging (wheels/conda/containers), feature contracts, and reproducible experiments.
Lead incident response and post-mortems.
Drive FinOps: right-sizing, GPU utilization, batching/caching, budget alerts.
Requirements:
4+ years in DevOps/MLOps/Platform roles building and operating production ML systems (batch and real-time)
Strong hands-on with Kubernetes, Docker, Terraform/IaC, and CI/CD
Practical experience with Spark/Databricks and scalable data processing
Proficiency in Python & Bash
Ability to operate DS code and optimize runtime performance.
Experience with model registries (MLflow or similar), experiment tracking, and artifact management.
Production model serving using FastAPI/Ray Serve/Triton/TorchServe, including autoscaling and rollout strategies
Monitoring and tracing with Prometheus/Grafana/OpenTelemetry; alerting tied to SLOs/SLAs
Solid understanding of PCI-DSS/GDPR considerations for data and ML systems
Experience with the Azure cloud environment is a big plus
Operating LLM/agent workloads in production (prompt/config versioning, tool execution reliability, fallback/retry policies)
Building/maintaining RAG stacks (indexing pipelines, vector DBs, retrieval evaluation, hybrid search)
Implementing guardrails (policy checks, content filters, allow/deny lists) and human-in-the-loop workflows
Experience with feature stores - Qwak Feature Store, Feast
A/B testing for models and agents, offline/online evaluation frameworks
Payments/fraud/risk domain experience; integrating ML outputs with rule engines and operational systems - Advantage
Familiarity with Databricks Unity Catalog, dbt, or similar tooling.
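The fallback/retry policies mentioned in the requirements can be sketched roughly as follows. The function names, delays, and failure behavior are invented for the example:

```python
import time

def call_with_fallback(primary, fallback, retries=2, base_delay=0.01):
    """Retry the primary model with exponential backoff; on exhaustion,
    route the request to a fallback model instead of failing the user."""
    for attempt in range(retries):
        try:
            return primary()
        except Exception:
            # Back off before retrying: base_delay, 2*base_delay, ...
            time.sleep(base_delay * (2 ** attempt))
    return fallback()

calls = {"n": 0}
def flaky_primary():
    calls["n"] += 1
    raise TimeoutError("primary model timed out")

result = call_with_fallback(flaky_primary, lambda: "fallback answer")
print(result, calls["n"])
```

In production the fallback is typically a smaller, cheaper model or a cached/templated response, and the retry budget is bounded by the latency SLO.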
This position is open to all candidates.
 
Posted 7 days ago
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Machine Learning Engineer - AI Coding Agents & LLM Infrastructure
Tel Aviv
Full-time
A bit about us:
We are redefining how software gets built. Trusted by 1M+ developers, we build AI-first developer experiences powered by state-of-the-art coding agents and code reasoning models. With support for 30+ programming languages and 15+ IDEs, our platform is pushing the limits of LLM-based software engineering - enabling teams to design, write, review, and ship code faster than ever. We're committed to advancing code-native AI models, multi-agent systems, agent orchestration frameworks, memory, and autonomous dev tooling to empower developers at every step of the software lifecycle.
We're growing fast, and our team is passionate about pushing AI engineering to new heights - solving complex problems in LLM training, inference optimization, reasoning, and agent orchestration at scale.
About the Role:
As a Machine Learning Engineer, you'll work on cutting-edge code-focused LLMs and AI agent systems that power our next-generation developer platform. You'll be at the center of research, model training, and productionization of intelligent systems that understand software deeply, collaborate with developers, and help automate engineering workflows end-to-end. Your work will immediately impact millions of engineers worldwide.
Responsibilities:
Push LLM Innovation: Research, design, and fine-tune domain-specific LLMs for code generation, refactoring, debugging, and multi-turn reasoning.
Agent-Oriented Development: Build multi-agent coding systems that integrate retrieval-augmented generation (RAG), code execution, testing, and tool use to create autonomous, context-aware coding workflows.
Production-Grade AI: Own the training-to-inference pipeline for large code models: optimize inference with quantization, distillation, and caching techniques.
Rapid Experimentation: Prototype and validate ideas quickly; leverage reinforcement learning, human feedback, and synthetic data generation to push accuracy and reasoning.
Cross-Functional Collaboration: Partner with product, engineering, and design teams to ship AI-powered features that help developers focus on high-impact work.
Scale the Platform: Contribute to distributed training, scalable serving systems, and GPU/TPU-efficient architectures for ultra-low-latency developer tools.
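Of the inference-optimization techniques listed above, caching is the simplest to sketch. Below is a toy response cache for deterministic generations; the model stand-in and prompt are invented for the example:

```python
from functools import lru_cache

counter = {"calls": 0}

def expensive_model(prompt):
    """Stand-in for real inference; counts how often it actually runs."""
    counter["calls"] += 1
    return prompt.upper()

@lru_cache(maxsize=1024)
def cached_generate(prompt: str) -> str:
    """Memoize generations so repeated identical prompts skip the model."""
    return expensive_model(prompt)

print(cached_generate("fix this bug"))   # runs the model
print(cached_generate("fix this bug"))   # served from the cache
print(counter["calls"])
```

Real serving stacks cache at several layers (prompt prefix KV caches, semantic caches over embeddings), but the cost argument is the same: identical work should hit the GPU once.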
Requirements:
2+ years of hands-on experience designing, training, and deploying machine-learning models
M.Sc. or higher in Computer Science / Mathematics / Statistics or equivalent from a university, or B.Sc. with strong hands-on ML experience
Practical experience with Natural Language Processing (NLP) and LLMs
Experience with data acquisition, data cleaning, and data pipelines
A passion for building products and helping people, both customers and colleagues
All-around team player, fast, self-learning individual
Nice to have:
3+ years of development experience with a passion for excellence
Experience building AI coding assistants, code reasoning models, or dev-focused LLM agents.
Familiarity with RAG, function-calling, and tool-using LLMs.
Knowledge of model optimizations (quantization, distillation, LoRA, pruning).
Startup or product-driven ML experience, especially in high-scale, latency-sensitive environments.
Contributions to open-source AI or developer tools.
This position is open to all candidates.
 
09/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior ML Engineer - Applied AI Engineering Group
The Dream Job
It starts with you - an engineer driven to build the ML platform that turns research into reliable, production-grade intelligence. You care about reproducibility, low-friction experimentation, and infrastructure that earns the trust of the scientists and researchers who depend on it daily. You'll architect and ship our ML platform - training pipelines, model serving, feature stores, experiment tracking, and compute orchestration - turning models into production capabilities across cloud and on-prem, including air-gapped deployments. A significant part of the platform supports large language models, with unique challenges across training, evaluation, and inference in mission-critical environments.
If you want to make a meaningful impact, join our mission and build the ML platform that drives Sovereign AI products - this role is for you.
The Dream-Maker Responsibilities
Build and operate ML training infrastructure - distributed training pipelines, compute scheduling, and reproducible experiment workflows that data scientists rely on daily.
Own model serving and inference systems - packaging, deployment, autoscaling, A/B testing, canary rollouts, and latency/cost optimization for production models.
Run feature stores, model registries, and dataset versioning - enabling self-serve feature engineering, model lineage, and reproducible experiments across teams.
Build experiment tracking and evaluation infrastructure - automated evals, comparison dashboards, drift detection, and monitoring that give teams visibility into model behavior and performance.
Build and maintain production pipelines for training, fine-tuning workflows, and serving domain models - owning reliability, reproducibility, and scale.
Build and maintain the monitoring and observability layer - model performance tracking, data and prediction drift detection, data quality validation, and alerting.
Improve performance and cost across the ML stack - training throughput, inference latency, batch vs. real-time tradeoffs, and compute cost management.
Ship shared tooling - libraries, templates, CI/CD for models, IaC, and runbooks - while collaborating across Data Platform, AI, Data Science, Engineering, and DevOps. Own architecture, documentation, and operations end-to-end.
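The drift detection mentioned above is often implemented with a distribution-comparison statistic. Here is a rough sketch of the Population Stability Index (PSI) over model scores; the samples and bin count are invented for the example:

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between a baseline score sample and a
    live one. A common rule of thumb: PSI > 0.2 signals meaningful drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]
    def frac(sample):
        counts = [0] * bins
        for x in sample:
            idx = sum(x > e for e in edges)  # which bin x falls into
            counts[idx] += 1
        return [max(c / len(sample), 1e-4) for c in counts]  # avoid log(0)
    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
same = [0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.12]
shifted = [0.7, 0.75, 0.8, 0.85, 0.9, 0.72, 0.78, 0.83]
print(psi(baseline, same), psi(baseline, shifted))
```

An alerting layer would compute this on a schedule against a frozen baseline window and page when the index crosses a threshold.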
Requirements:
5+ years in software engineering, with 2+ years focused on ML infrastructure, MLOps, or data-intensive systems
Engineering craft - Strong Python, distributed systems design, testing, secure coding, API design, CI/CD discipline, and production ownership.
ML platform & serving - Model serving frameworks (e.g., Triton, TorchServe, vLLM, Ray Serve); model packaging, deployment pipelines, and inference optimization
Training infrastructure - Distributed training pipelines (e.g., frameworks like PyTorch, JAX) experiment orchestration and reproducibility
ML lifecycle tooling - Feature stores, model registries, experiment tracking (e.g., MLflow, Weights & Biases); dataset versioning and lineage
Data pipelines - Building training and inference data pipelines; familiarity with tools like Spark, Airflow/Dagster, and streaming ingestion
Comfortable with AI coding tools like Cursor, Claude Code, or Copilot
Nice to Have:
Experience operating in constrained environments - on-premise, private cloud, or air-gapped deployments
Hands-on experience with simulation environments, synthetic data generation, or reinforcement learning workflows
Platform & infra - Kubernetes, AWS, Terraform or similar IaC, CI/CD, observability, incident response
Hands-on data science or applied ML experience.
This position is open to all candidates.
 
09/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior AI Engineer - Applied AI Engineering Group
The Dream Job
It starts with you - an engineer driven to build the agentic AI platform that turns LLMs into reliable, production-grade capabilities. You care about clean APIs, well-defined service boundaries, and systems that teams can build on with confidence. Dream is AI-first across the board - every team builds and operates agents. You'll architect and ship the platform that makes this possible: agent orchestration frameworks, LLM gateways, evaluation pipelines, tool-calling infrastructure, and retrieval systems. Without this platform, agents don't ship - you own the layer that turns AI research into Sovereign AI products, deployed across cloud and on-prem environments.
If you want to make a meaningful impact, join our mission and build the agentic AI platform that drives Sovereign AI products - this role is for you.
The Dream-Maker Responsibilities
Design and build agentic systems - single and multi-agent workflows with planning, memory, context engineering, and tool use - for both internal automation and product-facing autonomous capabilities operating over long time horizons.
Build and operate the AI platform layer - LLM gateways, prompt management, structured output handling, tool-calling infrastructure, and cost/latency optimization - deployed on Kubernetes, consumed by every team for their agentic work.
Own the agent framework layer - orchestration primitives, execution environments, state management, and sandboxed tool execution - giving every team the building blocks to create and operate their own agents.
Build evaluation infrastructure that gives teams confidence in agent behavior - automated LLM and agent evals for quality, correctness, safety, latency, cost, and regressions, including human-in-the-loop oversight for mission-critical workflows.
Productionize and harden backend services (APIs, gRPC, async workers) that integrate LLMs - with proper error handling, retries, circuit breakers, and high-availability patterns.
Own RAG pipelines and retrieval systems - indexing, chunking, embedding, vector database management, filtering, and relevance tuning for production retrieval.
Optimize performance and cost across the AI stack - model routing, caching, batching, and inference cost management.
Ship shared tooling - libraries, SDKs, agent templates, and documentation - while working closely with ML Platform, Data Platform, DevOps, and other teams across the Applied AI Engineering group. Own architecture, documentation, and operations end-to-end.
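The reliability patterns above (retries, circuit breakers) can be sketched minimally. This is an illustrative circuit breaker around an LLM call; the thresholds and the failing stand-in are invented:

```python
import time

class CircuitBreaker:
    """Open the circuit after `threshold` consecutive failures; reject
    calls until `cooldown` seconds pass, then allow a trial (half-open)."""
    def __init__(self, threshold=3, cooldown=30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open: request rejected")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success resets the failure streak
        return result

cb = CircuitBreaker(threshold=2, cooldown=60.0)
def flaky():
    raise TimeoutError("model timed out")
for _ in range(2):
    try:
        cb.call(flaky)
    except TimeoutError:
        pass
try:
    cb.call(flaky)
except RuntimeError as e:
    print(e)  # the breaker now rejects without hitting the model
```

The point of the breaker is backpressure: once a downstream model endpoint is failing, callers stop paying its timeout latency on every request.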
Requirements:
5+ years in backend or distributed systems engineering, with 2+ years focused on production systems that integrate AI/ML models or LLMs.
Engineering craft - Strong Python, Go, or Java, system architecture, API design, testing, and secure coding practices.
Agentic systems - Experience designing and building agent orchestration, tool-use systems, and autonomous workflows; familiarity with frameworks like LangGraph or similar, or having built equivalent from scratch
Backend engineering - Experience building production APIs and services (FastAPI or similar); async programming, service architecture, high-availability, and reliability patterns (retries, circuit breakers, backpressure)
LLM integration - Hands-on experience integrating LLMs via SDKs and APIs; context engineering, structured outputs, tool calling, and model routing
RAG & retrieval - Experience with embedding pipelines, vector databases (e.g., Milvus, Qdrant, Pinecone), chunking strategies, and relevance tuning
Evaluation & observability - Experience designing LLM and agent evals, monitoring AI system quality, and building observability for non-deterministic systems
Nice to Have:
Platform & infra - Kubernetes, AWS, Terraform or similar IaC, CI/CD, container orchestration, deploying and operating production services
Experience with MCP or similar tool-use protocols for agent-to-service communication
Hands-on ML experience.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior AI Engineer to join our Cybersecurity team in Tel Aviv. You will design, build, and productionize LLM-powered applications, multi-agent systems, and MLOps infrastructure that power our company's next-generation cybersecurity capabilities. This is a high-impact, hands-on role at the intersection of applied AI, agentic systems, and network security.
What You'll Do
Design and develop LLM-powered security features and internal AI tools, including RAG pipelines, multi-agent workflows, and prompt-engineered systems tailored for cybersecurity use cases
Architect and operate multi-agent systems in production - including agent orchestration, inter-agent communication, task delegation, and failure handling at scale
Build robust agent monitoring and observability pipelines: tracing agent execution, detecting drift or failure, alerting on anomalous behavior, and maintaining agent reliability SLAs
Build and maintain scalable MLOps infrastructure: model serving, evaluation frameworks, experiment tracking, and CI/CD for ML models
Work with internal datasets (network telemetry, security logs, threat intelligence) to fine-tune and adapt foundation models for domain-specific detection and response tasks
Partner with the Cybersecurity, R&D, and infrastructure teams to define AI-driven security features and deliver them end-to-end
Establish best practices for model observability, safety, and responsible AI deployment within the organization
Stay current with the fast-moving LLM/GenAI and agentic AI ecosystem and evaluate emerging frameworks, models, and tools for adoption.
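The agent execution tracing described above can be sketched as a decorator that records latency and success/failure per step. The trace sink and the example step are invented; real deployments would export spans to a tool like LangSmith or an OpenTelemetry backend:

```python
import functools
import time

TRACE = []  # in-memory trace sink for the sketch

def traced(step_name):
    """Record latency and status for each agent step it wraps."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                result = fn(*args, **kwargs)
                status = "ok"
                return result
            except Exception:
                status = "error"
                raise
            finally:
                TRACE.append({"step": step_name, "status": status,
                              "latency_s": time.monotonic() - start})
        return wrapper
    return deco

@traced("triage")
def triage(alert):
    """Hypothetical agent step: classify an incoming security alert."""
    return {"severity": "high" if "exploit" in alert else "low"}

print(triage("possible exploit detected"))
print(TRACE[-1]["step"], TRACE[-1]["status"])
```

Because every step emits the same record shape, failure detection and latency alerting reduce to queries over the trace stream.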
Requirements:
Must-Have
5-8 years of software engineering experience, with at least 2-3 years focused on AI/ML engineering
Hands-on experience building production-grade LLM applications - RAG, agents, tool use, or fine-tuning
Proven experience designing and running multi-agent systems in production: orchestration patterns, agent state management, retries, and graceful degradation
Experience monitoring and observing AI agents in production - execution tracing, latency tracking, failure detection, and alerting (e.g., LangSmith, Arize, custom observability stacks)
Proficiency with agentic frameworks: LangChain, LangGraph, and/or AWS Bedrock AgentCore
Strong Python skills and comfort working across the full AI application stack
Experience designing and operating MLOps pipelines (model versioning, deployment, monitoring)
Solid understanding of transformer-based models, embeddings, and vector databases (e.g., Pinecone, Weaviate, pgvector)
Comfortable working in cloud environments (AWS, GCP, or Azure) and containerized deployments (Docker, Kubernetes)
Strong problem-solving skills and ability to work autonomously in a fast-paced environment
Nice-to-Have
Background in cybersecurity - threat detection, SIEM, SOC automation, or security data analysis - a significant plus for this role
Familiarity with networking concepts (SDN, cloud-native networking, BGP, telemetry)
Experience with model evaluation and benchmarking (LLM-as-judge, RAGAS, or custom eval harnesses)
Exposure to MCP (Model Context Protocol) for tool-augmented agentic workflows
Prior experience in enterprise SaaS, networking, or telecom domains
Publications, open-source contributions, or projects in the LLM/GenAI or agentic AI space
Our Stack
Python, PyTorch, OpenAI / Anthropic APIs, LangChain, LangGraph, AWS Bedrock AgentCore, LangSmith, Kubernetes, Kafka, Elasticsearch, AWS, PostgreSQL, GitHub, Jira, Confluence.
This position is open to all candidates.
 
30/03/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Required ML Data Engineer
Israel: Tel Aviv / Hybrid
R&D | Full Time | Job Id: 24792
Your Impact & Responsibilities:
As a Data Engineer - AI Technologies, you will be responsible for building and operating the data foundation that enables our LLM and ML research: from ingestion and augmentation, through labeling and quality control, to efficient data delivery for training and evaluation.
You will:
Own data pipelines for LLM training and evaluation
Design, build and maintain scalable pipelines to ingest, transform and serve large-scale text, log, code and semi-structured data from multiple products and internal systems.
Drive data augmentation and synthetic data generation
Implement and operate pipelines for data augmentation (e.g., prompt-based generation, paraphrasing, negative sampling, multi-positive pairs) in close collaboration with ML Research Engineers.
Build tagging, labeling and annotation workflows
Support human-in-the-loop labeling, active learning loops and semi-automated tagging. Work with domain experts to implement tools, schemas and processes for consistent, high-quality annotations.
Ensure data quality, observability and governance
Define and monitor data quality checks (coverage, drift, anomalies, duplicates, PII), manage dataset versions, and maintain clear documentation and lineage for training and evaluation datasets.
Optimize training data flows for efficiency and cost
Design storage layouts and access patterns that reduce training time and cost (e.g., sharding, caching, streaming). Work with ML engineers to make sure the right data arrives at the right place, in the right format.
Build and maintain data infrastructure for LLM workloads
Work with cloud and platform teams to develop robust, production-grade infrastructure: data lakes / warehouses, feature stores, vector stores, and high-throughput data services used by training jobs and offline evaluation.
Collaborate closely with ML Research Engineers and security experts
Translate modeling and security requirements into concrete data tasks: dataset design, splits, sampling strategies, and evaluation data construction for specific security use cases.
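The data quality responsibilities above (duplicates, coverage, PII) can be made concrete with a small report function. This is a hedged, stdlib-only sketch: the thresholds, field names, and the naive email regex are illustrative assumptions, not a production policy.

```python
import re
from collections import Counter

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # naive PII pattern, illustration only

def quality_report(records, required_fields=("text", "label")):
    """Return duplicate / coverage / PII counts for a list of dict records."""
    texts = [r.get("text", "") for r in records]
    # count extra copies beyond the first occurrence of each exact text
    dupes = sum(c - 1 for c in Counter(texts).values() if c > 1)
    # records missing or blank on any required field
    missing = sum(1 for r in records
                  if any(f not in r or r[f] in ("", None) for f in required_fields))
    pii = sum(1 for t in texts if EMAIL_RE.search(t))
    return {"n": len(records), "duplicates": dupes,
            "missing_fields": missing, "pii_hits": pii}

data = [
    {"text": "login failed for admin", "label": "auth"},
    {"text": "login failed for admin", "label": "auth"},  # exact duplicate
    {"text": "contact bob@example.com", "label": "pii"},
    {"text": "disk full on node-7"},                      # missing label
]
print(quality_report(data))  # {'n': 4, 'duplicates': 1, 'missing_fields': 1, 'pii_hits': 1}
```

Real pipelines would run checks like these as gates before a dataset version is published, alongside drift and anomaly detection.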
Requirements:
3+ years of hands-on experience as a Data Engineer or ML/Data Engineer, ideally in a product or platform team.
Strong programming skills in Python and experience with at least one additional language commonly used for data / backend (e.g., SQL, Scala, or Java).
Solid experience building ETL / ELT pipelines and batch/stream processing using tools such as Spark, Beam, Flink, Kafka, Airflow, Argo, or similar.
Experience working with cloud data platforms (e.g., AWS, GCP, Azure) and modern data storage technologies (object stores, data warehouses, data lakes).
Good understanding of data modeling, schema design, partitioning strategies and performance optimization for large datasets.
Familiarity with ML / LLM workflows: train/validation/test splits, dataset versioning, and the basics of model training and evaluation (you don't need to be the primary model researcher, but you understand what the models need from the data).
Strong software engineering practices: version control, code review, testing, CI/CD, and documentation.

Ability to work independently and in collaboration with ML engineers, researchers and security experts, and to translate high-level requirements into concrete data engineering tasks. 
Nice to Have 
Experience supporting LLM or NLP workloads, including dataset construction for pre-training / fine-tuning, or retrieval-augmented generation (RAG) pipelines. 
Familiarity with ML tooling such as experiment tracking (e.g., Weights & Biases, MLflow) and ML-focused data tooling (feature stores, vector databases). 
Background in security / cyber domains (logs, alerts, incidents, SOC workflows) or other high-volume, high-variance data environments. 
This position is open to all candidates.
 
09/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
Required ML Engineering Team Lead - Applied AI Engineering Group
The Dream Job
It starts with you - a technical leader driven to build both the ML platform and the engineering team behind it. You care about reliable infrastructure, great developer experience, and growing engineers through real ownership. You'll set the technical direction for our ML platform - training pipelines, model serving, feature stores, experiment tracking, and compute orchestration - shaping how models reach production across cloud and on-prem, including air-gapped deployments. A significant part of the platform supports large language models, with unique challenges across training, evaluation, and inference in mission-critical environments. You stay close enough to the codebase to debug production issues, unblock your engineers, and make sound architecture calls.
If you want to make a meaningful impact, join our mission and lead the team that builds the ML platform driving Sovereign AI products - this role is for you.
The Dream-Maker Responsibilities
Set technical direction for the ML platform - training pipelines, model serving, feature stores, experiment tracking, and compute orchestration - through RFCs, prototypes, design reviews, and build-vs-buy decisions
Lead and grow a team of ML Engineers - hire, mentor, pair on hard problems, and raise the bar through code and design reviews
Contribute to critical systems, debug production issues, and maintain deep context on the codebase to inform technical decisions
Own operational excellence for model serving - set and enforce SLAs, run capacity planning, and keep compute costs predictable
Establish ML engineering standards - reproducible experiments, automated evals, model packaging, CI/CD for models, and observability
Support the full lifecycle of our models - from training on domain-specific data to low-latency inference powering production systems
Work closely with Data Platform, AI, Data Science, and Product teams - translate business priorities into engineering work and manage cross-team dependencies
Measure and improve developer experience - deploy friction, onboarding time, CI turnaround - as seriously as model performance.
Requirements:
6+ years in software engineering, ML engineering, or platform engineering, with hands-on experience building and operating ML infrastructure at scale.
2+ years leading an engineering team - hiring, mentoring, conducting design reviews, and shipping alongside your team
Engineering craft - Strong Python, distributed systems design, testing, secure coding, API design, CI/CD discipline, and production ownership.
ML platform & serving - Model serving frameworks (e.g., Triton, TorchServe, vLLM, Ray Serve); model packaging, deployment pipelines, and inference optimization
Training infrastructure - Distributed training pipelines (e.g., PyTorch, JAX); experiment orchestration and reproducibility
ML lifecycle tooling - Feature stores, model registries, experiment tracking (e.g., MLflow, Weights & Biases); dataset versioning and lineage
Data pipelines - Building training and inference data pipelines; familiarity with tools like Spark, Airflow/Dagster, and streaming ingestion
Comfortable with AI coding tools like Cursor, Claude Code, or Copilot
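The reproducibility and experiment-tracking expectations above often reduce to one idea: every run gets a deterministic ID derived from its full config, so identical configs map to identical runs. A dependency-free sketch of that idea (trackers like MLflow or Weights & Biases add far more machinery; names here are illustrative assumptions):

```python
import hashlib
import json

def run_id(config: dict) -> str:
    """Deterministic short ID: SHA-256 over the canonical JSON of the config."""
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

cfg_a = {"model": "tiny-llm", "lr": 3e-4, "seed": 42}
cfg_b = {"seed": 42, "lr": 3e-4, "model": "tiny-llm"}  # same config, different key order

assert run_id(cfg_a) == run_id(cfg_b)                   # key order does not change the ID
assert run_id({**cfg_a, "lr": 1e-4}) != run_id(cfg_a)   # any config change yields a new ID
print(run_id(cfg_a))
```

Keying artifacts, metrics, and cached datasets by such an ID is what makes "reproducible experiments" enforceable rather than aspirational.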
Nice to Have:
Experience operating in constrained environments - on-premise, private cloud, or air-gapped deployments
Hands-on experience with simulation environments, synthetic data generation, or reinforcement learning workflows
Platform & infra - Kubernetes, AWS, Terraform or similar IaC, CI/CD, observability, incident response
Hands-on data science or applied ML experience.
This position is open to all candidates.
 
09/04/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a motivated and experienced Machine Learning Platform Engineer to join our dynamic team.

In this role, you will collaborate closely with data scientists and DevOps professionals to design and build the infrastructure, ecosystem libraries, and pipelines that power our data science initiatives. You will take ownership of model development, monitoring, and maintenance, working hand-in-hand with data scientists on a daily basis.

If you're passionate about AI, machine learning, and writing high-quality code, and eager to contribute to innovative, impactful do-good projects in the digital health space, we'd love to hear from you!

What you'll be doing:
Design, develop, and maintain our machine learning ecosystem libraries.
Build and manage data science code, Docker images, and Kubeflow Pipelines (KFP).
Create and maintain CI scripts to ensure seamless integration and delivery.
Conduct thorough code reviews to uphold high-quality standards.
Collaborate closely with data scientists, understanding and addressing their evolving needs.
Work alongside software developers to seamlessly integrate machine learning models into production systems.
Stay current with the latest advancements in machine learning, leveraging innovative techniques to enhance the company's products and services.
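The pipeline work described above follows the pattern tools like Kubeflow Pipelines (KFP) formalize: small, typed steps composed into a DAG. Real KFP wraps each step in a container image; this dependency-free sketch shows only the composition idea, with all step names and values invented for illustration.

```python
def load_data() -> list:
    """Step 1: produce raw samples (stand-in for a real data source)."""
    return [1.0, 2.0, 3.0, 4.0]

def preprocess(xs: list) -> list:
    """Step 2: center the samples to zero mean."""
    mean = sum(xs) / len(xs)
    return [x - mean for x in xs]

def train(xs: list) -> dict:
    """Step 3: stand-in 'model' recording a summary statistic (mean square)."""
    return {"weight": sum(x * x for x in xs) / len(xs)}

def pipeline() -> dict:
    """Wire step outputs to step inputs, as a KFP pipeline definition would."""
    return train(preprocess(load_data()))

model = pipeline()
print(model)  # {'weight': 1.25}
```

In KFP each function would carry a component decorator and run in its own container, with the framework handling artifact passing and caching between steps.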
Requirements:
What we're looking for:
5+ years in software engineering with experience in backend/platform roles.
5+ years of experience with Python.
Proficiency in another language, such as C++, Rust, Java, or Go, is an advantage.
2+ years of experience working with cloud platforms such as Google Cloud (preferred), Azure, or AWS, including familiarity with ML workflow frameworks like KFP or Vertex Pipelines.
Solid experience in ML/AI development (a must).
Experience with inference optimization (e.g., vLLM) and fine-tuning (e.g., Axolotl, Hugging Face).
Expertise with transformers, PyTorch, CUDA, and other low-level ML libraries.
Familiarity with Docker and Kubernetes.
Excellent problem-solving skills and a proactive attitude, with a strong focus on code quality and optimization.
Collaborative mindset with the ability to work closely with cross-functional teams. Strong communication and teamwork skills are essential.
This position is open to all candidates.
 