Senior AI Ops / MLOps Engineer

Posted 17 hours ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
At our company, we aren't building a single, generic chatbot. We are building a Composable AI Microservice Architecture, a swarm of hundreds of hyper-specialized AI services, each meticulously "programmed" to solve small, focused tasks with high precision. This fleet powers Ava, our AI support engine, and a suite of cutting-edge generative tools for travel and expense management.
As a Senior AI Ops / MLOps Engineer, you are the architect of the platform that makes this scale possible. You will move beyond traditional MLOps to manage a "factory" of Language Models. Your challenge is one of orchestration and standardization, ensuring that every service in the swarm meets a rigorous bar for quality, reliability, and cost-efficiency.
What You'll Do
Orchestrate the AI Fleet: Build and own the runtime environment for 100+ specialized AI services. Manage model routing, context versioning, and standardized memory/history stores.
High-Density Inference Optimization: Design and implement SageMaker Multi-Model Endpoints (MME) and Inference Components to serve multiple tuned SLMs per GPU, maximizing hardware utilization while minimizing latency.
Deterministic Service Excellence: Treat reliability as a layered engineering problem. Build deterministic "shells" around probabilistic LM outputs, prioritizing data-layer validation and strict serialization.
Automated Evaluation & Observability: Implement "LLM-as-a-judge" patterns and automated benchmarking to detect semantic drift and hallucinations across the fleet before they impact the user.
Standardize the Workflow: Obsess over building reusable patterns and Terraform-based infrastructure that eliminate "snowflake" configurations, allowing us to deploy new specialized AI tasks in minutes.
Agency Strategy: Partner with AI Researchers to find the "Goldilocks zone" for agentic autonomy, balancing the flexibility of LLM tool use with the precision required for production stability.
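The "LLM-as-a-judge" evaluation gate described above can be sketched as a pre-deploy check. Everything here is illustrative rather than the company's actual pipeline: the token-overlap `judge` is a stand-in for a call to a strong judge model with a scoring rubric, and the `gate` threshold is an assumed quality bar.

```python
from dataclasses import dataclass

@dataclass
class EvalCase:
    prompt: str
    reference: str   # gold answer for this specialized task
    candidate: str   # output produced by the service under test

def judge(case: EvalCase) -> float:
    """Stand-in for an LLM judge call: score candidate vs. reference in [0, 1].
    A real pipeline would prompt a strong model with a task-specific rubric."""
    ref = set(case.reference.lower().split())
    cand = set(case.candidate.lower().split())
    return len(ref & cand) / max(len(ref), 1)  # crude token overlap as a placeholder

def gate(cases: list[EvalCase], threshold: float = 0.8) -> bool:
    """Block a deploy when the mean judge score drops below the fleet-wide bar."""
    mean_score = sum(judge(c) for c in cases) / len(cases)
    return mean_score >= threshold
```

Running such a gate in CI for every service in the fleet is what turns "semantic drift" from a user-visible incident into a failed build.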
Requirements:
Experience: 5+ years in SRE, Platform Engineering, or MLOps, with at least 2 years focused on deploying LLMs/SLMs in production environments.
SageMaker Mastery: Deep hands-on expertise with AWS SageMaker, specifically configuring Multi-Model Endpoints (MME), Inference Components, and GPU-backed instances (G5/P4).
SLM Expertise: Proven experience with Small Language Models (e.g., Mistral, Llama 3, Phi) and parameter-efficient fine-tuning (PEFT) deployment strategies like LoRA/QLoRA.
Technical Stack:
Languages: Strong proficiency in Python and Terraform.
Orchestration: Experience with Docker, Kubernetes (EKS), or AWS ECS/Fargate.
Data: Familiarity with Snowflake and Vector Databases.
The "AI Ops" Mindset: You understand that AI at scale is a statistical challenge. You are comfortable debugging issues at the data/serialization layer rather than defaulting to prompt tweaks.
CI/CD & Automation: Experience building robust pipelines (Jenkins, GitHub Actions) for non-deterministic software, including automated "eval" stages.
Education: BS or MS in Computer Science, Engineering, Mathematics, or a related technical field.
Must have:
Python, Terraform, SageMaker.
This position is open to all candidates.
 
Posted 22 hours ago
CrowdStrike
Location: Tel Aviv-Yafo
Job Type: Full Time
CrowdStrike's Data Science Studio is seeking a pioneering Senior MLOps Engineer to establish and lead our MLOps function from the ground up. As the first MLOps engineer in the studio, you will play a foundational role in shaping how we build, deploy, and scale machine learning systems that protect thousands of organizations worldwide.

This is a unique opportunity to define the technical strategy, influence the technology stack, and architect the infrastructure that will power our AI/ML-driven security solutions for years to come.

This role combines strategic vision with hands-on execution. You'll work at the intersection of data science, engineering, and production operations - building production-grade systems that operate at immense scale while collaborating closely with highly technical data scientists and ML engineering teams across CrowdStrike.

What You'll Do:
- Architect MLOps infrastructure from the ground up: Design and implement the foundational MLOps platform, establishing best practices, tooling, and workflows that will scale with our growing data science initiatives
- Define technology strategy: Evaluate, select, and integrate MLOps technologies and platforms that best serve our needs - from experiment tracking and model versioning to deployment pipelines and monitoring systems
- Build production-grade ML pipelines: Develop robust, scalable pipelines for model training, validation, deployment, and monitoring that handle massive data volumes and ensure reliability in production
- Enable data scientist productivity: Create tools, frameworks, and automation that empower data scientists to move quickly from research to production while maintaining high quality and reliability standards
- Establish monitoring and observability: Implement comprehensive monitoring, logging, and alerting systems to ensure ML models perform optimally in production and issues are detected proactively
- Drive MLOps culture and practices: Champion best practices in ML engineering, CI/CD for ML, model governance, and reproducibility across the data science organization
- Collaborate cross-functionally: Partner closely with data scientists to understand their workflows and pain points, and work with ML engineering teams to ensure seamless integration with broader platform capabilities
- Scale for the future: Design systems with scalability, security, and maintainability in mind, anticipating the needs of a rapidly growing ML portfolio
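The monitoring-and-observability responsibility above includes detecting when a production model's score distribution drifts from its training-time baseline. A minimal sketch of one common approach, the Population Stability Index, is below; the bin count, smoothing constant, and 0.2 alert threshold are conventional rules of thumb, not anything specified in the listing.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between two score samples in [0, 1).
    Rule of thumb: PSI > 0.2 suggests significant distribution drift."""
    def hist(xs):
        counts = [0] * bins
        for x in xs:
            counts[min(int(x * bins), bins - 1)] += 1
        # smoothed fractions so empty bins don't blow up the log term
        return [(c + 1e-6) / (len(xs) + 1e-6 * bins) for c in counts]
    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def drift_alert(baseline: list[float], live: list[float], threshold: float = 0.2) -> bool:
    """Fire an alert when the live score distribution has shifted too far."""
    return psi(baseline, live) > threshold
```

In practice this check would run on a schedule against a rolling window of live predictions, paging proactively rather than waiting for downstream metrics to degrade.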
Requirements:
- 6+ years of experience in MLOps, ML engineering, DevOps, or related infrastructure roles with focus on machine learning systems
- Production ML systems expertise: Proven track record of building and operating ML systems at scale in production environments
- Strong infrastructure and automation skills: Deep knowledge of cloud platforms (AWS, Azure, or GCP), containerization (Docker, Kubernetes), and infrastructure-as-code (Terraform, CloudFormation)
- ML pipeline proficiency: Hands-on experience with ML workflow orchestration tools (e.g., Airflow, Kubeflow, MLflow, Metaflow) and building end-to-end ML pipelines
- Programming excellence: Strong coding skills in Python; experience with additional languages is a plus
- CI/CD and DevOps practices: Expertise in building automated deployment pipelines, version control, and modern DevOps methodologies
- Strategic and hands-on balance: Ability to think architecturally about long-term solutions while rolling up your sleeves to implement them
- Collaborative mindset: Excellent communication skills and ability to work effectively with data scientists, engineers, and stakeholders with varying technical backgrounds
- Startup mentality: Comfort with ambiguity and ability to build from scratch in a fast-paced environment
This position is open to all candidates.
 
09/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior AI Engineer - Applied AI Engineering Group
The Dream Job
It starts with you - an engineer driven to build the agentic AI platform that turns LLMs into reliable, production-grade capabilities. You care about clean APIs, well-defined service boundaries, and systems that teams can build on with confidence. Dream is AI-first across the board - every team builds and operates agents. You'll architect and ship the platform that makes this possible: agent orchestration frameworks, LLM gateways, evaluation pipelines, tool-calling infrastructure, and retrieval systems. Without this platform, agents don't ship - you own the layer that turns AI research into Sovereign AI products, deployed across cloud and on-prem environments.
If you want to make a meaningful impact, join our mission and build the agentic AI platform that drives Sovereign AI products - this role is for you.
The Dream-Maker Responsibilities
Design and build agentic systems - single and multi-agent workflows with planning, memory, context engineering, and tool use - for both internal automation and product-facing autonomous capabilities operating over long time horizons.
Build and operate the AI platform layer - LLM gateways, prompt management, structured output handling, tool-calling infrastructure, and cost/latency optimization - deployed on Kubernetes, consumed by every team for their agentic work.
Own the agent framework layer - orchestration primitives, execution environments, state management, and sandboxed tool execution - giving every team the building blocks to create and operate their own agents.
Build evaluation infrastructure that gives teams confidence in agent behavior - automated LLM and agent evals for quality, correctness, safety, latency, cost, and regressions, including human-in-the-loop oversight for mission-critical workflows.
Productionize and harden backend services (APIs, gRPC, async workers) that integrate LLMs - with proper error handling, retries, circuit breakers, and high-availability patterns.
Own RAG pipelines and retrieval systems - indexing, chunking, embedding, vector database management, filtering, and relevance tuning for production retrieval.
Optimize performance and cost across the AI stack - model routing, caching, batching, and inference cost management.
Ship shared tooling - libraries, SDKs, agent templates, and documentation - while working closely with ML Platform, Data Platform, DevOps, and other teams across the Applied AI Engineering group. Own architecture, documentation, and operations end-to-end.
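The hardening bullet above names retries, circuit breakers, and high-availability patterns around LLM-backed services. As a hedged illustration of the circuit-breaker half of that (class name, thresholds, and error handling are all assumptions, not the team's actual code):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after `max_failures` consecutive errors,
    rejects calls until `reset_after` seconds pass, then lets one probe through."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: LLM backend unavailable")
            self.opened_at = None  # half-open: allow a single probe request
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

Wrapping every outbound LLM call this way keeps a flapping model endpoint from stalling upstream agents: after the threshold trips, callers fail fast instead of queueing on timeouts.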
Requirements:
5+ years in backend or distributed systems engineering, with 2+ years focused on production systems that integrate AI/ML models or LLMs.
Engineering craft - Strong Python, Go, or Java, system architecture, API design, testing, and secure coding practices.
Agentic systems - Experience designing and building agent orchestration, tool-use systems, and autonomous workflows; familiarity with frameworks like LangGraph or similar, or having built equivalent from scratch
Backend engineering - Experience building production APIs and services (FastAPI or similar); async programming, service architecture, high-availability, and reliability patterns (retries, circuit breakers, backpressure)
LLM integration - Hands-on experience integrating LLMs via SDKs and APIs; context engineering, structured outputs, tool calling, and model routing
RAG & retrieval - Experience with embedding pipelines, vector databases (e.g., Milvus, Qdrant, Pinecone), chunking strategies, and relevance tuning
Evaluation & observability - Experience designing LLM and agent evals, monitoring AI system quality, and building observability for non-deterministic systems
Nice to Have:
Platform & infra - Kubernetes, AWS, Terraform or similar IaC, CI/CD, container orchestration, deploying and operating production services
Experience with MCP or similar tool-use protocols for agent-to-service communication
Hands-on ML experience.
This position is open to all candidates.
 
09/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
Required AI Engineering Team Lead - Applied AI Engineering Group
Tel Aviv Full-time
The Dream Job
It starts with you - a technical leader driven to build both the agentic AI platform and the engineering team behind it. You care about backend quality, platform reliability, and growing engineers through real ownership. We are AI-first across the board - every team builds and operates agents. You'll set the technical direction for the platform that makes this possible: agent orchestration frameworks, LLM gateways, evaluation infrastructure, tool-calling systems, and retrieval pipelines. Without this platform, agents don't ship - you own the layer that turns AI research into Sovereign AI products, deployed across cloud and on-prem environments. You stay close enough to the codebase to debug production incidents, unblock your engineers, and make sound architecture calls.
If you want to make a meaningful impact, join our mission and lead the team that builds the agentic AI platform driving Sovereign AI products - this role is for you.
The Dream-Maker Responsibilities
Architect and evolve the AI platform - agent orchestration, LLM gateways, context engineering pipelines, evaluation infrastructure, tool-calling systems, and retrieval pipelines - through RFCs, prototypes, and design reviews.
Lead and grow a small team of AI Engineers building the agent framework, production backend services, and AI platform infrastructure - hire, mentor, pair on hard problems, and raise the bar through hands-on code and design reviews.
Contribute to critical systems, debug production incidents, and maintain enough codebase context to make sound technical calls.
Own reliability across AI and agent services - set and enforce SLAs, build observability for non-deterministic systems, and harden tool execution environments for cost and security.
Set the standard for AI engineering practices - agent testing strategies, evaluation frameworks with human-in-the-loop oversight, retrieval quality benchmarks, and CI/CD for AI systems.
Work closely with ML Platform, Data Platform, DevOps, Data Science, and Product teams across the Applied AI Engineering group - ensure the AI platform evolves to serve teams building agentic workflows across the organization.
Measure and improve developer experience - deploy friction, onboarding time, CI turnaround - as seriously as system performance.
Requirements:
6+ years in backend software engineering, with 4+ years focused on production systems that integrate AI/ML models or LLMs.
2+ years leading an engineering team - hiring, mentoring, conducting design reviews, and shipping alongside your team.
Engineering craft - Strong Python, Go, or Java, system architecture, API design, testing, and secure coding practices.
Agentic systems & LLM integration - Deep understanding of agent orchestration, tool-use architectures, LLM integration patterns, context engineering, and frameworks like LangGraph or similar, or custom-built equivalents
Backend & platform engineering - Experience building and operating production APIs, services, and platform infrastructure at scale; comfortable working with relational databases, message queues, and event-driven architectures
RAG & retrieval - Experience with production RAG pipelines, vector databases, embedding systems, and retrieval quality
Evaluation & observability - Experience building LLM and agent eval infrastructure, monitoring AI quality, and observability for non-deterministic systems
Nice to Have:
Platform & infra - Kubernetes, AWS, Terraform or similar IaC, CI/CD, service architecture, incident management
Experience with MCP or similar tool-use protocols for agent-to-service communication
Hands-on ML experience - model training, fine-tuning, or working directly with ML pipelines.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior AI Engineer to join our Cybersecurity team in Tel Aviv. You will design, build, and productionize LLM-powered applications, multi-agent systems, and MLOps infrastructure that power our company's next-generation cybersecurity capabilities. This is a high-impact, hands-on role at the intersection of applied AI, agentic systems, and network security.
What You'll Do
Design and develop LLM-powered security features and internal AI tools, including RAG pipelines, multi-agent workflows, and prompt-engineered systems tailored for cybersecurity use cases
Architect and operate multi-agent systems in production - including agent orchestration, inter-agent communication, task delegation, and failure handling at scale
Build robust agent monitoring and observability pipelines: tracing agent execution, detecting drift or failure, alerting on anomalous behavior, and maintaining agent reliability SLAs
Build and maintain scalable MLOps infrastructure: model serving, evaluation frameworks, experiment tracking, and CI/CD for ML models
Work with internal datasets (network telemetry, security logs, threat intelligence) to fine-tune and adapt foundation models for domain-specific detection and response tasks
Partner with the Cybersecurity, R&D, and infrastructure teams to define AI-driven security features and deliver them end-to-end
Establish best practices for model observability, safety, and responsible AI deployment within the organization
Stay current with the fast-moving LLM/GenAI and agentic AI ecosystem and evaluate emerging frameworks, models, and tools for adoption.
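The first responsibility above covers RAG pipelines for security use cases. One small but consequential piece of any such pipeline is chunking source documents before embedding; a minimal sketch with overlapping fixed-size windows follows (the sizes are illustrative defaults, and production systems often chunk on semantic boundaries instead):

```python
def chunk_text(text: str, size: int = 400, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap, so a fact
    falling on a boundary still appears intact in at least one chunk."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    step = size - overlap
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks
```

For security logs and threat-intel documents the overlap matters: an indicator of compromise split across two chunks is invisible to retrieval unless one chunk carries it whole.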
Requirements:
Must-Have
5-8 years of software engineering experience, with at least 2-3 years focused on AI/ML engineering
Hands-on experience building production-grade LLM applications - RAG, agents, tool use, or fine-tuning
Proven experience designing and running multi-agent systems in production: orchestration patterns, agent state management, retries, and graceful degradation
Experience monitoring and observing AI agents in production - execution tracing, latency tracking, failure detection, and alerting (e.g., LangSmith, Arize, custom observability stacks)
Proficiency with agentic frameworks: LangChain, LangGraph, and/or AWS Bedrock AgentCore
Strong Python skills and comfort working across the full AI application stack
Experience designing and operating MLOps pipelines (model versioning, deployment, monitoring)
Solid understanding of transformer-based models, embeddings, and vector databases (e.g., Pinecone, Weaviate, pgvector)
Comfortable working in cloud environments (AWS, GCP, or Azure) and containerized deployments (Docker, Kubernetes)
Strong problem-solving skills and ability to work autonomously in a fast-paced environment
Nice-to-Have
Background in cybersecurity - threat detection, SIEM, SOC automation, or security data analysis - a significant plus for this role
Familiarity with networking concepts (SDN, cloud-native networking, BGP, telemetry)
Experience with model evaluation and benchmarking (LLM-as-judge, RAGAS, or custom eval harnesses)
Exposure to MCP (Model Context Protocol) for tool-augmented agentic workflows
Prior experience in enterprise SaaS, networking, or telecom domains
Publications, open-source contributions, or projects in the LLM/GenAI or agentic AI space
Our Stack
Python, PyTorch, OpenAI / Anthropic APIs, LangChain, LangGraph, AWS Bedrock AgentCore, LangSmith, Kubernetes, Kafka, Elasticsearch, AWS, PostgreSQL, GitHub, Jira, Confluence.
This position is open to all candidates.
 
Posted 17 hours ago
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a highly skilled Senior Machine Learning Engineer to lead our transition from on-demand, third-party LLM APIs to a fully self-hosted, scalable model ecosystem.
Our core product is an advanced, agentic support chatbot capable of complex reasoning, API tool calling, database lookups, and orchestrating specialized Small Language Models (SLMs) for targeted NLP tasks. As we scale, our current deployment infrastructure (AWS SageMaker) is becoming unsustainable. You will be responsible for architecting, deploying, and optimizing an infrastructure capable of supporting 50 to 100 distinct models ranging from 100M to 70B parameters.
What You'll Do:
Inference Optimization: Deploy and manage large-scale models using high-performance inference engines (like vLLM) to ensure low latency and high throughput for our agentic chatbot.
Agentic Workflows: Develop and refine the chatbot's agentic capabilities, ensuring reliable tool-use, routing, and interactions between massive LLMs and specialized SLMs.
Model Fine-Tuning: Design and execute fine-tuning strategies to improve model accuracy on specific domain tasks and tool-calling execution.
Rigorous Evaluation: Build comprehensive offline and online evaluation frameworks to constantly measure model performance and business impact through structured A/B testing.
Requirements:
Core Engineering & AI Frameworks
Strong proficiency in Python and Bash scripting.
Deep experience with PyTorch and the Hugging Face ecosystem.
Experience using AI coding assistants natively in the terminal, specifically Claude Code, to accelerate development workflows.
LLMs, Inference & Agents
Proven experience deploying models using vLLM, TGI, or similar high-performance inference servers.
Strong fundamental understanding of LLM architectures, attention mechanisms, and generation parameters.
Hands-on experience building Agentic systems (ReAct, function/tool calling, RAG).
Expertise in fine-tuning strategies (e.g., SFT, RLHF, DPO) and parameter-efficient techniques (PEFT/LoRA).
Statistics & Model Evaluation
Offline Metrics: Deep understanding of classification/summarization metrics (Precision, Recall, F1, AUC) and retrieval metrics (MRR, NDCG, Precision/Recall @ k).
Online Metrics & A/B Testing: Strong statistical foundation to design and analyze A/B tests safely, including the use of t-tests, Mann-Whitney U tests, and bootstrapping techniques.
Bonus Points
Containerization & Orchestration: Experience with Ray for orchestrating large-scale model deployments across multi-GPU clusters.
Model Quantization: Experience with memory optimization techniques like AWQ, GPTQ, GGUF, or FlashAttention to fit 70B models efficiently onto hardware.
API Development: Proficiency in building robust, asynchronous microservices using FastAPI to serve model requests.
Knowledge of Data Engineering principles: dataset collection, cleaning, processing, and scalable storage.
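The A/B testing requirement above names bootstrapping as one of the expected techniques. A minimal, stdlib-only sketch of a bootstrap confidence interval for the difference in means is below; the resample count, alpha, and fixed seed are illustrative choices, not a prescribed methodology.

```python
import random
import statistics

def bootstrap_diff_ci(control: list[float], treatment: list[float],
                      n_boot: int = 2000, alpha: float = 0.05, seed: int = 0):
    """Percentile-bootstrap CI for mean(treatment) - mean(control).
    If the interval excludes 0, the effect is significant at roughly
    the 1 - alpha level."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        c = [rng.choice(control) for _ in control]      # resample with replacement
        t = [rng.choice(treatment) for _ in treatment]
        diffs.append(statistics.fmean(t) - statistics.fmean(c))
    diffs.sort()
    lo = diffs[int((alpha / 2) * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

Bootstrapping is attractive for online chatbot metrics precisely because it makes no normality assumption, which rarely holds for latency or resolution-rate data.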
This position is open to all candidates.
 
09/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior ML Engineer - Applied AI Engineering Group
The Dream Job
It starts with you - an engineer driven to build the ML platform that turns research into reliable, production-grade intelligence. You care about reproducibility, low-friction experimentation, and infrastructure that earns the trust of the scientists and researchers who depend on it daily. You'll architect and ship our ML platform - training pipelines, model serving, feature stores, experiment tracking, and compute orchestration - turning models into production capabilities across cloud and on-prem, including air-gapped deployments. A significant part of the platform supports large language models, with unique challenges across training, evaluation, and inference in mission-critical environments.
If you want to make a meaningful impact, join our mission and build the ML platform that drives Sovereign AI products - this role is for you.
The Dream-Maker Responsibilities
Build and operate ML training infrastructure - distributed training pipelines, compute scheduling, and reproducible experiment workflows that data scientists rely on daily.
Own model serving and inference systems - packaging, deployment, autoscaling, A/B testing, canary rollouts, and latency/cost optimization for production models.
Run feature stores, model registries, and dataset versioning - enabling self-serve feature engineering, model lineage, and reproducible experiments across teams.
Build experiment tracking and evaluation infrastructure - automated evals, comparison dashboards, drift detection, and monitoring that give teams visibility into model behavior and performance.
Build and maintain production pipelines for training, fine-tuning workflows, and serving domain models - owning reliability, reproducibility, and scale.
Build and maintain the monitoring and observability layer - model performance tracking, data and prediction drift detection, data quality validation, and alerting.
Improve performance and cost across the ML stack - training throughput, inference latency, batch vs. real-time tradeoffs, and compute cost management.
Ship shared tooling - libraries, templates, CI/CD for models, IaC, and runbooks - while collaborating across Data Platform, AI, Data Science, Engineering, and DevOps. Own architecture, documentation, and operations end-to-end.
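The serving bullet above mentions canary rollouts for production models. The routing core of a canary is small enough to sketch; this version (names and the 5% default are illustrative, not the team's platform) splits traffic probabilistically between a stable and a candidate model:

```python
import random

def make_canary_router(stable_fn, canary_fn, canary_fraction: float = 0.05, seed=None):
    """Route a small fraction of inference traffic to the canary model;
    the fraction is widened gradually as evals and error rates stay healthy."""
    rng = random.Random(seed)

    def route(request):
        model = canary_fn if rng.random() < canary_fraction else stable_fn
        return model(request)

    return route
```

In a real platform the fraction would live in config, and the rollout controller would watch the canary's eval and drift metrics before promoting it to 100%.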
Requirements:
5+ years in software engineering, with 2+ years focused on ML infrastructure, MLOps, or data-intensive systems
Engineering craft - Strong Python, distributed systems design, testing, secure coding, API design, CI/CD discipline, and production ownership.
ML platform & serving - Model serving frameworks (e.g., Triton, TorchServe, vLLM, Ray Serve); model packaging, deployment pipelines, and inference optimization
Training infrastructure - Distributed training pipelines (e.g., PyTorch, JAX); experiment orchestration and reproducibility
ML lifecycle tooling - Feature stores, model registries, experiment tracking (e.g., MLflow, Weights & Biases); dataset versioning and lineage
Data pipelines - Building training and inference data pipelines; familiarity with tools like Spark, Airflow/Dagster, and streaming ingestion
Comfortable with AI coding tools like Cursor, Claude Code, or Copilot
Nice to Have:
Experience operating in constrained environments - on-premise, private cloud, or air-gapped deployments
Hands-on experience with simulation environments, synthetic data generation, or reinforcement learning workflows
Platform & infra - Kubernetes, AWS, Terraform or similar IaC, CI/CD, observability, incident response
Hands-on data science or applied ML experience.
This position is open to all candidates.
 
Posted 17 hours ago
Location: Tel Aviv-Yafo
Job Type: Full Time
The Vision
We believe that Software Engineering is the highest-leverage human workflow in the company. In an AI-native world, the bottleneck is no longer how fast we can type, but how quickly we can validate, iterate, and deploy. Engineering excellence is our ultimate competitive advantage. As a Senior Software Engineer for AI Assisted Engineering Support, you will build the "intelligence layer" for our development teams. You aren't just building tools; you are building agents that understand our codebase, our standards, and our intent. Your goal is to move the company toward a "Demo to Prod" reality where AI handles the boilerplate, the testing, and the initial PR generation, leaving humans to focus on architecture and high-level logic.
The Mission: Agentic Engineering
Consistent with our "Human Centric Workflows" philosophy, you will treat LLMs as programmable functions grounded in our specific codebase. You will build the specialized assistants that integrate into our IDEs and CI/CD pipelines to unblock developers, automate reviews, and ensure that "gold-standard" code is the default, not the exception.
What You'll Do:
Build AI Engineering Assistants: Develop and scale the internal agents that assist with code generation, automated refactoring, and documentation.
Enable the "Demo to Prod" Pipeline: Work on the technical implementation of tools that allow for one-shot workflows: moving from a prototype or a spec directly to a production-ready Pull Request.
Deterministic Engineering Evals: Drive quality by prioritizing determinism. You will build the serialization formats and retrieval systems (RAG) that give engineering agents the exact context they need from our repositories to be precise and useful.
Automated Code Stewardship: Create agents that help maintain our "Immune System": automated drift detection, visual regression testing, and security scanning for AI-assisted contributions.
Systemic Optimization: Implement a culture of rigour. You will run experiments across different models and tools, using engineering-specific benchmarks to ensure our assistants are actually increasing velocity and quality.
Global Collaboration: Partner with the US and Israel-based teams to integrate your assistants into our global developer platform and telemetry frameworks.
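The evals bullet above argues that engineering agents are only as good as the repository context they retrieve. As a deliberately naive illustration of that retrieval step (token overlap standing in for the embedding-based search a production RAG system would use; all names here are hypothetical):

```python
def retrieve_context(query: str, files: dict[str, str], top_k: int = 3) -> list[str]:
    """Rank repository files by naive token overlap with the query and
    return the top-k paths - a stand-in for embedding-based retrieval."""
    q = set(query.lower().split())

    def score(path: str) -> int:
        return len(q & set(files[path].lower().split()))

    return sorted(files, key=score, reverse=True)[:top_k]
```

The point of the sketch is the contract, not the scoring: given a developer's intent, return the precise slices of the codebase the agent should see, so its output is grounded rather than generic.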
Requirements:
The "Developer's Developer": You are a Senior Software Engineer who loves building tools for other engineers. You understand the pain points of the modern development lifecycle and want to solve them with AI.
An Agentic Systems Specialist: You are experienced in building agentic flows (using state machines or agent frameworks) and know how to balance agency (allowing the tool to solve the problem) with precision (ensuring it doesn't break prod).
The "Data-First" Builder: You recognize that an AI assistant is only as good as the context it receives. You are skilled at data engineering and know how to serialize complex codebases for LLM consumption.
The "Moat" Builder: You see engineering velocity as a strategic differentiator. You are driven by the goal of making our engineering org so fast and reliable that we out-innovate the market.
This position is open to all candidates.
 
Posted 7 days ago
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Machine Learning Engineer - AI Coding Agents & LLM Infrastructure
Tel Aviv
Full-time
A bit about us:
We are redefining how software gets built. Trusted by over 1M developers, we build AI-first developer experiences powered by state-of-the-art coding agents and code reasoning models. With support for 30+ programming languages and 15+ IDEs, our platform is pushing the limits of LLM-based software engineering - enabling teams to design, write, review, and ship code faster than ever. We're committed to advancing code-native AI models, multi-agent systems, agent orchestration frameworks, memory, and autonomous dev tooling to empower developers at every step of the software lifecycle.
We're growing fast, and our team is passionate about pushing AI engineering to new heights - solving complex problems in LLM training, inference optimization, reasoning, and agent orchestration at scale.
About the Role:
As a Machine Learning Engineer, you'll work on cutting-edge code-focused LLMs and AI agent systems that power our next-generation developer platform. You'll be at the center of research, model training, and productionization of intelligent systems that understand software deeply, collaborate with developers, and help automate engineering workflows end-to-end. Your work will immediately impact millions of engineers worldwide.
Responsibilities:
Push LLM Innovation: Research, design, and fine-tune domain-specific LLMs for code generation, refactoring, debugging, and multi-turn reasoning.
Agent-Oriented Development: Build multi-agent coding systems that integrate retrieval-augmented generation (RAG), code execution, testing, and tool use to create autonomous, context-aware coding workflows.
Production-Grade AI: Own the training-to-inference pipeline for large code models; optimize inference with quantization, distillation, and caching techniques.
Rapid Experimentation: Prototype and validate ideas quickly; leverage reinforcement learning, human feedback, and synthetic data generation to push accuracy and reasoning.
Cross-Functional Collaboration: Partner with product, engineering, and design teams to ship AI-powered features that help developers focus on high-impact work.
Scale the Platform: Contribute to distributed training, scalable serving systems, and GPU/TPU-efficient architectures for ultra-low-latency developer tools.
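To make the "caching techniques" part of the responsibilities above concrete, here is a minimal sketch of a response cache for an inference service. It is illustrative only: the `generate_fn` callable is a hypothetical stand-in for a real model call, and production systems typically key on a hash of the prompt plus sampling parameters rather than the raw string.

```python
from collections import OrderedDict

class InferenceCache:
    """Tiny LRU cache for model responses, keyed on the exact prompt."""

    def __init__(self, max_entries=1024):
        self.max_entries = max_entries
        self._store = OrderedDict()

    def get_or_generate(self, prompt, generate_fn):
        if prompt in self._store:
            self._store.move_to_end(prompt)   # mark as recently used
            return self._store[prompt]
        result = generate_fn(prompt)          # cache miss: run the model
        self._store[prompt] = result
        if len(self._store) > self.max_entries:
            self._store.popitem(last=False)   # evict least recently used
        return result

# Usage with a stand-in "model" that records how often it is invoked:
calls = []
def fake_model(prompt):
    calls.append(prompt)
    return prompt.upper()

cache = InferenceCache(max_entries=2)
first = cache.get_or_generate("fix this bug", fake_model)
second = cache.get_or_generate("fix this bug", fake_model)  # served from cache
```

The payoff for latency-sensitive developer tools is that repeated identical requests skip the model entirely; the second call above never reaches `fake_model`.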
Requirements:
2+ years of hands-on experience designing, training, and deploying machine-learning models
M.Sc. or higher in Computer Science / Mathematics / Statistics or equivalent from a university, or B.Sc. with strong hands-on ML experience
Practical experience with Natural Language Processing (NLP) and LLMs
Experience with data acquisition, data cleaning, and data pipelines
A passion for building products and helping people, both customers and colleagues
An all-around team player and a fast, self-directed learner
Nice to have:
3+ years of development experience with a passion for excellence
Experience building AI coding assistants, code reasoning models, or dev-focused LLM agents.
Familiarity with RAG, function-calling, and tool-using LLMs.
Knowledge of model optimizations (quantization, distillation, LoRA, pruning).
Startup or product-driven ML experience, especially in high-scale, latency-sensitive environments.
Contributions to open-source AI or developer tools.
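Of the model optimizations listed above, LoRA (low-rank adaptation) is easy to sketch in plain arithmetic: instead of updating a full weight matrix W (d_out x d_in), you train a low-rank update B @ A with rank r much smaller than either dimension, so the effective weight is W + scale * (B @ A). This is an illustrative toy, not a real implementation; PEFT-style libraries apply the same idea inside the attention and MLP layers of a frozen transformer.

```python
def matmul(X, Y):
    """Plain-Python matrix multiply for small illustrative matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, scale=1.0):
    """W_eff = W + scale * (B @ A), where B @ A is a rank-r update."""
    delta = matmul(B, A)                      # low-rank update, rank = len(A)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# 2x2 frozen weight with a rank-1 adapter (r = 1):
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]        # r x d_in  = 1 x 2
B = [[0.5], [0.25]]     # d_out x r = 2 x 1
W_eff = lora_effective_weight(W, A, B)
# B @ A = [[0.5, 1.0], [0.25, 0.5]], so W_eff = [[1.5, 1.0], [0.25, 1.5]]
```

The parameter savings are the point: the adapter here has 4 trainable values instead of the 4 in W, but at realistic sizes (say 4096 x 4096 with r = 8) the adapter is hundreds of times smaller than the frozen matrix.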
This position is open to all candidates.
 
Job #8608813
22/03/2026
Job Type: Full Time
We're looking for a Senior AI/MLOps Engineer to join a group that specializes in security and networking, with a particular focus on ML, AI, and agent development. As a Senior AI/MLOps Engineer, you'll build and maintain the infrastructure, tools, and processes necessary to support the AI lifecycle in a production environment. You will collaborate closely with data scientists, software engineers, security architects, and DevOps teams to ensure smooth deployment, modeling, and optimization of AI models. This role involves creative problem solving alongside engineering teams and is pivotal to the continued success of AI networking security.

What you'll be doing:

Developing, improving and optimizing scalable infrastructure for handling and deploying security and networking AI models and agents in production, ensuring high availability, scalability, reproducibility, and performance.

Optimizing AI models and agents for performance, scalability, and resource utilization, considering factors such as latency, efficiency, and cost.

Monitoring and deploying agentic systems, LLMs, and ML models in production.

Designing and implementing frameworks/pipelines for AI training, inference, and experimentation.

Collaborating closely with data scientists, security architects, and software engineers to operationalize and deploy AI models and agents, including packaging and integration with existing systems. Participating in code development and reviews, design document reviews, use case reviews, and test plan reviews.

Collaborating with DevOps teams to integrate pipelines and workflows into the CI/CD process, ensuring flawless deployments and rollbacks.

Building and maintaining monitoring and alerting systems to proactively identify and resolve issues relating to quality, performance and infrastructure.

Implementing access controls, authentication mechanisms, and encryption standards for AI models and data.

Documenting guidelines and standard operating procedures for MLOps/AI processes and sharing knowledge with the wider team.

Developing proofs of concept for new features.
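The monitoring-and-alerting responsibility above often starts with a distribution-drift check on a model signal such as confidence scores. Below is an illustrative sketch of a population stability index (PSI) computed against a reference window; the names, bin count, and the common ~0.2 alert threshold are conventions, not a prescribed implementation, and production systems fix bin edges from the reference window and handle empty bins more carefully.

```python
import math

def population_stability_index(expected, actual, bins=4):
    """Toy drift check: compare the distribution of a production signal
    against a reference window. PSI above ~0.2 commonly triggers an alert."""
    lo = min(expected + actual)
    hi = max(expected + actual)
    width = (hi - lo) / bins or 1.0

    def histogram(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        total = len(values)
        # small epsilon keeps log() finite on empty bins
        return [max(c / total, 1e-6) for c in counts]

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Reference confidences vs. a shifted production window:
reference = [0.9, 0.88, 0.91, 0.87, 0.9, 0.92, 0.89, 0.9]
production = [0.55, 0.6, 0.5, 0.58, 0.62, 0.57, 0.61, 0.56]
psi = population_stability_index(reference, production)
alert = psi > 0.2   # fire an alert when the distribution has shifted
```

A healthy window compared against itself yields a PSI near zero, so the same check can run continuously without tuning per model.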
Requirements:
What we need to see:

BSc/MSc in CS/CE or related field (or equivalent experience).

At least 5 years of experience and a strong background in AI, deploying and monitoring AI/ML models, LLMs, and agents in production systems at scale, including distributed and multi-node environments.

Proficiency in programming languages such as Python, Java, or Scala, along with experience in using ML/AI frameworks and libraries (e.g. TensorFlow, PyTorch).

Proficiency in microservices architecture, container orchestration, cloud platforms, and scalable infrastructure for training and inference workloads.

Knowledge of inference optimization techniques.

Understanding of build infrastructure and CI/CD tools and practices (e.g. GitLab, GitHub Actions, Jenkins).

You are detail-oriented and care deeply about robust, well tested, high-performance code in production environments.

You are proactive, take full ownership of your deliverables, have a can-do approach, and bring excellent communication and collaboration skills, working effectively in multifunctional teams.

Ways to stand out from the crowd:

Knowledge of network protocols and Linux internals.

Security and networking background, with knowledge of security protocols, network architectures, firewalls, intrusion detection systems, and other relevant security and networking concepts.

Experience deploying and optimizing generative models and agents.

Knowledge of network security principles and practices.
This position is open to all candidates.
 
Job #8586605
23/03/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior AI Engineer to join a strong and dynamic AI Engineering team. We are the focal point for AI initiatives, striving to constantly bring innovation and leverage AI capabilities across all company teams and products.
Today, AI is central to how we operate, across the entire organization. It allows us to move fast and release features at a rapid pace, empowers non-technical Forterians to utilize AI tools for increased efficiency, and provides the backdrop for much of the innovation currently occurring in the company.
If this kind of working environment sounds exciting to you, if you understand that Engineering is about building the most effective and elegant solution within a given set of constraints - consider applying for this position.
Why should you join us?
This is a great opportunity to be at the cutting edge of the AI revolution, helping to shape and build the AI platform for the future. Together, we'll build infrastructure for autonomous and interactive agents, put AI guardrails and evaluation frameworks in place to ensure performance and safety, and implement state-of-the-art AI and agentic patterns.
This role presents a unique opportunity to enter the AI domain. For those with some experience in AI infrastructure, it offers the chance to grow within a team that is moving the company from AI experimentation to building and leveraging AI-powered products.
What you will be doing:
Design, build, and maintain reusable AI capabilities - including models, tools, APIs, and platforms that power both internal and customer-facing solutions.
Develop and maintain our internal MCP server that easily and securely exposes our vast data stores to AI agents.
Create and implement robust evaluation frameworks and AI guardrails to safeguard our value and ensure model reliability.
Establish deep expertise and sustainable AI engineering practices.
Promote AI readiness and track adoption across the company to build lasting impact.
Build and optimize RAG (Retrieval-Augmented Generation) systems.
Take full ownership of projects: from gathering requirements from non-technical internal users to development, deployment, and operation.
Act as a consultant and advocate for AI engineering, helping other teams leverage the platforms and tools you build.
Partner with teams across the company to accelerate AI adoption and productization efforts.
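The RAG responsibility above boils down to two steps: retrieve the documents most relevant to a query, then assemble them into the model's context. Here is a minimal sketch using bag-of-words cosine similarity; the function names and document texts are made up for illustration, and real RAG systems use dense embeddings and a vector index instead of word counts.

```python
import math
from collections import Counter

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by bag-of-words similarity to the query, keep top k."""
    q = Counter(query.lower().split())
    return sorted(documents,
                  key=lambda d: cosine_similarity(q, Counter(d.lower().split())),
                  reverse=True)[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble retrieved context plus the question into a grounded prompt."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is located in Tel Aviv.",
    "Refund requests require an order number.",
]
prompt = build_prompt("how long do refunds take", docs)
```

Grounding the answer in retrieved context rather than the model's parametric memory is also what makes the evaluation and guardrail work above tractable: you can check the output against the exact documents that were supplied.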
Requirements:
5+ years of strong backend and server-side development experience, building complex, highly scalable systems.
Proven experience with at least one general-purpose language (preferably Python, but not a must).
Strong product management skills, with the ability to gather and refine requirements from non-technical internal users.
A strong sense of ownership, with some DevOps experience and a willingness to develop, deploy, and run projects end-to-end.
Strong familiarity with AI coding tools like Copilot, Cursor, or similar.
Experience working with public clouds (AWS / GCP / Azure).
Fluent in written and spoken English.
It'd be really cool if you also:
Are familiar with agentic coding tools like Claude Code or Copilot CLI.
Have familiarity with Strands Agents (or similar agentic technologies), RAGs, and Bedrock.
Have experience with MCP (Model Context Protocol).
Are comfortable in a containerized environment.
This position is open to all candidates.
 
Job #8588943