Jobs » Software » Applied Data Scientist

3 days ago
Confidential Company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an Applied Data Scientist to join one of our product squads. You'll design, build, and deploy data-driven solutions that combine machine learning, statistical methods, and SQL/rules-based decision logic to power an autonomous supply chain intelligence platform. You'll work closely with data science, engineering, product, and supply chain experts and own solutions end-to-end, from problem definition to production monitoring and iteration.
Responsibilities:
Deliver data science solutions end-to-end within a product squad: problem framing → data prep/labeling → modeling → deployment support → monitoring → iteration
Build, train, and improve ML models for supply chain use cases (e.g., inventory risk prediction, demand anomalies, root-cause analysis)
Define success metrics and evaluation plans with support from senior DS/PM; run error analysis and document learnings
Work with stakeholders to create and maintain ground truth (label definitions, labeling workflows, QA checks, feedback loops)
Implement hybrid decision logic by combining ML outputs with statistical methods and SQL/rules-based logic for robustness and explainability
Analyze large, multi-source operational datasets to identify trends, anomalies, and drivers of performance
Collaborate with software engineers to productionize solutions (batch and/or real-time), including testing, logging, and basic monitoring
Monitor deployed models/rules, investigate performance issues (data quality, drift, edge cases), and iterate based on outcomes
Contribute to team practices: reproducible notebooks/code, documentation, and experiment tracking
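The drift monitoring mentioned above is often done with a population stability index (PSI) over model score distributions. A minimal stdlib-only sketch; the bin count and the usual PSI thresholds are common conventions, not something this posting specifies:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two score samples.

    Common heuristic: PSI < 0.1 stable, 0.1-0.25 moderate drift,
    > 0.25 significant drift (thresholds are conventions, not rules).
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(sample, b):
        left = lo + b * width
        right = lo + (b + 1) * width
        if b == bins - 1:
            n = sum(left <= x for x in sample)  # last bin includes the max
        else:
            n = sum(left <= x < right for x in sample)
        return max(n / len(sample), 1e-6)      # floor avoids log(0)

    return sum((frac(actual, b) - frac(expected, b))
               * math.log(frac(actual, b) / frac(expected, b))
               for b in range(bins))

# Identical distributions show (near) zero drift; a shifted one does not
baseline = [i / 100 for i in range(100)]
assert psi(baseline, baseline) < 0.01
assert psi(baseline, [x + 0.5 for x in baseline]) > 0.25
```

In practice `expected` would be the training-time score distribution and `actual` a recent production window, with the check scheduled alongside the model's monitoring job.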
Requirements:
MSc in Computer Science, Data Science, Mathematics, Statistics, or Engineering (or equivalent practical experience)
3+ years of experience in applied data science / ML in a product environment (or equivalent practical experience)
Strong Python skills and experience with common DS libraries (pandas, NumPy, scikit-learn); familiarity with PyTorch/TensorFlow is a plus
Solid SQL skills (joins, aggregations, window functions) and comfort working with production data in a warehouse/lake
Experience building predictive or anomaly detection models and performing rigorous evaluation (baselines, cross-validation where relevant, error analysis)
Ability to translate business questions into measurable metrics and a clear analytical plan (with guidance when needed)
Experience working with messy real-world data: data validation, debugging pipelines, and collaborating on labeling/ground truth
Familiarity with taking models to production: packaging/hand-off to engineers, versioning, and understanding monitoring/drift concepts
Strong communication and collaboration skills with engineering, product, and domain experts; comfortable receiving feedback and iterating fast
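The evaluation rigor asked for above (baselines, cross-validation where relevant, error analysis) can be sketched in stdlib-only Python; the toy data, the majority-class baseline, and the threshold model are all illustrative assumptions:

```python
import random

def kfold_accuracy(xs, ys, fit, k=5, seed=0):
    """Average held-out accuracy of a model over k folds."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        scores.append(sum(model(xs[i]) == ys[i] for i in fold) / len(fold))
    return sum(scores) / k

# Toy 1-D data: label is 1 when x > 0.5
rng = random.Random(1)
xs = [rng.random() for _ in range(200)]
ys = [int(x > 0.5) for x in xs]

def fit_majority(xs_tr, ys_tr):
    """Trivial baseline: always predict the majority class."""
    majority = int(sum(ys_tr) * 2 >= len(ys_tr))
    return lambda x: majority

def fit_threshold(xs_tr, ys_tr):
    """'Learn' a threshold as the midpoint between class means."""
    m0 = [x for x, y in zip(xs_tr, ys_tr) if y == 0]
    m1 = [x for x, y in zip(xs_tr, ys_tr) if y == 1]
    t = (sum(m0) / len(m0) + sum(m1) / len(m1)) / 2
    return lambda x, t=t: int(x > t)

# A model should clearly beat the trivial baseline before shipping
assert kfold_accuracy(xs, ys, fit_threshold) > kfold_accuracy(xs, ys, fit_majority)
```

The same harness shape applies with real libraries (e.g. scikit-learn's cross-validation utilities); the point is the comparison against a baseline, not the specific model.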
Nice to Have (Advantages)
Experience designing or deploying agentic workflows, AI agents, or multi-step decision systems
Cloud + Docker + production engineering practices (CI/CD, testing, monitoring)
Experience publishing academic or applied research (peer-reviewed papers, conference publications, technical whitepapers, or open research work)
This position is open to all candidates.
 
30/03/2026
Confidential Company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Required ML Data Engineer
Israel: Tel Aviv/ Hybrid (Israel)
R&D | Full Time | Job Id: 24792
Key Responsibilities
Your Impact & Responsibilities:
As a Data Engineer - AI Technologies, you will be responsible for building and operating the data foundation that enables our LLM and ML research: from ingestion and augmentation, through labeling and quality control, to efficient data delivery for training and evaluation.
You will:
Own data pipelines for LLM training and evaluation
Design, build and maintain scalable pipelines to ingest, transform and serve large-scale text, log, code and semi-structured data from multiple products and internal systems.
Drive data augmentation and synthetic data generation
Implement and operate pipelines for data augmentation (e.g., prompt-based generation, paraphrasing, negative sampling, multi-positive pairs) in close collaboration with ML Research Engineers.
Build tagging, labeling and annotation workflows
Support human-in-the-loop labeling, active learning loops and semi-automated tagging. Work with domain experts to implement tools, schemas and processes for consistent, high-quality annotations.
Ensure data quality, observability and governance
Define and monitor data quality checks (coverage, drift, anomalies, duplicates, PII), manage dataset versions, and maintain clear documentation and lineage for training and evaluation datasets.
Optimize training data flows for efficiency and cost
Design storage layouts and access patterns that reduce training time and cost (e.g., sharding, caching, streaming). Work with ML engineers to make sure the right data arrives at the right place, in the right format.
Build and maintain data infrastructure for LLM workloads
Work with cloud and platform teams to develop robust, production-grade infrastructure: data lakes / warehouses, feature stores, vector stores, and high-throughput data services used by training jobs and offline evaluation.
Collaborate closely with ML Research Engineers and security experts
Translate modeling and security requirements into concrete data tasks: dataset design, splits, sampling strategies, and evaluation data construction for specific security use.
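The de-duplication and quality-control work described above often starts with exact dedup over a normalized form of each record. A small sketch; `normalize` and its rules are hypothetical choices, not from the posting:

```python
import hashlib

def normalize(text):
    """Cheap canonical form: lowercase and collapse whitespace."""
    return " ".join(text.lower().split())

def dedup(records):
    """Drop exact duplicates (after normalization), keeping first occurrence."""
    seen, out = set(), []
    for r in records:
        h = hashlib.sha256(normalize(r).encode()).hexdigest()
        if h not in seen:
            seen.add(h)
            out.append(r)
    return out

docs = ["Alert: login failed", "alert:  login  failed", "Alert: login ok"]
assert dedup(docs) == ["Alert: login failed", "Alert: login ok"]
```

At the scale this role describes, the same idea is usually run distributed (hash per record, dedup by key in Spark or similar), with near-duplicate detection (e.g. MinHash) layered on top when exact matching is not enough.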
Requirements:
3+ years of hands-on experience as a Data Engineer or ML/Data Engineer, ideally in a product or platform team.
Strong programming skills in Python and experience with at least one additional language commonly used for data / backend (e.g., SQL, Scala, or Java).
Solid experience building ETL / ELT pipelines and batch/stream processing using tools such as Spark, Beam, Flink, Kafka, Airflow, Argo, or similar.
Experience working with cloud data platforms (e.g., AWS, GCP, Azure) and modern data storage technologies (object stores, data warehouses, data lakes).
Good understanding of data modeling, schema design, partitioning strategies and performance optimization for large datasets.
Familiarity with ML / LLM workflows: train/validation/test splits, dataset versioning, and the basics of model training and evaluation (you don't need to be the primary model researcher, but you understand what the models need from the data).
Strong software engineering practices: version control, code review, testing, CI/CD, and documentation.

Ability to work independently and in collaboration with ML engineers, researchers and security experts, and to translate high-level requirements into concrete data engineering tasks. 
Nice to Have 
Experience supporting LLM or NLP workloads, including dataset construction for pre-training / fine-tuning, or retrieval-augmented generation (RAG) pipelines. 
Familiarity with ML tooling such as experiment tracking (e.g., Weights & Biases, MLflow) and ML-focused data tooling (feature stores, vector databases). 
Background in security / cyber domains (logs, alerts, incidents, SOC workflows) or other high-volume, high-variance data environments. 
This position is open to all candidates.
 
24/03/2026
Confidential Company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Data Scientist
About the Role
The Data Science department plays a pivotal role in our company, generating value by developing algorithms and production-grade analytical solutions. We leverage advanced techniques and algorithms to extract maximum value from data in all shapes and sizes (classification models, NLP, anomaly detection, graph theory, deep learning, and more). As a Data Scientist, you will take on the classic data-science role of end-to-end project development and implementation. Being part of the team requires a mix of strong quantitative and analytical skills, a solid background in statistical modeling and machine learning, and a technical, data-savvy nature, along with a passion for problem-solving and a desire to drive data-driven decision-making.
What You'll Be Doing
Data Exploration and Preprocessing: Collect, clean, and transform large, complex data sets from various sources to ensure data quality and integrity for analysis
Statistical Analysis and Modeling: Apply statistical methods and mathematical models to identify patterns, trends, and relationships in data sets, and develop predictive models
Machine Learning: Develop and implement machine learning algorithms, such as classification, regression, clustering, and deep learning, to solve business problems and improve processes
Feature Engineering: Extract relevant features from structured and unstructured data sources, and design and engineer new features to enhance model performance
Model Development and Evaluation: Build, train, and optimize machine learning models using state-of-the-art techniques, and evaluate model performance using appropriate metrics
Data Visualization: Present complex analysis results in a clear and concise manner using data visualization techniques, and communicate insights to stakeholders effectively
Collaborative Problem-Solving: Collaborate with cross-functional teams, including product managers, data engineers, software developers, and business stakeholders to identify data-driven solutions and implement them in production environments
Research and Innovation: Stay up to date with the latest advancements in data science, machine learning, and related fields, and proactively explore new approaches to enhance the company's analytical capabilities.
Requirements:
B.Sc (M.Sc is a plus) in Computer Science, Mathematics, Statistics, or a related field
3+ years of proven experience designing and implementing machine learning algorithms and successfully deploying them to production.
Strong understanding and practical experience with various machine learning algorithms.
Proficiency in Python and experience with SQL and data manipulation tools (e.g., Pandas, NumPy) to extract, clean, and transform data for analysis
Solid foundation in statistical concepts and techniques, including hypothesis testing, regression analysis, time series analysis, and experimental design
Strong analytical and critical thinking skills to approach business problems, formulate hypotheses, and translate them into actionable solutions
Proficiency in data visualization libraries, to create meaningful visual representations of complex data
Excellent written and verbal communication skills to present complex findings and technical concepts to both technical and non-technical stakeholders
Demonstrated ability to work effectively in cross-functional teams, collaborate with colleagues, and contribute to a positive work environment
Advantages:
Experience in the fraud domain
Experience with Airflow, CircleCI, PySpark, Docker and K8S.
This position is open to all candidates.
 
7 days ago
Confidential Company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Data Analyst with strong Python and engineering capabilities to join our Data team.

This is not a traditional analytics role - it combines data analysis with hands-on development. You will work extensively with Python as part of your day-to-day work, building scalable data processes, evaluation frameworks, and classification logic that directly impacts our product.

Our team works on large-scale data systems, leveraging ML capabilities such as NER and LLMs, alongside robust engineering and analytics to deliver insights across millions of files and tables daily.


Responsibilities
Design and implement data classification logic to expand coverage of sensitive data types, owning the process end-to-end - from research and analysis to production deployment
Develop Python-based workflows and tools for data processing, evaluation, and automation
Build and maintain data pipelines and monitoring systems to track product performance and data quality
Create automated frameworks to evaluate and benchmark model performance
Collaborate closely with R&D and take part in discussions around AI/ML solutions and product direction
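The evaluation frameworks mentioned above typically score candidate classification rules on a shared labeled sample. A minimal sketch; the "sensitive-data detector" and the labeled examples are hypothetical:

```python
def precision_recall(y_true, y_pred, positive=1):
    """Precision and recall for a binary classifier's predictions."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def benchmark(classifiers, labeled):
    """Score each candidate rule on the same labeled sample."""
    y_true = [y for _, y in labeled]
    return {name: precision_recall(y_true, [rule(x) for x, _ in labeled])
            for name, rule in classifiers.items()}

# Hypothetical sensitive-data detector on a tiny labeled set
labeled = [("ssn: 123-45-6789", 1), ("order #42", 0), ("card 4111...", 1)]
results = benchmark(
    {"contains_digits": lambda s: int(any(c.isdigit() for c in s))},
    labeled,
)
assert results["contains_digits"] == (2 / 3, 1.0)
```

Running every candidate rule against the same frozen labeled set is what makes coverage expansions comparable release over release.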
Requirements:
2+ years of experience in a data-focused role within a B2B SaaS company
Strong hands-on experience with Python - writing code as part of daily work
Experience with SQL and working with large-scale data
Experience building data pipelines / ETL processes (Airflow, Dagster, or similar)
Familiarity with Spark or distributed data processing - an advantage
Understanding of software development best practices (Git, CI/CD, testing)
Degree in a quantitative field (Computer Science, Engineering, Mathematics, Statistics)
This position is open to all candidates.
 
24/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Data Analyst to join the Data for AI team. This is a hands-on, customer-facing role focused on working with leading AI companies to turn real-world data into inputs that support model development and evaluation.
You'll collaborate closely with external AI teams and internal engineering and product partners to deliver data-driven solutions for specific AI use cases. The work is fast-paced, technical, and often open-ended, requiring comfort with large datasets, ambiguous requirements, and end-to-end ownership.
What does the day-to-day look like:
Own end-to-end delivery of data solutions for AI use cases, from understanding model and product requirements to analysis, implementation, quality, and automation
Work hands-on with large, raw datasets to create high-quality data inputs that support model training, evaluation, and iteration
Apply strong quantitative analysis and data exploration skills to assess coverage, quality, and behavior of data used in AI systems
Build scripts, analyses, and reusable components in Python and SQL to support scalable and repeatable workflows
Collaborate closely with Engineering to ensure solutions are reliable, scalable, and production-ready
Partner directly with external AI teams and internal stakeholders to translate open-ended questions into concrete data outputs.
Requirements:
4+ years of hands-on experience working with large-scale data using SQL and Spark or BigQuery
Strong Python skills for data analysis, scripting, and building reusable workflows
Experience working with raw, imperfect data and turning it into reliable, high-quality outputs
Strong analytical and problem-solving skills, with the ability to break down open-ended or ambiguous requirements
Ability to take end-to-end ownership of data projects, from exploration to delivery
Some hands-on experience with LLM-based systems, such as running inference via APIs, experimenting with prompts, or participating in basic evaluation or testing workflows
Clear communication skills in English and experience working directly with external stakeholders
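The "basic evaluation or testing workflows" above often take the shape of a prompt regression suite. A sketch with a stub standing in for a hosted model (no real API is called; in practice `model` would wrap an API client):

```python
def run_eval(model, cases):
    """Run a prompt suite; `model` is any callable prompt -> answer.
    A case passes when the expected substring appears in the answer."""
    failures = [(prompt, model(prompt), want)
                for prompt, want in cases
                if want not in model(prompt)]
    return 1 - len(failures) / len(cases), failures

# Stub standing in for an LLM endpoint (assumption for illustration)
def stub_model(prompt):
    if "France" in prompt:
        return "The capital of France is Paris."
    return "I don't know."

cases = [("What is the capital of France?", "Paris"),
         ("What is the capital of Atlantis?", "don't know")]
pass_rate, failures = run_eval(stub_model, cases)
assert pass_rate == 1.0 and failures == []
```

Substring checks are the crudest scoring rule; real suites usually add exact-match, rubric, or model-graded scoring, but the harness shape stays the same.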
Nice to have:
Deeper hands-on experience with LLMs in production or experimentation, for example prompt engineering, batch inference, or structured evaluation using APIs such as OpenAI, Anthropic, or similar providers
Familiarity with agent frameworks or orchestration layers (for example LangChain, LlamaIndex)
Experience with LLM evaluation or monitoring workflows, including offline evals, prompt regression testing, or tools such as LangSmith, Weights & Biases, TruLens, or Ragas
Experience experimenting with open-source or local models (for example via Ollama, vLLM, or Hugging Face tooling)
Familiarity with cloud-based data infrastructure, including AWS.
This position is open to all candidates.
 
01/04/2026
Confidential Company
Location: Tel Aviv-Yafo
Job Type: Full Time
In this role, you will design, develop, and implement machine learning models and algorithms to solve complex problems. You will work closely with data scientists, software engineers, and product teams to enhance services through innovative AI/ML solutions. Your role will involve building scalable ML pipelines, ensuring data quality, and deploying models into production environments to drive business insights and improve customer experiences.
Job Description:
Essential Responsibilities:
Develop and optimize machine learning models for various applications.
Preprocess and analyze large datasets to extract meaningful insights.
Deploy ML solutions into production environments using appropriate tools and frameworks.
Collaborate with cross-functional teams to integrate ML models into products and services.
Monitor and evaluate the performance of deployed models.
Requirements:
Minimum Qualifications:
3+ years of relevant experience and a Bachelor's degree, or any equivalent combination of education and experience.
Experience with ML frameworks like TensorFlow, PyTorch, or scikit-learn.
Familiarity with cloud platforms (AWS, Azure, GCP) and tools for data processing and model deployment.
Several years of experience in designing, implementing, and deploying machine learning models.
Additional Responsibilities And Preferred Qualifications
Deep expertise in Machine Learning & Statistics: Strong foundations in statistical modeling, supervised/unsupervised learning, model validation, experimentation, and performance evaluation.
End-to-end ML model development experience: Proven ability to design, research, build, validate, and deploy production-grade ML models, including monitoring and lifecycle management.
NLP & LLM proficiency: Hands-on experience developing and fine-tuning NLP models and Large Language Models (LLMs), including prompt engineering, retrieval-augmented generation (RAG), and model optimization.
This position is open to all candidates.
 
30/03/2026
Confidential Company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Required Senior ML Research Engineer
Israel: Tel Aviv/ Hybrid
R&D | Full Time | Job Id: 24793
Your Impact & Responsibilities:
As a Senior ML Research Engineer, you will be responsible for the end-to-end lifecycle of large language models: from data definition and curation, through training and evaluation, to providing robust models that can be consumed by product and platform teams.
Own training and fine-tuning of LLMs / seq2seq models: Design and execute training pipelines for transformer-based models (encoder-decoder, decoder-only, retrieval-augmented, etc.), and fine-tune open-source LLMs on domain-specific data (security content, logs, incidents, customer interactions).
Apply advanced LLM training techniques such as instruction tuning, preference / contrastive learning, LoRA / PEFT, continual pre-training, and domain adaptation where appropriate.
Work deeply with data: define data strategies with product, research and domain experts; build and maintain data pipelines for collecting, cleaning, de-duplicating and labeling large-scale text, code and semi-structured data; and design synthetic data generation and augmentation pipelines.
Build robust evaluation and experimentation frameworks: define offline metrics for LLM quality (task-specific accuracy, calibration, hallucination rate, safety, latency and cost); implement automated evaluation suites (benchmarks, regression tests, red-teaming scenarios); and track model performance over time.
Scale training and inference: use distributed training frameworks (e.g. DeepSpeed, FSDP, tensor/pipeline parallelism) to efficiently train models on multi-GPU / multi-node clusters, and optimize inference performance and cost with techniques such as quantization, distillation and caching.
Collaborate closely with security researchers and data engineers to turn domain knowledge and threat intelligence into high-value training and evaluation data, and to expose your models through well-defined interfaces to downstream product and platform teams.
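The LoRA / PEFT techniques named above reduce to a low-rank update on a frozen weight matrix: y = W x + (alpha/r) · B(A x), where only A and B are trained. A toy pure-Python forward pass; the shapes, alpha, and numbers are illustrative only:

```python
def matvec(M, v):
    """Plain matrix-vector product over nested lists."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def lora_forward(x, W, A, B, alpha=1.0):
    """y = W x + (alpha/r) * B (A x): frozen base weight W plus a
    rank-r update B @ A, which is the only part trained in fine-tuning."""
    r = len(A)                       # rank = number of rows in A
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))  # low-rank path: d_out x r times r x d_in
    return [b + (alpha / r) * d for b, d in zip(base, delta)]

# Rank-1 update on a 2x2 identity layer
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]        # 1 x 2
B = [[0.5], [0.25]]     # 2 x 1
y = lora_forward([2.0, 3.0], W, A, B)
assert y == [4.5, 4.25]  # A x = [5.0], so delta = [2.5, 1.25]
```

Because the update is rank r with r far below the layer width, the trainable parameter count drops from d_out·d_in to r·(d_out + d_in), which is why the technique scales to fine-tuning large models on modest hardware.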
Requirements:
5+ years of hands-on work in machine learning / deep learning, including 3+ years focused on NLP / language models.
Proven track record of training and fine-tuning transformer-based models (BERT-style, encoder-decoder, or LLMs), not just consuming hosted APIs.
Strong programming skills in Python and at least one major deep learning framework (PyTorch preferred; TensorFlow also relevant).
Solid understanding of transformer architectures, attention mechanisms, tokenization, positional encodings, and modern training techniques.
Experience building data pipelines and tools for large-scale text / log / code processing (e.g. Spark, Beam, Dask, or equivalent frameworks).
Practical experience with ML infrastructure, such as experiment tracking (Weights & Biases, MLflow or similar), job orchestration (Airflow, Argo, Kubeflow, SageMaker, etc.), and distributed training on multi-GPU systems.
Strong software engineering practices: version control, code review, testing, CI/CD, and documentation.
Ability to own research and engineering projects end-to-end: from idea, through prototype and controlled experiments, to models ready for integration by product and platform teams.
Good communication skills and the ability to work closely with non-ML stakeholders (security experts, product managers, engineers).
Nice to have:
Experience with RLHF / preference optimization, safety alignment, or other human-feedback-in-the-loop approaches to training LLMs.
Experience with retrieval-augmented generation (RAG), dense retrieval, vector databases, and embedding training.
Background in security / cyber domains such as threat detection, malware analysis, logs, or SOC tools.
Experience with multilingual models (e.g., Hebrew + English) and cross-lingual training.
Experience in a product environment where models must meet reliability, scale, and cost constraints.
This position is open to all candidates.
 
Confidential Company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Senior Data Scientist to take full ownership of high-impact ML problems within our DSP platform.
You will work on core bidding, budget optimization, and prediction systems that operate at massive scale and strict latency constraints. This role combines deep modeling expertise, experimentation rigor, and strong product intuition.
You will be expected not only to build models - but to define problems, challenge assumptions, and drive measurable business impact.
What You'll Do:
Own end-to-end ML projects: problem definition → research → modeling → offline validation → production deployment → online A/B testing → impact analysis.
Develop and improve models for: bid optimization, conversion rate / pLTV prediction, budget pacing and allocation, and auction dynamics & win-rate modeling.
Analyze large-scale, high-dimensional auction and user-level datasets to extract actionable insights.
Design robust feature engineering pipelines across behavioral, contextual, and advertiser-level signals.
Improve model performance under real-time constraints (low latency, high throughput).
Lead experimentation design and statistical validation of online tests.
Collaborate closely with engineering and product to translate research into scalable production systems.
Requirements:
4+ years of hands-on experience in Data Science / Machine Learning in production environments.
Proven track record of shipping ML models that created measurable business impact.
Experience in real-time systems, online experimentation, or large-scale optimization problems.
Technical Skills:
Strong Python skills with ML stack (NumPy, Pandas, Scikit-learn, PyTorch/TensorFlow).
Advanced SQL and experience working with large-scale datasets.
Deep understanding of:
Supervised learning (classification/regression)
Model evaluation & calibration
Feature engineering at scale
Hyperparameter tuning & regularization
Strong statistical foundation and experimental design knowledge.
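Model calibration, listed above, is usually checked by bucketing predictions by confidence and comparing each bucket's mean predicted probability with its observed positive rate. A stdlib sketch on a toy, perfectly calibrated model (bin count and data are illustrative):

```python
def calibration_table(probs, labels, bins=5):
    """Per-bucket (mean predicted prob, observed positive rate, count).
    A well-calibrated model has the first two columns roughly equal."""
    table = []
    for b in range(bins):
        lo, hi = b / bins, (b + 1) / bins
        in_bin = [(p, y) for p, y in zip(probs, labels)
                  if lo <= p < hi or (b == bins - 1 and p == 1.0)]
        if in_bin:
            mean_p = sum(p for p, _ in in_bin) / len(in_bin)
            obs = sum(y for _, y in in_bin) / len(in_bin)
            table.append((round(mean_p, 3), round(obs, 3), len(in_bin)))
    return table

# Toy model: predictions of 0.2 come true 20% of the time, 0.8 -> 80%
probs = [0.2] * 10 + [0.8] * 10
labels = [1, 0, 0, 0, 0, 0, 0, 0, 0, 1] + [1] * 8 + [0] * 2
assert calibration_table(probs, labels) == [(0.2, 0.2, 10), (0.8, 0.8, 10)]
```

For bidding, calibration matters more than raw ranking quality: a pCTR that is systematically 10% high translates directly into overbidding.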
Advantage (Highly Preferred):
Experience in AdTech / DSP / RTB environments.
Knowledge of auction theory or bidding strategies.
Experience with large-scale distributed data systems (Spark, Airflow, etc.).
This position is open to all candidates.
 
09/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior ML Engineer - Applied AI Engineering Group
The Dream Job
It starts with you - an engineer driven to build the ML platform that turns research into reliable, production-grade intelligence. You care about reproducibility, low-friction experimentation, and infrastructure that earns the trust of the scientists and researchers who depend on it daily. You'll architect and ship our ML platform - training pipelines, model serving, feature stores, experiment tracking, and compute orchestration - turning models into production capabilities across cloud and on-prem, including air-gapped deployments. A significant part of the platform supports large language models, with unique challenges across training, evaluation, and inference in mission-critical environments.
If you want to make a meaningful impact, join our mission and build the ML platform that drives Sovereign AI products - this role is for you.
The Dream-Maker Responsibilities
Build and operate ML training infrastructure - distributed training pipelines, compute scheduling, and reproducible experiment workflows that data scientists rely on daily.
Own model serving and inference systems - packaging, deployment, autoscaling, A/B testing, canary rollouts, and latency/cost optimization for production models.
Run feature stores, model registries, and dataset versioning - enabling self-serve feature engineering, model lineage, and reproducible experiments across teams.
Build experiment tracking and evaluation infrastructure - automated evals, comparison dashboards, drift detection, and monitoring that give teams visibility into model behavior and performance.
Build and maintain production pipelines for training, fine-tuning workflows, and serving domain models - owning reliability, reproducibility, and scale.
Build and maintain the monitoring and observability layer - model performance tracking, data and prediction drift detection, data quality validation, and alerting.
Improve performance and cost across the ML stack - training throughput, inference latency, batch vs. real-time tradeoffs, and compute cost management.
Ship shared tooling - libraries, templates, CI/CD for models, IaC, and runbooks - while collaborating across Data Platform, AI, Data Science, Engineering, and DevOps. Own architecture, documentation, and operations end-to-end.
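The canary rollouts mentioned above are often implemented as deterministic hash-based traffic splitting, so a given request id always lands on the same variant. A sketch; the 5% share and md5 bucketing are illustrative choices:

```python
import hashlib

def route(request_id, canary_pct=5):
    """Send a fixed percentage of traffic to the canary model.
    Hash-based bucketing keeps each request id sticky across retries."""
    bucket = int(hashlib.md5(request_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_pct else "stable"

# Stickiness: the same id always routes to the same variant
assert route("req-123") == route("req-123")

# Share: roughly the configured 5% of ids land on the canary
share = sum(route(f"req-{i}") == "canary" for i in range(10000)) / 10000
assert 0.03 < share < 0.07
```

The same bucketing function doubles as the assignment rule for A/B tests, which is why serving platforms typically centralize it rather than letting each model reimplement it.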
Requirements:
5+ years in software engineering, with 2+ years focused on ML infrastructure, MLOps, or data-intensive systems
Engineering craft - Strong Python, distributed systems design, testing, secure coding, API design, CI/CD discipline, and production ownership.
ML platform & serving - Model serving frameworks (e.g., Triton, TorchServe, vLLM, Ray Serve); model packaging, deployment pipelines, and inference optimization
Training infrastructure - Distributed training pipelines (e.g., frameworks like PyTorch, JAX); experiment orchestration and reproducibility
ML lifecycle tooling - Feature stores, model registries, experiment tracking (e.g., MLflow, Weights & Biases); dataset versioning and lineage
Data pipelines - Building training and inference data pipelines; familiarity with tools like Spark, Airflow/Dagster, and streaming ingestion
Comfortable with AI coding tools like Cursor, Claude Code, or Copilot
Nice to Have:
Experience operating in constrained environments - on-premise, private cloud, or air-gapped deployments
Hands-on experience with simulation environments, synthetic data generation, or reinforcement learning workflows
Platform & infra - Kubernetes, AWS, Terraform or similar IaC, CI/CD, observability, incident response
Hands-on data science or applied ML experience.
This position is open to all candidates.
 
29/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We are always looking for exceptional talent to join us on the journey!
Your Mission:
As an MLOps Engineer, your mission is to design, build, and operate the platforms that power our machine learning and generative AI products, spanning real-time use cases such as large-scale fraud scoring and support for MCP & agentic workflows. You'll create reliable CI/CD for models and agents, robust data/feature pipelines, secure model serving, and comprehensive observability. You will also support our agentic AI ecosystem and Model Context Protocol (MCP) services so that models can safely use tools, data, and actions.
You will partner closely with Data Scientists, Data/Platform Engineers, Product, and SRE to ensure every model from classic ML to LLM/RAG agents moves from prototype to production with strong reliability, governance, cost efficiency, and measurable business impact.
Responsibilities:
Operate & Develop ML/LLM platforms on Kubernetes + cloud (Azure; AWS/GCP ok) with Docker, Terraform, and other relevant tools
Manage object storage, GPUs, and autoscaling for training & low-latency model serving
Manage cloud environment, networking, service mesh, secrets, and policies to meet PCI-DSS and data-residency requirements
Build end-to-end CI/CD for models/agents/MCP tooling (versioning, tests, approvals)
Deliver real-time fraud/risk scoring & agent signals under strict latency SLOs.
Maintain MCP servers/clients: tool/resource definitions, versioning, quotas, isolation, access controls
Integrate agents with microservices, event streams, and rule engines; provide SLAs, tracing, and on-call runbooks
Measure operational metrics of ML/LLM (latency, throughput, cost, tokens, tool success, safety events)
Enforce governance: RBAC/ABAC, row-level security, encryption, PII/secrets management, audit trails
Partner with DS on packaging (wheels/conda/containers), feature contracts, and reproducible experiments
Lead incident response and post-mortems
Drive FinOps: right-sizing, GPU utilization, batching/caching, budget alerts
Requirements:
4+ years in DevOps/MLOps/Platform roles building and operating production ML systems (batch and real-time)
Strong hands-on with Kubernetes, Docker, Terraform/IaC, and CI/CD
Practical experience with Spark/Databricks and scalable data processing
Proficiency in Python & Bash
Ability to productionize data science code and optimize its runtime performance
Experience with model registries (MLflow or similar), experiment tracking, and artifact management.
Production model serving using FastAPI/Ray Serve/Triton/TorchServe, including autoscaling and rollout strategies
Monitoring and tracing with Prometheus/Grafana/OpenTelemetry; alerting tied to SLOs/SLAs
Solid understanding of PCI-DSS/GDPR considerations for data and ML systems
Experience with the Azure cloud environment is a big plus
Operating LLM/agent workloads in production (prompt/config versioning, tool execution reliability, fallback/retry policies)
Building/maintaining RAG stacks (indexing pipelines, vector DBs, retrieval evaluation, hybrid search)
Implementing guardrails (policy checks, content filters, allow/deny lists) and human-in-the-loop workflows
Experience with feature stores (e.g., Qwak Feature Store, Feast)
A/B testing for models and agents, offline/online evaluation frameworks
Payments/fraud/risk domain experience; integrating ML outputs with rule engines and operational systems - Advantage
Familiarity with Databricks Unity Catalog, dbt, or similar tooling.
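One of the requirements above mentions fallback/retry policies for keeping agent tool execution reliable. As a minimal, stdlib-only sketch of that pattern (all function names here are hypothetical, not part of any specific product in the listing): retry a primary tool call with exponential backoff, then switch to a fallback.

```python
import time

def call_with_retry(primary, fallback, retries=3, backoff=0.01):
    """Try `primary` up to `retries` times with exponential backoff,
    then fall back to `fallback`. Both are zero-arg callables."""
    delay = backoff
    for _ in range(retries):
        try:
            return primary()
        except Exception:
            time.sleep(delay)
            delay *= 2  # exponential backoff between attempts
    return fallback()

# Example: a flaky tool that succeeds on its third call.
calls = {"n": 0}
def flaky_tool():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "primary-ok"

result = call_with_retry(flaky_tool, lambda: "fallback-ok")
```

In production this would typically also distinguish retryable from fatal errors and emit metrics on each fallback, but the control flow is the same.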
This position is open to all candidates.
 
Job ID: 8595031
23 hours ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a hands-on Applied AI Scientist to join our core R&D team and drive the development of next-generation AI systems for autonomous driving. This role sits at the intersection of applied research and deployment - you will go from reading papers to shipping production systems. You will work directly on our multi-layered autonomy architecture, with a primary focus on real-time predictive models for driving decisions.
A deep technical role for someone who thrives on turning cutting-edge research into real, working systems under hard constraints.
Responsibilities:
Own the research-to-deployment cycle for predictive driving models - from literature review and prototyping through to production integration
Design, implement, and iterate on real-time predictive models, including vision-language models, motion prediction models, and inverse reinforcement learning approaches (e.g., imitation learning, reward recovery)
Collaborate on higher-level reasoning systems, contributing to vision-language-action models that handle complex edge cases and long-horizon planning
Bridge cloud-scale training with edge deployment - work on model compression, quantization, speculative decoding, and efficient inference for embedded automotive platforms
Evaluate and integrate state-of-the-art techniques from the broader AI research community into our autonomy stack
Collaborate closely with internal R&D teams to unblock technical challenges, accelerate delivery, and raise the overall technical bar.
Requirements:
Ph.D. in Computer Science, Electrical Engineering, Machine Learning, Robotics, or a related field
Strong publication or deployment track record in one or more of: deep learning, computer vision, reinforcement learning, imitation learning, vision-language models, or motion prediction
Demonstrated ability to go from paper to working implementation - not just theory, but shipped systems
Strong coding skills in Python; experience with C++ is a plus
Familiarity with modern ML infrastructure: PyTorch, distributed training, model optimization
Solid mathematical foundations in probability, optimization, and statistics
Attributes:
Experience with CUDA or low-level GPU optimization
Hands-on work with model quantization, distillation, or efficient inference on edge devices
Background in real-time, safety-critical, or embodied AI systems (robotics, autonomous vehicles, drones, etc.)
Experience with small language models (SLMs) or on-device deployment of foundation models
Familiarity with driving datasets, simulation environments, or sensor fusion pipelines.
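The list above mentions model quantization for efficient inference on edge devices. As a rough illustration of the idea (a toy, stdlib-only sketch of symmetric int8 quantization, not the compression pipeline this role would actually use): floats are mapped to integers in [-127, 127] via a single scale factor, trading a small precision loss for a 4x memory reduction versus float32.

```python
def quantize_int8(values):
    """Symmetric int8 quantization: map floats to [-127, 127]
    using one per-tensor scale, a common scheme for model weights."""
    scale = max(abs(v) for v in values) / 127.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from int8 codes."""
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.0, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Real deployments refine this with per-channel scales, zero points for asymmetric ranges, and calibration data, but the round-trip above captures the core mechanism.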
This position is open to all candidates.
 
Job ID: 8614139