Jobs » Data » Data Operations Team Lead

26/03/2026
This position has been marked by the employer as no longer active.
Location: Tel Aviv-Yafo
Job Type: Full Time
Similar positions that may interest you:
30/03/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Required ML Data Engineer
Israel: Tel Aviv/ Hybrid (Israel)
R&D | Full Time | Job Id: 24792
Your Impact & Responsibilities:
As a Data Engineer - AI Technologies, you will be responsible for building and operating the data foundation that enables our LLM and ML research: from ingestion and augmentation, through labeling and quality control, to efficient data delivery for training and evaluation.
You will:
Own data pipelines for LLM training and evaluation
Design, build and maintain scalable pipelines to ingest, transform and serve large-scale text, log, code and semi-structured data from multiple products and internal systems.
Drive data augmentation and synthetic data generation
Implement and operate pipelines for data augmentation (e.g., prompt-based generation, paraphrasing, negative sampling, multi-positive pairs) in close collaboration with ML Research Engineers.
Build tagging, labeling and annotation workflows
Support human-in-the-loop labeling, active learning loops and semi-automated tagging. Work with domain experts to implement tools, schemas and processes for consistent, high-quality annotations.
Ensure data quality, observability and governance
Define and monitor data quality checks (coverage, drift, anomalies, duplicates, PII), manage dataset versions, and maintain clear documentation and lineage for training and evaluation datasets.
Optimize training data flows for efficiency and cost
Design storage layouts and access patterns that reduce training time and cost (e.g., sharding, caching, streaming). Work with ML engineers to make sure the right data arrives at the right place, in the right format.
Build and maintain data infrastructure for LLM workloads
Work with cloud and platform teams to develop robust, production-grade infrastructure: data lakes / warehouses, feature stores, vector stores, and high-throughput data services used by training jobs and offline evaluation.
Collaborate closely with ML Research Engineers and security experts
Translate modeling and security requirements into concrete data tasks: dataset design, splits, sampling strategies, and evaluation data construction for specific security use cases.
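The data-quality dimension above (duplicates, empty rows, PII) can be sketched as a small report function. This is a minimal illustration, not anything from the posting: the record set, the email regex, and the metric names are all assumptions, and a real pipeline would also cover coverage and drift.

```python
import re
from collections import Counter

# Illustrative dataset quality checks: duplicates, empty rows, and
# email-shaped PII. All names and patterns here are hypothetical.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def quality_report(records):
    """Compute simple quality metrics over a list of text records."""
    counts = Counter(records)
    return {
        "total": len(records),
        "duplicate_rows": sum(c - 1 for c in counts.values()),
        "empty_rows": sum(1 for r in records if not r.strip()),
        "rows_with_email_pii": sum(1 for r in records if EMAIL_RE.search(r)),
    }

report = quality_report([
    "user logged in from 10.0.0.1",
    "user logged in from 10.0.0.1",          # exact duplicate
    "contact alice@example.com for access",  # email-shaped PII
    "   ",                                   # empty row
])
```

In practice these counters would feed the monitoring and alerting layer rather than a dict.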
Requirements:
3+ years of hands-on experience as a Data Engineer or ML/Data Engineer, ideally in a product or platform team.
Strong programming skills in Python and experience with at least one additional language commonly used for data / backend (e.g., SQL, Scala, or Java).
Solid experience building ETL / ELT pipelines and batch/stream processing using tools such as Spark, Beam, Flink, Kafka, Airflow, Argo, or similar.
Experience working with cloud data platforms (e.g., AWS, GCP, Azure) and modern data storage technologies (object stores, data warehouses, data lakes).
Good understanding of data modeling, schema design, partitioning strategies and performance optimization for large datasets.
Familiarity with ML / LLM workflows: train/validation/test splits, dataset versioning, and the basics of model training and evaluation (you don't need to be the primary model researcher, but you understand what the models need from the data).
Strong software engineering practices: version control, code review, testing, CI/CD, and documentation.

Ability to work independently and in collaboration with ML engineers, researchers and security experts, and to translate high-level requirements into concrete data engineering tasks. 
Nice to Have 
Experience supporting LLM or NLP workloads, including dataset construction for pre-training / fine-tuning, or retrieval-augmented generation (RAG) pipelines. 
Familiarity with ML tooling such as experiment tracking (e.g., Weights & Biases, MLflow) and ML-focused data tooling (feature stores, vector databases). 
Background in security / cyber domains (logs, alerts, incidents, SOC workflows) or other high-volume, high-variance data environments. 
This position is open to all candidates.
 
6 days ago
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data & Machine Learning Engineer to operate at the intersection of data platform engineering and machine learning enablement. This role is responsible for building scalable, efficient, and reliable data systems while enabling Data Science and Analytics teams to develop and deploy ML-driven features.

You will take ownership of the data and ML infrastructure layer, ensuring that pipelines, storage models, and compute usage are optimized, while also shaping how data workflows and ML solutions are designed across the organization.


Responsibilities
Data Platform & Infrastructure

Design, build, and maintain scalable data pipelines and storage systems supporting analytics and ML use cases
Ensure compute and cost efficiency across pipelines, storage models, and processing workflows
Own and improve data orchestration, transformation, and serving layers (e.g., Spark, DBT, streaming/batch systems)
Build and maintain shared infrastructure components, including:
IO managers and data access abstractions
Integrations with DBT, Spark, and other data frameworks
Internal tooling to improve developer productivity and reliability
ML Enablement & Collaboration

Partner closely with Data Science to design and productionize ML solutions for new features and research initiatives
Translate experimental models into robust, scalable production systems
Support feature engineering, training pipelines, and inference workflows
Help define best practices for ML lifecycle management (training, validation, deployment, monitoring)
Data Quality, Governance & Best Practices

Enforce best practices for building and maintaining data processes across Data Analyst and Data Science teams
Define standards for:
Data modeling and transformations
Pipeline reliability and observability
Testing, versioning, and documentation
Improve data quality, consistency, and discoverability across the organization
Performance & Reliability

Optimize systems for performance, scalability, and cost efficiency
Monitor and troubleshoot data pipelines and ML systems in production
Implement observability (logging, metrics, alerting) across data workflows
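One way to read the observability bullet above: wrap each pipeline step so it emits logs and metrics uniformly. A minimal stdlib sketch under stated assumptions; the `METRICS` dict stands in for a real metrics backend, and every name is hypothetical:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

METRICS = {}  # stand-in for a real metrics backend (e.g., a TSDB push)

def observed(step_name):
    """Decorator that records row counts and runtime for a pipeline step."""
    def wrap(fn):
        def inner(rows):
            start = time.perf_counter()
            out = fn(rows)
            METRICS[step_name] = {
                "rows_in": len(rows),
                "rows_out": len(out),
                "seconds": time.perf_counter() - start,
            }
            log.info("%s: %d -> %d rows", step_name, len(rows), len(out))
            return out
        return inner
    return wrap

@observed("dedupe")
def dedupe(rows):
    # dict.fromkeys preserves first-seen order while dropping repeats
    return list(dict.fromkeys(rows))

result = dedupe(["a", "b", "a"])
```

The same wrapper pattern extends naturally to alerting: compare `METRICS` values against thresholds after each run.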
Requirements:
Strong programming skills in Python (or similar language)
Proven experience building and maintaining production-grade data pipelines
Hands-on experience with data processing frameworks (e.g., Spark or similar)
Familiarity with DBT or modern data transformation workflows
Experience working with cloud environments (AWS, GCP, or Azure)
Solid understanding of data modeling, distributed systems, and ETL/ELT patterns
This position is open to all candidates.
 
24/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Data Analyst to join the Data for AI team. This is a hands-on, customer-facing role focused on working with leading AI companies to turn real-world data into inputs that support model development and evaluation.
You'll collaborate closely with external AI teams and internal engineering and product partners to deliver data-driven solutions for specific AI use cases. The work is fast-paced, technical, and often open-ended, requiring comfort with large datasets, ambiguous requirements, and end-to-end ownership.
What the day-to-day looks like:
Own end-to-end delivery of data solutions for AI use cases, from understanding model and product requirements to analysis, implementation, quality, and automation
Work hands-on with large, raw datasets to create high-quality data inputs that support model training, evaluation, and iteration
Apply strong quantitative analysis and data exploration skills to assess coverage, quality, and behavior of data used in AI systems
Build scripts, analyses, and reusable components in Python and SQL to support scalable and repeatable workflows
Collaborate closely with Engineering to ensure solutions are reliable, scalable, and production-ready
Partner directly with external AI teams and internal stakeholders to translate open-ended questions into concrete data outputs.
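Assessing coverage of data used in AI systems, as the bullets above describe, often starts with a per-category sample count. A tiny illustrative sketch; the field name, records, and minimum-count threshold are assumptions:

```python
from collections import Counter

def coverage_gaps(records, field, min_count=2):
    """Return categories whose sample count falls below min_count."""
    counts = Counter(r[field] for r in records)
    return {k: v for k, v in counts.items() if v < min_count}

# Hypothetical raw records tagged by language
raw = [
    {"lang": "en"}, {"lang": "en"}, {"lang": "en"},
    {"lang": "he"}, {"lang": "he"},
    {"lang": "fr"},
]
gaps = coverage_gaps(raw, "lang", min_count=2)
```

Categories flagged in `gaps` would then be targeted for additional collection before evaluation.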
Requirements:
4+ years of hands-on experience working with large-scale data using SQL and Spark or BigQuery
Strong Python skills for data analysis, scripting, and building reusable workflows
Experience working with raw, imperfect data and turning it into reliable, high-quality outputs
Strong analytical and problem-solving skills, with the ability to break down open-ended or ambiguous requirements
Ability to take end-to-end ownership of data projects, from exploration to delivery
Some hands-on experience with LLM-based systems, such as running inference via APIs, experimenting with prompts, or participating in basic evaluation or testing workflows
Clear communication skills in English and experience working directly with external stakeholders
Nice to have:
Deeper hands-on experience with LLMs in production or experimentation, for example prompt engineering, batch inference, or structured evaluation using APIs such as OpenAI, Anthropic, or similar providers
Familiarity with agent frameworks or orchestration layers (for example LangChain, LlamaIndex)
Experience with LLM evaluation or monitoring workflows, including offline evals, prompt regression testing, or tools such as LangSmith, Weights & Biases, TruLens, or Ragas
Experience experimenting with open-source or local models (for example via Ollama, vLLM, or Hugging Face tooling)
Familiarity with cloud-based data infrastructure, including AWS.
This position is open to all candidates.
 
29/03/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Data & AI Project Manager to lead the delivery of AI and Generative AI-based solutions for our clients. In this role, you will own projects end-to-end - from initial scoping and requirements definition through development, deployment, and continuous improvement.
This role is well-suited for a technically oriented project or product manager who enjoys working at the intersection of data, AI, engineering, and business, and who thrives in fast-paced, complex environments.
Responsibilities:
Lead the full lifecycle of AI and data projects, from ideation and design through implementation, deployment, and ongoing optimization.
Translate business and user needs into clear system requirements, technical specifications, and structured delivery plans.
Define project scope, priorities, milestones, and deliverables, ensuring execution on time, within scope, and within budget.
Collaborate closely with development, data, and infrastructure teams to support efficient delivery and resolve technical dependencies.
Drive alignment between technical solutions and business objectives, ensuring delivered solutions generate tangible client value.
Manage multiple stakeholders on both client and internal sides, maintaining clear communication and expectations.
Support the design of intuitive, user-centered AI solutions and workflows.
Identify risks, manage trade-offs, and proactively address challenges throughout the project lifecycle.
Requirements:
Bachelor's degree (B.Sc.) in Computer Science, Engineering, Information Systems, Data Science, or a related field.
3+ years of experience in project or product management, with hands-on involvement in data, AI, or technology-driven initiatives.
Strong understanding of data architectures, data pipelines, analytics frameworks, and AI-driven systems.
Proven experience delivering AI or Generative AI solutions into production environments.
Ability to translate complex technical concepts into clear, structured communication for non-technical stakeholders.
Experience managing complex, multi-stakeholder projects with strong organizational and execution skills.
Analytical mindset with the ability to make data-driven decisions and assess trade-offs.
Excellent verbal and written communication skills in English.
High level of ownership, independence, and ability to perform in dynamic, fast-paced environments.
Nice to Have:
Background in AI, machine learning, or advanced analytics projects.
Experience working in consulting or client-facing delivery roles.
Familiarity with cloud platforms and modern AI tooling.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Realize your potential by joining the leading performance-driven advertising company!
As a Staff MLOps Engineer on the Infra group, you'll play a vital role in developing, enhancing, and maintaining highly scalable machine-learning infrastructure and tools.
How you'll make an impact:
As a Staff MLOps Engineer, you'll bring value by:
Develop, enhance and maintain highly scalable Machine-Learning infrastructures and tools, including CI/CD, monitoring and alerting and more
Have end to end ownership: Design, develop, deploy, measure and maintain our machine learning platform, ensuring high availability, high scalability and efficient resource utilization
Identify and evaluate new technologies to improve performance, maintainability, and reliability of our machine learning systems
Work in tandem with the engineering-focused and algorithm-focused teams in order to improve our platform and optimize performance
Optimize machine learning systems to scale and utilize modern compute environments (e.g. distributed clusters, CPU and GPU) and continuously seek potential optimization opportunities.
Build and maintain tools for automation, deployment, monitoring, and operations.
Troubleshoot issues in our development, production and test environments
Directly influence the way billions of people discover the internet.
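The monitoring-and-alerting idea in the responsibilities above can be made concrete with a drift check such as the population stability index (PSI). The two-bucket setup and the widely used 0.2 alert threshold are conventions assumed here, not details from the posting:

```python
import math
from collections import Counter

def psi(expected, actual, buckets):
    """Population stability index between two bucketed distributions."""
    e_counts, a_counts = Counter(expected), Counter(actual)
    e_n, a_n = len(expected), len(actual)
    score = 0.0
    for b in buckets:
        e_p = max(e_counts[b] / e_n, 1e-6)  # floor to avoid log(0)
        a_p = max(a_counts[b] / a_n, 1e-6)
        score += (a_p - e_p) * math.log(a_p / e_p)
    return score

baseline = ["low"] * 50 + ["high"] * 50
live_ok = ["low"] * 48 + ["high"] * 52   # small wobble, below threshold
live_bad = ["low"] * 10 + ["high"] * 90  # clear shift, should alert

drift_ok = psi(baseline, live_ok, ["low", "high"])
drift_bad = psi(baseline, live_bad, ["low", "high"])
```

A PSI above roughly 0.2 is conventionally treated as significant drift worth alerting on.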
Requirements:
To thrive in this role, youll need:
Experience developing large scale systems. Experience with filesystems, server architectures, distributed systems, SQL and No-SQL. Experience with Spark and Airflow / other orchestration platforms is a big plus.
Highly skilled in software engineering methods. 5+ years experience.
Passion for ML engineering and for creating and improving platforms
Experience with designing and supporting ML pipelines and models in production environment
Excellent coding skills - in Java & Python
Experience with TensorFlow - a big plus
Possess strong problem solving and critical thinking skills
BSc in Computer Science or related field.
Proven ability to work effectively and independently across multiple teams and beyond organizational boundaries
Deep understanding of strong Computer Science fundamentals: object-oriented design, data structures, systems and applications programming, and multithreaded programming
Strong communication skills to be able to present insights and ideas, and excellent English, required to communicate with our global teams.
Bonus points if you have:
Experience in leading Algorithms projects or teams.
Experience in developing models using deep learning techniques and tools
Experience in developing software within a distributed computation framework.
This position is open to all candidates.
 
7 days ago
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior ML Engineer - Applied AI Engineering Group
The Dream Job
It starts with you - an engineer driven to build the ML platform that turns research into reliable, production-grade intelligence. You care about reproducibility, low-friction experimentation, and infrastructure that earns the trust of the scientists and researchers who depend on it daily. You'll architect and ship our ML platform - training pipelines, model serving, feature stores, experiment tracking, and compute orchestration - turning models into production capabilities across cloud and on-prem, including air-gapped deployments. A significant part of the platform supports large language models, with unique challenges across training, evaluation, and inference in mission-critical environments.
If you want to make a meaningful impact, join our mission and build the ML platform that drives Sovereign AI products - this role is for you.
The Dream-Maker Responsibilities
Build and operate ML training infrastructure - distributed training pipelines, compute scheduling, and reproducible experiment workflows that data scientists rely on daily.
Own model serving and inference systems - packaging, deployment, autoscaling, A/B testing, canary rollouts, and latency/cost optimization for production models.
Run feature stores, model registries, and dataset versioning - enabling self-serve feature engineering, model lineage, and reproducible experiments across teams.
Build experiment tracking and evaluation infrastructure - automated evals, comparison dashboards, drift detection, and monitoring that give teams visibility into model behavior and performance.
Build and maintain production pipelines for training, fine-tuning workflows, and serving domain models - owning reliability, reproducibility, and scale.
Build and maintain the monitoring and observability layer - model performance tracking, data and prediction drift detection, data quality validation, and alerting.
Improve performance and cost across the ML stack - training throughput, inference latency, batch vs. real-time tradeoffs, and compute cost management.
Ship shared tooling - libraries, templates, CI/CD for models, IaC, and runbooks - while collaborating across Data Platform, AI, Data Science, Engineering, and DevOps. Own architecture, documentation, and operations end-to-end.
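The canary-rollout idea in the serving responsibilities above is often implemented as deterministic hash-based routing, so a given request id always hits the same variant. A minimal sketch; the 5% default, the variant names, and the id format are illustrative assumptions:

```python
import hashlib

def route(request_id: str, canary_fraction: float = 0.05) -> str:
    """Deterministically send a fixed fraction of traffic to the canary model."""
    digest = hashlib.sha256(request_id.encode()).digest()
    bucket = digest[0] / 255.0  # map first hash byte to [0, 1]
    return "canary" if bucket < canary_fraction else "stable"

# The same id always routes the same way, so a user never flip-flops
# between model variants mid-experiment.
counts = {"canary": 0, "stable": 0}
for i in range(1000):
    counts[route(f"req-{i}", canary_fraction=0.1)] += 1
```

Because routing depends only on the hash, raising `canary_fraction` grows the canary population monotonically without reshuffling users already assigned to it.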
Requirements:
5+ years in software engineering, with 2+ years focused on ML infrastructure, MLOps, or data-intensive systems
Engineering craft - Strong Python, distributed systems design, testing, secure coding, API design, CI/CD discipline, and production ownership.
ML platform & serving - Model serving frameworks (e.g., Triton, TorchServe, vLLM, Ray Serve); model packaging, deployment pipelines, and inference optimization
Training infrastructure - Distributed training pipelines (e.g., PyTorch, JAX); experiment orchestration and reproducibility
ML lifecycle tooling - Feature stores, model registries, experiment tracking (e.g., MLflow, Weights & Biases); dataset versioning and lineage
Data pipelines - Building training and inference data pipelines; familiarity with tools like Spark, Airflow/Dagster, and streaming ingestion
Comfortable with AI coding tools like Cursor, Claude Code, or Copilot
Nice to Have:
Experience operating in constrained environments - on-premise, private cloud, or air-gapped deployments
Hands-on experience with simulation environments, synthetic data generation, or reinforcement learning workflows
Platform & infra - Kubernetes, AWS, Terraform or similar IaC, CI/CD, observability, incident response
Hands-on data science or applied ML experience.
This position is open to all candidates.
 
7 days ago
Location: Tel Aviv-Yafo
Job Type: Full Time
Required ML Engineering Team Lead - Applied AI Engineering Group
The Dream Job
It starts with you - a technical leader driven to build both the ML platform and the engineering team behind it. You care about reliable infrastructure, great developer experience, and growing engineers through real ownership. You'll set the technical direction for our ML platform - training pipelines, model serving, feature stores, experiment tracking, and compute orchestration - shaping how models reach production across cloud and on-prem, including air-gapped deployments. A significant part of the platform supports large language models, with unique challenges across training, evaluation, and inference in mission-critical environments. You stay close enough to the codebase to debug production issues, unblock your engineers, and make sound architecture calls.
If you want to make a meaningful impact, join our mission and lead the team that builds the ML platform driving Sovereign AI products - this role is for you.
The Dream-Maker Responsibilities
Set technical direction for the ML platform - training pipelines, model serving, feature stores, experiment tracking, and compute orchestration - through RFCs, prototypes, design reviews, and build-vs-buy decisions
Lead and grow a team of ML Engineers - hire, mentor, pair on hard problems, and raise the bar through code and design reviews
Contribute to critical systems, debug production issues, and maintain deep context on the codebase to inform technical decisions
Own operational excellence for model serving - set and enforce SLAs, run capacity planning, and keep compute costs predictable
Establish ML engineering standards - reproducible experiments, automated evals, model packaging, CI/CD for models, and observability
Support the full lifecycle of our models - from training on domain-specific data to low-latency inference powering production systems
Work closely with Data Platform, AI, Data Science, and Product teams - translate business priorities into engineering work and manage cross-team dependencies
Measure and improve developer experience - deploy friction, onboarding time, CI turnaround - as seriously as model performance.
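The SLA bullet above usually comes down to error-budget arithmetic: a 99.9% monthly availability target allows about 43 minutes of downtime. A sketch under stated assumptions (the SLO value and 30-day window are illustrative, not from the posting):

```python
def error_budget_remaining(slo: float, total_minutes: int,
                           downtime_minutes: float) -> float:
    """Minutes of allowed downtime left in the window for a given SLO."""
    budget = (1 - slo) * total_minutes  # e.g., 0.1% of a 30-day month = 43.2 min
    return budget - downtime_minutes

# 30-day month, 20 minutes already burned: about 23.2 minutes left.
remaining = error_budget_remaining(slo=0.999,
                                   total_minutes=30 * 24 * 60,
                                   downtime_minutes=20)
```

When `remaining` approaches zero, teams typically freeze risky rollouts until the window resets.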
Requirements:
6+ years in software engineering, ML engineering, or platform engineering, with hands-on experience building and operating ML infrastructure at scale.
2+ years leading an engineering team - hiring, mentoring, conducting design reviews, and shipping alongside your team
Engineering craft - Strong Python, distributed systems design, testing, secure coding, API design, CI/CD discipline, and production ownership.
ML platform & serving - Model serving frameworks (e.g., Triton, TorchServe, vLLM, Ray Serve); model packaging, deployment pipelines, and inference optimization
Training infrastructure - Distributed training pipelines (e.g., PyTorch, JAX); experiment orchestration and reproducibility
ML lifecycle tooling - Feature stores, model registries, experiment tracking (e.g., MLflow, Weights & Biases); dataset versioning and lineage
Data pipelines - Building training and inference data pipelines; familiarity with tools like Spark, Airflow/Dagster, and streaming ingestion
Comfortable with AI coding tools like Cursor, Claude Code, or Copilot
Nice to Have:
Experience operating in constrained environments - on-premise, private cloud, or air-gapped deployments
Hands-on experience with simulation environments, synthetic data generation, or reinforcement learning workflows
Platform & infra - Kubernetes, AWS, Terraform or similar IaC, CI/CD, observability, incident response
Hands-on data science or applied ML experience.
This position is open to all candidates.
 
29/03/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We are looking for a Product Data Director to lead our global product data strategy and governance. In this role, you will own the vision, implementation, and continuous improvement of how data is captured, structured, and leveraged across our product ecosystem. You'll partner closely with Product, Engineering, BI, and Operations teams to ensure data excellence, enable data-driven decision-making, and unlock insights that enhance our products, customer experience, and business growth.
Responsibilities:
Develop and execute a comprehensive product data strategy aligned with our business goals.
Define and own data standards, taxonomy, and governance frameworks across all product lines.
Partner with Product, BI, and Engineering to design scalable data models and infrastructure supporting product analytics, reporting, and performance tracking.
Lead a cross-functional data excellence initiative, ensuring high-quality, accurate, and consistent data.
Work with product managers to embed data-driven decision-making in product planning, experimentation, and optimization processes.
Oversee the implementation of data instrumentation and tracking for new product launches.
Collaborate with the Analytics and Data Engineering teams to ensure data availability and reliability across systems.
Requirements:
8+ years of experience in data strategy, analytics, or product data management, ideally within fintech, SaaS, or payments.
Proven track record of building and scaling product data frameworks and governance models.
Strong understanding of data architecture, pipelines, and data quality management.
Experience working with data visualization tools (e.g., Tableau, Power BI, Looker) and SQL-based analysis.
Excellent stakeholder management and communication skills, with the ability to influence senior leaders.
Bachelor's or Master's degree in Data Science, Computer Science, Information Systems, or a related field.
This position is open to all candidates.
 
30/03/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Required Senior ML Research Engineer
Israel: Tel Aviv/ Hybrid
R&D | Full Time | Job Id: 24793
Your Impact & Responsibilities:
As a Senior ML Research Engineer, you will be responsible for the end-to-end lifecycle of large language models: from data definition and curation, through training and evaluation, to providing robust models that can be consumed by product and platform teams.
Own training and fine-tuning of LLMs / seq2seq models: Design and execute training pipelines for transformer-based models (encoder-decoder, decoder-only, retrieval-augmented, etc.), and fine-tune open-source LLMs on domain-specific data (security content, logs, incidents, customer interactions).
Apply advanced LLM training techniques such as instruction tuning, preference / contrastive learning, LoRA / PEFT, continual pre-training, and domain adaptation where appropriate.
Work deeply with data: define data strategies with product, research and domain experts; build and maintain data pipelines for collecting, cleaning, de-duplicating and labeling large-scale text, code and semi-structured data; and design synthetic data generation and augmentation pipelines.
Build robust evaluation and experimentation frameworks: define offline metrics for LLM quality (task-specific accuracy, calibration, hallucination rate, safety, latency and cost); implement automated evaluation suites (benchmarks, regression tests, red-teaming scenarios); and track model performance over time.
Scale training and inference: use distributed training frameworks (e.g. DeepSpeed, FSDP, tensor/pipeline parallelism) to efficiently train models on multi-GPU / multi-node clusters, and optimize inference performance and cost with techniques such as quantization, distillation and caching.
Collaborate closely with security researchers and data engineers to turn domain knowledge and threat intelligence into high-value training and evaluation data, and to expose your models through well-defined interfaces to downstream product and platform teams.
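The de-duplication step in the data responsibilities above can be illustrated with character-shingle Jaccard similarity. Shingle size, the 0.7 threshold, and the toy documents are assumptions, and a production pipeline would use MinHash/LSH rather than the O(n^2) scan shown:

```python
def shingles(text: str, k: int = 5) -> set:
    """Character k-grams over whitespace-normalized, lowercased text."""
    text = " ".join(text.lower().split())
    return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}

def jaccard(a: str, b: str) -> float:
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

def near_duplicates(docs, threshold=0.7):
    """Pairwise scan returning index pairs of near-duplicate documents."""
    pairs = []
    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            if jaccard(docs[i], docs[j]) >= threshold:
                pairs.append((i, j))
    return pairs

docs = [
    "Suspicious login detected from new device",
    "Suspicious login detected from a new device",
    "Quarterly report attached for review",
]
dupes = near_duplicates(docs)
```

Near-duplicate pairs like these are typically collapsed before training to avoid inflating the weight of repeated text.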
Requirements:
5+ years of hands-on work in machine learning / deep learning, including 3+ years focused on NLP / language models.
Proven track record of training and fine-tuning transformer-based models (BERT-style, encoder-decoder, or LLMs), not just consuming hosted APIs.
Strong programming skills in Python and at least one major deep learning framework (PyTorch preferred; TensorFlow also relevant).
Solid understanding of transformer architectures, attention mechanisms, tokenization, positional encodings, and modern training techniques.
Experience building data pipelines and tools for large-scale text / log / code processing (e.g. Spark, Beam, Dask, or equivalent frameworks).
Practical experience with ML infrastructure, such as experiment tracking (Weights & Biases, MLflow or similar), job orchestration (Airflow, Argo, Kubeflow, SageMaker, etc.), and distributed training on multi-GPU systems.
Strong software engineering practices: version control, code review, testing, CI/CD, and documentation.
Ability to own research and engineering projects end-to-end: from idea, through prototype and controlled experiments, to models ready for integration by product and platform teams.
Good communication skills and the ability to work closely with non-ML stakeholders (security experts, product managers, engineers).
Nice to have:
Experience with RLHF / preference optimization, safety alignment, or other human-feedback-in-the-loop approaches to training LLMs.
Experience with retrieval-augmented generation (RAG), dense retrieval, vector databases, and embedding training.
Background in security / cyber domains such as threat detection, malware analysis, logs, or SOC tools.
Experience with multilingual models (e.g., Hebrew + English) and cross-lingual training.
Experience in a product environment where models must meet reliability, scale, and cost constraints.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Realize your potential by joining the leading performance-driven advertising company!
As a Senior MLOps Engineer on the Infra group, you'll play a vital role in developing, enhancing, and maintaining highly scalable machine-learning infrastructure and tools.
How you'll make an impact:
As a Senior MLOps Engineer, you'll bring value by:
Develop, enhance and maintain highly scalable Machine-Learning infrastructures and tools, including CI/CD, monitoring and alerting and more
Have end to end ownership: Design, develop, deploy, measure and maintain our machine learning platform, ensuring high availability, high scalability and efficient resource utilization
Identify and evaluate new technologies to improve performance, maintainability, and reliability of our machine learning systems
Work in tandem with the engineering-focused and algorithm-focused teams in order to improve our platform and optimize performance
Optimize machine learning systems to scale and utilize modern compute environments (e.g. distributed clusters, CPU and GPU) and continuously seek potential optimization opportunities.
Build and maintain tools for automation, deployment, monitoring, and operations.
Troubleshoot issues in our development, production and test environments
Directly influence the way billions of people discover the internet.
Requirements:
To thrive in this role, youll need:
Experience developing large scale systems. Experience with filesystems, server architectures, distributed systems, SQL and No-SQL. Experience with Spark and Airflow / other orchestration platforms is a big plus.
Highly skilled in software engineering methods. 5+ years experience.
Passion for ML engineering and for creating and improving platforms
Experience with designing and supporting ML pipelines and models in production environment
Excellent coding skills - in Java & Python
Experience with TensorFlow - a big plus
Possess strong problem solving and critical thinking skills
BSc in Computer Science or related field.
Proven ability to work effectively and independently across multiple teams and beyond organizational boundaries
Deep understanding of strong Computer Science fundamentals: object-oriented design, data structures, systems and applications programming, and multithreaded programming
Strong communication skills to be able to present insights and ideas, and excellent English, required to communicate with our global teams.
Bonus points if you have:
Experience in leading Algorithms projects or teams.
Experience in developing models using deep learning techniques and tools
Experience in developing software within a distributed computation framework.
This position is open to all candidates.
 