ML Data Engineer 24792

Posted 20 hours ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Join our company's AI research group, a cross-functional team of ML engineers, researchers, and security experts building the next generation of AI-powered security capabilities. Our mission is to leverage large language models to understand code, configuration, and human language at scale, and to turn this understanding into security AI capabilities that will drive our company's future security solutions.
We foster a hands-on, research-driven culture where you'll work with large-scale data, modern ML infrastructure, and a global product footprint that impacts over 100,000 organizations worldwide.
Your Impact & Responsibilities
As a Data Engineer - AI Technologies, you will be responsible for building and operating the data foundation that enables our LLM and ML research: from ingestion and augmentation, through labeling and quality control, to efficient data delivery for training and evaluation.
You will:
Own data pipelines for LLM training and evaluation
Design, build and maintain scalable pipelines to ingest, transform and serve large-scale text, log, code and semi-structured data from multiple products and internal systems.
Drive data augmentation and synthetic data generation
Implement and operate pipelines for data augmentation (e.g., prompt-based generation, paraphrasing, negative sampling, multi-positive pairs) in close collaboration with ML Research Engineers.
Build tagging, labeling and annotation workflows
Support human-in-the-loop labeling, active learning loops and semi-automated tagging. Work with domain experts to implement tools, schemas and processes for consistent, high-quality annotations.
Ensure data quality, observability and governance
Define and monitor data quality checks (coverage, drift, anomalies, duplicates, PII), manage dataset versions, and maintain clear documentation and lineage for training and evaluation datasets (see the sketch after this list).
Optimize training data flows for efficiency and cost
Design storage layouts and access patterns that reduce training time and cost (e.g., sharding, caching, streaming). Work with ML engineers to make sure the right data arrives at the right place, in the right format.
Build and maintain data infrastructure for LLM workloads
Work with cloud and platform teams to develop robust, production-grade infrastructure: data lakes / warehouses, feature stores, vector stores, and high-throughput data services used by training jobs and offline evaluation.
Collaborate closely with ML Research Engineers and security experts
Translate modeling and security requirements into concrete data tasks: dataset design, splits, sampling strategies, and evaluation data construction for specific security use cases.
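To make the data-quality bullet above concrete, a minimal sketch of a batch quality gate in Python follows. It assumes a pandas DataFrame with a free-text column named text; the duplicate threshold, the email-only PII pattern, and the pass/fail rule are illustrative placeholders, not this team's actual checks.

import re
import pandas as pd

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # rough PII proxy: emails only

def quality_report(df: pd.DataFrame, max_dup_frac: float = 0.01) -> dict:
    dup_frac = df["text"].duplicated().mean()            # exact-duplicate rate
    pii_frac = df["text"].str.contains(EMAIL_RE).mean()  # rows containing an email
    empty_frac = df["text"].str.strip().eq("").mean()    # coverage gap
    return {
        "rows": len(df),
        "duplicate_fraction": float(dup_frac),
        "pii_fraction": float(pii_frac),
        "empty_fraction": float(empty_frac),
        "passed": bool(dup_frac <= max_dup_frac),
    }

# Example: gate a dataset version before it is promoted to training.
df = pd.DataFrame({"text": ["alice@example.com wrote...", "benign log line", "benign log line"]})
print(quality_report(df))

In practice a gate like this would run inside the pipeline and block promotion of a dataset version, with results logged against the version's lineage record.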
Requirements:
What You Bring
3+ years of hands-on experience as a Data Engineer or ML/Data Engineer, ideally in a product or platform team.
Strong programming skills in Python and experience with at least one additional language commonly used for data / backend (e.g., SQL, Scala, or Java).
Solid experience building ETL / ELT pipelines and batch/stream processing using tools such as Spark, Beam, Flink, Kafka, Airflow, Argo, or similar.
Experience working with cloud data platforms (e.g., AWS, GCP, Azure) and modern data storage technologies (object stores, data warehouses, data lakes).
Good understanding of data modeling, schema design, partitioning strategies and performance optimization for large datasets.
Familiarity with ML / LLM workflows: train/validation/test splits, dataset versioning, and the basics of model training and evaluation (you don't need to be the primary model researcher, but you understand what the models need from the data).
Strong software engineering practices: version control, code review, testing, CI/CD, and documentation.
Ability to work independently and in collaboration with ML engineers, researchers and security experts, and to translate high-level requirements into concrete data engineering tasks.
This position is open to all candidates.
 
Similar positions that may interest you
Posted 19 hours ago
Location: Tel Aviv-Yafo
Job Type: Full Time
Join our company's AI research group, a cross-functional team of ML engineers, researchers, and security experts building the next generation of AI-powered security capabilities. Our mission is to leverage large language models to understand code, configuration, and human language at scale, and to turn this understanding into security AI capabilities that will drive our company's future security solutions.
We foster a hands-on, research-driven culture where you'll work with large-scale data, modern ML infrastructure, and a global product footprint that impacts over 100,000 organizations worldwide.
Your Impact & Responsibilities
As a Senior ML Research Engineer, you will be responsible for the end-to-end lifecycle of large language models: from data definition and curation, through training and evaluation, to providing robust models that can be consumed by product and platform teams.
Own training and fine-tuning of LLMs / seq2seq models: Design and execute training pipelines for transformer-based models (encoder-decoder, decoder-only, retrieval-augmented, etc.), and fine-tune open-source LLMs on our company-specific data (security content, logs, incidents, customer interactions).
Apply advanced LLM training techniques such as instruction tuning, preference / contrastive learning, LoRA / PEFT, continual pre-training, and domain adaptation where appropriate (a minimal LoRA sketch follows this list).
Work deeply with data: define data strategies with product, research and domain experts; build and maintain data pipelines for collecting, cleaning, de-duplicating and labeling large-scale text, code and semi-structured data; and design synthetic data generation and augmentation pipelines.
Build robust evaluation and experimentation frameworks: define offline metrics for LLM quality (task-specific accuracy, calibration, hallucination rate, safety, latency and cost); implement automated evaluation suites (benchmarks, regression tests, red-teaming scenarios); and track model performance over time.
Scale training and inference: use distributed training frameworks (e.g. DeepSpeed, FSDP, tensor/pipeline parallelism) to efficiently train models on multi-GPU / multi-node clusters, and optimize inference performance and cost with techniques such as quantization, distillation and caching.
Collaborate closely with security researchers and data engineers to turn domain knowledge and threat intelligence into high-value training and evaluation data, and to expose your models through well-defined interfaces to downstream product and platform teams.
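As one illustration of the PEFT techniques listed above, here is a minimal LoRA fine-tuning setup sketched with the Hugging Face peft library. The base model name, rank, and target modules are assumptions for illustration, not this team's actual configuration.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # placeholder open-source base model
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8,                                  # low-rank update dimension
    lora_alpha=16,                        # scaling of the LoRA update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the small adapter matrices train

# From here, training proceeds with a standard Trainer or custom loop over
# domain-specific data (security content, logs, incidents), as described above.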
Requirements:
What You Bring
5+ years of hands-on work in machine learning / deep learning, including 3+ years focused on NLP / language models.
Proven track record of training and fine-tuning transformer-based models (BERT-style, encoder-decoder, or LLMs), not just consuming hosted APIs.
Strong programming skills in Python and at least one major deep learning framework (PyTorch preferred; TensorFlow also acceptable).
Solid understanding of transformer architectures, attention mechanisms, tokenization, positional encodings, and modern training techniques.
Experience building data pipelines and tools for large-scale text / log / code processing (e.g. Spark, Beam, Dask, or equivalent frameworks).
Practical experience with ML infrastructure, such as experiment tracking (Weights & Biases, MLflow or similar), job orchestration (Airflow, Argo, Kubeflow, SageMaker, etc.), and distributed training on multi-GPU systems.
Strong software engineering practices: version control, code review, testing, CI/CD, and documentation.
Ability to own research and engineering projects end-to-end: from idea, through prototype and controlled experiments, to models ready for integration by product and platform teams.
Good communication skills and the ability to work closely with non-ML stakeholders (security experts, product managers, engineers).
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Data Engineer I - GenAI Foundation Models (Req. 21679)
The Content Intelligence team is at the forefront of Generative AI innovation, driving solutions for travel-related chatbots, text generation and summarization applications, Q&A systems, and free-text search. Beyond this, the team is building a cutting-edge platform that processes millions of images and textual inputs daily, enriching them with ML capabilities. These enriched datasets power downstream applications, helping personalize the customer experience: for example, selecting and displaying the most relevant images and reviews as customers plan and book their next vacation.
Role Description:
As a Senior Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects: ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.
You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.
Key Job Responsibilities and Duties:
Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
Dealing with massive textual sources to train GenAI foundation models (a minimal text-preparation sketch follows this list).
Solving issues with data and data pipelines, prioritizing based on customer impact.
End-to-end ownership of data quality in our core datasets and data pipelines.
Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
Providing tools that improve Data Quality company-wide, specifically for ML scientists.
Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
Acting as an intermediary on data and pipeline problems for both technical and non-technical audiences.
Promoting and driving impactful and innovative engineering solutions.
Advancing technical, behavioral, and interpersonal competence via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation.
Collaborating with multidisciplinary teams: working with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions, and providing technical guidance and mentorship to junior team members.
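As a rough illustration of the large-scale text preparation these bullets describe, here is a minimal PySpark sketch that normalizes, filters, and exact-deduplicates a corpus. The S3 paths and column names are illustrative assumptions, not actual locations.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("genai-text-prep").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw_text/")  # placeholder source
clean = (
    raw.withColumn("text", F.trim(F.col("text")))
       .filter(F.length("text") > 0)                                 # drop empties
       .withColumn("text_hash", F.sha2(F.lower(F.col("text")), 256))
       .dropDuplicates(["text_hash"])                                # exact dedup
)
clean.write.mode("overwrite").parquet("s3://example-bucket/clean_text/")

Real foundation-model pipelines usually add near-duplicate detection (e.g. MinHash) and quality filters on top of this skeleton.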
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 6 years of experience as a Data Engineer or a similar role, with a consistent record of successfully delivering ML/Data solutions.
You have built production data pipelines in the cloud, setting up data-lake and serverless solutions; you have hands-on experience with schema design and data modeling, and with working alongside ML scientists and ML engineers to deliver production-level ML solutions.
You have experience designing systems end-to-end and knowledge of core concepts (load balancing, databases, caching, NoSQL, etc.).
Strong programming skills in languages such as Python and Java.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.
Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems.
Experience with data warehousing and ETL/ELT pipelines.
Experience in data processing for large-scale language models like GPT, BERT, or similar architectures - an advantage.
Proficiency in data manipulation, analysis, and visualization using tools like NumPy, pandas, and matplotlib - an advantage.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a highly skilled Senior Data Engineer with strong architectural expertise to design and evolve our next-generation data platform. You will define the technical vision, build scalable and reliable data systems, and guide the long-term architecture that powers analytics, operational decision-making, and data-driven products across the organization.
This role is both strategic and hands-on. You will evaluate modern data technologies, define engineering best practices, and lead the implementation of robust, high-performance data solutions, including the design, build, and lifecycle management of data pipelines that support batch, streaming, and near-real-time workloads.
🔧 What You'll Do
Architecture & Strategy
Own the architecture of our data platform, ensuring scalability, performance, reliability, and security.
Define standards and best practices for data modeling, transformation, orchestration, governance, and lifecycle management.
Evaluate and integrate modern data technologies and frameworks that align with our long-term platform strategy.
Collaborate with engineering and product leadership to shape the technical roadmap.
Engineering & Delivery
Design, build, and manage scalable, resilient data pipelines for batch, streaming, and event-driven workloads.
Develop clean, high-quality data models and schemas to support analytics, BI, operational systems, and ML workflows.
Implement data quality, lineage, observability, and automated testing frameworks.
Build ingestion patterns for APIs, event streams, files, and third-party data sources (see the stream-ingestion sketch after this list).
Optimize compute, storage, and transformation layers for performance and cost efficiency.
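As one possible shape for the event-stream ingestion pattern mentioned above, here is a minimal micro-batching consumer sketched with the kafka-python client, committing offsets only after a batch lands (at-least-once delivery). The topic, brokers, and sink are illustrative placeholders.

import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events.raw",                          # placeholder topic
    bootstrap_servers=["localhost:9092"],  # placeholder brokers
    group_id="data-platform-ingest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    enable_auto_commit=False,              # commit only after the batch lands
)

def land_batch(records: list) -> None:
    # Placeholder sink: in practice, write to object storage or a warehouse
    # stage (e.g. Snowflake) for downstream transformation.
    print(f"landing {len(records)} records")

batch = []
for msg in consumer:
    batch.append(msg.value)
    if len(batch) >= 500:
        land_batch(batch)
        consumer.commit()  # at-least-once: offsets advance after the write
        batch = []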
Leadership & Collaboration
Serve as a senior technical leader and mentor within the data engineering team.
Lead architecture reviews, design discussions, and cross-team engineering initiatives.
Work closely with analysts, data scientists, software engineers, and product owners to define and deliver data solutions.
Communicate architectural decisions and trade-offs to technical and non-technical stakeholders.
Requirements:
6-10+ years of experience in Data Engineering, with demonstrated architectural ownership.
Expert-level experience with Snowflake (mandatory), including performance optimization, data modeling, security, and ecosystem components.
Expert proficiency in SQL and strong Python skills for pipeline development and automation.
Experience with modern orchestration tools (Airflow, Dagster, Prefect, or equivalent).
Strong understanding of ELT/ETL patterns, distributed processing, and data lifecycle management.
Familiarity with streaming/event technologies (Kafka, Kinesis, Pub/Sub, etc.).
Experience implementing data quality, observability, and lineage solutions.
Solid understanding of cloud infrastructure (AWS, GCP, or Azure).
Strong background in DataOps practices: CI/CD, testing, version control, automation.
Proven leadership in driving architectural direction and mentoring engineering teams.
Nice to Have:
Experience with data governance or metadata management tools.
Hands-on experience with dbt, including modeling, testing, documentation, and advanced features.
Exposure to machine learning pipelines, feature stores, or MLOps.
Experience with Terraform, CloudFormation, or other IaC tools.
Background designing systems for high scale, security, or regulated environments.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Data Engineer II - GenAI (Req. 20718)
The Content Intelligence team is at the forefront of Generative AI innovation, driving solutions for travel-related chatbots, text generation and summarization applications, Q&A systems, and free-text search. Beyond this, the team is building a cutting-edge platform that processes millions of images and textual inputs daily, enriching them with ML capabilities. These enriched datasets power downstream applications, helping personalize the customer experience: for example, selecting and displaying the most relevant images and reviews as customers plan and book their next vacation.
Role Description:
As a Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects: ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.
You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.
Key Job Responsibilities and Duties:
Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
Dealing with massive textual sources to train GenAI foundation models.
Solving issues with data and data pipelines, prioritizing based on customer impact.
End-to-end ownership of data quality in our core datasets and data pipelines.
Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
Providing tools that improve Data Quality company-wide, specifically for ML scientists.
Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
Acting as an intermediary on data and pipeline problems for both technical and non-technical audiences.
Promoting and driving impactful and innovative engineering solutions.
Advancing technical, behavioral, and interpersonal competence via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation.
Collaborating with multidisciplinary teams: working with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions, and providing technical guidance and mentorship to junior team members.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 3 years of experience as a Data Engineer or a similar role, with a consistent record of successfully delivering ML/Data solutions.
You have built production data pipelines in the cloud, setting up data-lake and serverless solutions; you have hands-on experience with schema design and data modeling, and with working alongside ML scientists and ML engineers to deliver production-level ML solutions.
You have experience designing systems end-to-end and knowledge of core concepts (load balancing, databases, caching, NoSQL, etc.).
Strong programming skills in languages such as Python and Java.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.
Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems.
Experience with data warehousing and ETL/ELT pipelines.
Experience in data processing for large-scale language models like GPT, BERT, or similar architectures - an advantage.
Proficiency in data manipulation, analysis, and visualization using tools like NumPy, pandas, and matplotlib - an advantage.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a talented Data Engineer to join our BI & Data team in Tel Aviv. You will play a pivotal role in building and optimizing the data infrastructure that powers our business. In this mid-level position, your primary focus will be on developing a robust single source of truth (SSOT) for revenue data, along with scalable data pipelines and reliable orchestration processes. If you are passionate about crafting efficient data solutions and ensuring data accuracy for decision-making, this role is for you.



Responsibilities:

Pipeline Development & Integration

- Design, build, and maintain robust data pipelines that aggregate data from various core systems into our data warehouse (BigQuery/Athena), with a special focus on our revenue Single Source of Truth (SSOT); a minimal idempotent-load sketch follows this group of bullets.

- Integrate new data sources (e.g. advertising platforms, content syndication feeds, financial systems) into the ETL/ELT workflow, ensuring seamless data flow and consolidation.

- Implement automated solutions for ingesting third-party data (leveraging tools like Rivery and scripts) to streamline data onboarding and reduce manual effort.

- Leverage AI-assisted development tools (e.g., Cursor, GitHub Copilot) to accelerate pipeline development
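To make the SSOT loading above concrete, here is a minimal sketch of an idempotent daily load using the google-cloud-bigquery client and a MERGE statement, so reruns do not double-count revenue. The dataset, table, and key columns are illustrative assumptions, not the actual schema.

from google.cloud import bigquery

client = bigquery.Client()

MERGE_SQL = """
MERGE `analytics.revenue_ssot` AS t
USING `staging.revenue_daily` AS s
ON t.order_id = s.order_id
WHEN MATCHED THEN
  UPDATE SET t.revenue_usd = s.revenue_usd, t.updated_at = CURRENT_TIMESTAMP()
WHEN NOT MATCHED THEN
  INSERT (order_id, revenue_usd, updated_at)
  VALUES (s.order_id, s.revenue_usd, CURRENT_TIMESTAMP())
"""

job = client.query(MERGE_SQL)
job.result()  # wait for completion; the statement is safe to re-run
print(f"rows affected: {job.num_dml_affected_rows}")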

Optimization & Reliability

- Optimize ETL processes and SQL queries for performance and cost-efficiency - for example, refactoring and cleaning pipeline code to reduce runtime and cloud processing costs.

- Develop modular, reusable code frameworks and templates for common data tasks (e.g., ingestion patterns, error handling) to accelerate future development and minimize technical debt.

- Orchestrate and schedule data workflows to run reliably (e.g. consolidating daily jobs, setting up dependent task flows) so that critical datasets are refreshed on time.

- Monitor pipeline execution and data quality on a daily basis, quickly troubleshooting issues or data discrepancies to maintain high uptime and trust in the data.

Collaboration & Documentation

- Work closely with analysts and business stakeholders to understand data requirements and ensure the infrastructure meets evolving analytics needs (such as incorporating new revenue streams or content cost metrics into the SSOT).

- Document the data architecture, pipeline processes, and data schemas in a clear way so that the data ecosystem is well-understood across the team.

- Continuously research and recommend improvements or new technologies (e.g. leveraging AI tools for data mapping or anomaly detection) to enhance our data platform's capabilities and reliability, and to ensure our data ecosystem remains a competitive advantage.
Requirements:
4+ years of experience as a Data Engineer (or in a similar data infrastructure role), building and managing data pipelines at scale, with hands-on experience in workflow orchestration and scheduling (cron, Airflow, or built-in scheduler tools).
Strong SQL skills and experience working with large-scale databases or data warehouses (ideally Google BigQuery or AWS Athena).
Solid understanding of data warehousing concepts, data modeling, and maintaining a single source of truth for enterprise data.
Demonstrated experience in data auditing and integrity testing, with the ability to build 'trust dashboards' or alerts that prove data reliability to executive stakeholders.
Proficiency in a programming/scripting language (e.g. Python) for automating data tasks and building custom integrations.
This position is open to all candidates.
 
18/01/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Data Engineer - ML & GenAI Platform
Job Description
As a data engineer in our data science group, you'll build, use, and enhance our ML and GenAI platforms, powering data products and LLM-based services across the organization. You'll design and operate data and MLOps infrastructure for batch inference, knowledge-base workflows, evaluation metrics, and fine-tuning pipelines. Collaborating with data scientists, product managers, and engineers, you'll bring AI features into production at scale.
Extend and implement new capabilities in ML and GenAI platforms
Design and maintain scalable, reliable data architectures and pipelines for model training, LLM fine-tuning, inference, and evaluation
Develop and maintain Airflow DAGs for metrics calculation, batch inference, KB workflows, and other data science projects (a minimal DAG sketch follows this list)
Collaborate with data scientists and engineers to integrate models and GenAI features into production systems
Ensure high-quality code through best practices in testing, code reviews, documentation, and CI/CD for data and ML workloads
Work with tools and platforms like Airflow, Spark, Docker, Python, serverless runtimes, Kafka, and multiple cloud environments to deliver scalable, cost-effective GenAI solutions.
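As a rough illustration of such a DAG, here is a minimal daily batch-inference pipeline using Airflow's TaskFlow API. The task bodies, names, and schedule are placeholders, not the team's actual workflow.

from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2026, 1, 1), catchup=False)
def batch_inference():
    @task
    def extract():
        return ["doc-1", "doc-2"]  # placeholder: pull pending items

    @task
    def infer(items):
        # Placeholder: run the model / LLM service over the batch.
        return len(items)

    @task
    def publish_metrics(n):
        print(f"scored {n} items")  # placeholder: emit evaluation metrics

    publish_metrics(infer(extract()))

batch_inference()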
Requirements:
We're looking for a technically proficient engineer with expertise in data, a passion for ML and GenAI, and the ability to identify and solve complex problems.
3+ years of experience in data engineering or building data-intensive systems
Proficient in Python and SQL with hands-on experience building data pipelines and production-grade services
Practical experience with Airflow, Spark, and Docker
Knowledge of or strong interest in data science and machine learning
Experience or strong interest in GenAI platforms and LLM-based services
Excellent communication and collaboration skills to work with cross-team stakeholders
Experience in cloud environments like AWS and/or GCP - an advantage.
This position is open to all candidates.
 
15/01/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
It starts with you - a senior ML engineer responsible for building, training, evaluating, and operating machine learning systems in production. The role focuses on data pipelines, model training, experimentation, evaluation, and scalable deployment.
If you want to grow your skills building mission-critical AI products, this role is for you.
Responsibilities:
Design, train, and evaluate ML models for production use.
Build and maintain data pipelines for training, validation, and inference.
Own experimentation workflows: feature engineering, training runs, and comparison.
Implement model evals, monitoring, and drift detection (a minimal drift-check sketch follows this list).
Package and deploy models to production systems.
Optimize training and inference performance, cost, and reliability.
Collaborate with data, platform, and product teams.
Mentor engineers and promote ML engineering best practices.
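One common way to implement the drift-detection item above is a two-sample Kolmogorov-Smirnov test comparing a feature's training distribution to live traffic, sketched here with scipy. The synthetic data and alert threshold are illustrative only.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, size=10_000)  # reference window
live_feature = rng.normal(0.3, 1.0, size=2_000)    # serving window (shifted)

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    # In production this would page on-call or open a ticket, not print.
    print(f"drift suspected: KS={stat:.3f}, p={p_value:.2e}")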
Requirements:
4+ years software engineering experience with 2+ years applied ML in production.
Strong foundations in machine learning, statistics, and data analysis.
Hands-on experience with model training frameworks (e.g., PyTorch, TensorFlow, JAX).
Experience with distributed training and large-scale datasets.
Experience building data pipelines, feature engineering, and dataset versioning.
Proven experience designing and operating ML evals, experiment tracking, and monitoring.
Familiarity with feature stores, model registries, and ML lifecycle management.
Experience with model serving patterns and production deployment.
Proficiency in Python and strong system design skills.
Experience deploying ML systems on Kubernetes or similar platforms.
Familiarity with GPU acceleration and performance optimization.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Machine Learning Engineer II - GenAI Applications (Req. 26947)
About the team:
This opening is for the GenAI Applications Team within the Data & AI Marketplace department.
The GenAI Applications team is responsible for designing and delivering agentic, ML-powered solutions for some of our most impactful products, including booking search experiences, trip planning, and trip helpfulness. The team builds AI-driven applications and conversational agents, such as chatbots and intelligent assistants, that significantly enhance the end-to-end customer experience.
Role Description:
As a Machine Learning Engineer, you will work closely with experienced engineers and ML scientists to build scalable, production-grade GenAI applications. Your work will focus on designing, training, and deploying ML systems leveraging LLMs, recommendation systems, and agent-based architectures, using state-of-the-art technologies. These solutions will directly power customer-facing experiences and play a key role in shaping the future of AI-driven travel products.
Key Job Responsibilities and Duties:
Deploying machine learning models: Design, develop, and deploy, in collaboration with scientists, scalable machine learning models and algorithms that provide content-related insights and generative AI applications, ensuring scalability, efficiency, and accuracy.
Evaluating possible architecture solutions by taking into account cost, business requirements, emerging technologies, and technology requirements, like latency, throughput, and scale.
Generative AI Development: Contribute to the development of generative models such as GPT (Generative Pre-trained Transformer) variants or similar architectures for creative content generation, Q&A, chatbots, translation or other innovative applications.
Deployment and integration: Work closely with software engineers to integrate machine learning models into production systems. Ensure seamless deployment and efficient model inference in real-time environments (a minimal serving sketch follows this list). Collaborate with DevOps to implement effective monitoring and maintenance strategies.
Owning a service end to end: actively monitoring application health and performance, setting and tracking relevant metrics, and acting when they are violated.
Maintain clean, scalable code, ensuring reproducibility and easy integration of models into production environments, including CI/CD.
Collaborate with multidisciplinary teams: work with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions.
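As one common deployment pattern for the integration work described above, here is a minimal FastAPI sketch exposing a model behind a real-time endpoint. The scoring logic is a stand-in placeholder, not an actual model.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    text: str

@app.post("/predict")
def predict(q: Query) -> dict:
    # Placeholder: in practice this would call the deployed model or LLM chain.
    score = float(len(q.text) % 10) / 10.0
    return {"score": score}

# Run with: uvicorn serve:app --port 8080 (assuming this file is serve.py).
# Health and latency metrics would hang off this service for the monitoring
# responsibilities described above.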
Requirements:
We are looking for driven MLEs who enjoy solving problems, who initiate solutions and discussions, and who believe that any challenge can be scaled with the right mindset and tools.
We have found that people who match the following requirements are the ones who fit us best:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 4 years of experience as a Machine Learning Engineer or a similar role, with a consistent record of successfully delivering ML solutions.
Strong programming skills in languages such as Python and Java.
Experience with cloud frameworks like AWS SageMaker for training, evaluating, and serving models using TensorFlow, PyTorch, or scikit-learn.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.
Experience with data at scale using MySQL, PySpark, Snowflake, and similar frameworks.
Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems.
Deep understanding of machine learning algorithms, statistical models, and data structures.
Experience in deploying large-scale language models like GPT, BERT, or similar architectures - an advantage.
Proficiency in data manipulation, analysis, and visualization using tools like NumPy, pandas, and matplotlib - an advantage.
This position is open to all candidates.
 
05/01/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
As a Machine Learning Engineer, you'll work on cutting-edge code-focused LLMs and AI agent systems that power our company's next-generation developer platform. You'll be at the center of research, model training, and productionization of intelligent systems that understand software deeply, collaborate with developers, and help automate engineering workflows end-to-end. Your work will immediately impact millions of engineers worldwide.
Responsibilities:
Push LLM Innovation: Research, design, and fine-tune domain-specific LLMs for code generation, refactoring, debugging, and multi-turn reasoning.
Agent-Oriented Development: Build multi-agent coding systems that integrate retrieval-augmented generation (RAG), code execution, testing, and tool use to create autonomous, context-aware coding workflows.
Production-Grade AI: Own the training-to-inference pipeline for large code models; optimize inference with quantization, distillation, and caching techniques (a minimal quantized-loading sketch follows this list).
Rapid Experimentation: Prototype and validate ideas quickly; leverage reinforcement learning, human feedback, and synthetic data generation to push accuracy and reasoning.
Cross-Functional Collaboration: Partner with product, engineering, and design teams to ship AI-powered features that help developers focus on high-impact work.
Scale the Platform: Contribute to distributed training, scalable serving systems, and GPU/TPU-efficient architectures for ultra-low-latency developer tools.
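As an illustration of the quantization technique named above, here is a minimal sketch that loads a code LLM in 4-bit precision with transformers and bitsandbytes. The model name and generation settings are illustrative assumptions.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # normal-float 4-bit weights
    bnb_4bit_compute_dtype=torch.bfloat16,  # higher-precision matmuls
)

name = "bigcode/starcoder2-3b"  # placeholder code model
model = AutoModelForCausalLM.from_pretrained(
    name, quantization_config=bnb_cfg, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(name)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))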
Requirements:
2+ years of hands-on experience designing, training, and deploying machine-learning models
M.Sc. or higher in Computer Science / Mathematics / Statistics or equivalent from a university, or B.Sc. with strong hands-on ML experience
Practical experience with Natural Language Processing (NLP) and LLMs
Experience with data acquisition, data cleaning, and data pipelines
A passion for building products and helping people, both customers and colleagues
An all-around team player and a fast, self-directed learner
Nice to have:
3+ years of development experience with a passion for excellence
Experience building AI coding assistants, code reasoning models, or dev-focused LLM agents.
Familiarity with RAG, function-calling, and tool-using LLMs.
Knowledge of model optimizations (quantization, distillation, LoRA, pruning).
Startup or product-driven ML experience, especially in high-scale, latency-sensitive environments.
Contributions to open-source AI or developer tools.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Data Engineer, Product Analytics
As a Data Engineer, you will shape the future of the people-facing and business-facing products we build across our entire family of applications. Your technical skills and analytical mindset will be put to use designing and building some of the world's most extensive data sets, helping to craft experiences for billions of people and hundreds of millions of businesses worldwide.
In this role, you will collaborate with software engineering, data science, and product management teams to design and build scalable data solutions that optimize growth, strategy, and user experience for our 3 billion-plus users, as well as our internal employee community.
You will be at the forefront of identifying and solving some of the most interesting data challenges at a scale few companies can match. By joining us, you will become part of a world-class data engineering community dedicated to skill development and career growth in data engineering and beyond.
Data Engineering: You will guide teams by building optimal data artifacts (including datasets and visualizations) to address key questions. You will refine our systems, design logging solutions, and create scalable data models. While ensuring data security and quality, and with a focus on efficiency, you will suggest architecture and development approaches and data management standards to address complex analytical problems.
Product leadership: You will use data to shape product development, identify new opportunities, and tackle upcoming challenges. You'll ensure our products add value for users and businesses, by prioritizing projects, and driving innovative solutions to respond to challenges or opportunities.
Communication and influence: You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations. You will build credibility through structure and clarity, and be a trusted strategic partner.
Data Engineer, Product Analytics Responsibilities
Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems
Create and contribute to frameworks that improve the efficacy of logging data, while working with data infrastructure to triage and resolve issues
Collaborate with engineers, product managers, and data scientists to understand data needs, representing key data insights in a meaningful way
Define and manage Service Level Agreements for all data sets in allocated areas of ownership (a minimal freshness-check sketch follows this list)
Determine and implement the security model based on privacy requirements, confirm safeguards are followed, address data quality issues, and evolve governance processes within allocated areas of ownership
Design, build, and launch collections of sophisticated data models and visualizations that support multiple use cases across different products or domains
Solve our most challenging data integration problems, utilizing optimal Extract, Transform, Load (ETL) patterns, frameworks, query techniques, sourcing from structured and unstructured data sources
Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts
Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts.
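To make the SLA item above concrete, here is a minimal freshness-check sketch comparing a dataset's last landing time against an agreed deadline. The metadata lookup and the six-hour window are illustrative placeholders.

from datetime import datetime, timedelta, timezone

SLA = timedelta(hours=6)  # placeholder: data must land within 6h of day start

def last_landing_time(dataset: str) -> datetime:
    # Placeholder: in practice, read this from pipeline or warehouse metadata.
    return datetime(2026, 1, 18, 4, 30, tzinfo=timezone.utc)

def check_sla(dataset: str, day_start: datetime) -> bool:
    landed = last_landing_time(dataset)
    ok = landed - day_start <= SLA
    if not ok:
        print(f"SLA breach on {dataset}: landed {landed.isoformat()}")
    return ok

check_sla("growth.daily_active_users", datetime(2026, 1, 18, tzinfo=timezone.utc))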
Requirements:
Minimum Qualifications
Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent
4+ years of experience where the primary responsibility involves working with data. This could include roles such as data analyst, data scientist, data engineer, or similar positions
4+ years of experience (or a minimum of 2+ years with a Ph.D.) with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala, etc.)
Preferred Qualifications
Master's or Ph.D. degree in a STEM field.
This position is open to all candidates.
 