25/02/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Senior Data Scientist who will join our growing Detection group. You'll be a significant part of the development of our state-of-the-art anomaly detection models to find and protect against nation-sponsored cyber-attacks. In this role, you will work with product, engineering, and cyber teams to train, evaluate, and deploy anomaly detection models on a massive scale.
Responsibilities
Analyze, transform and clean large, complex data sets from various sources to ensure data quality and integrity for analysis.
Conduct hands-on research and development of state-of-the-art models and algorithms.
Extract relevant features from structured and unstructured data sources, design and engineer new features and feature selection methodologies to enhance model performance.
Build, train, and optimize machine learning models using state-of-the-art techniques, and evaluate model performance using appropriate metrics.
Lead research projects end-to-end from problem formulation, ideation, and experimental design to prototyping and transition into production.
Explore new methodologies and develop creative approaches to solve complex challenges.
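The train-and-evaluate loop these responsibilities describe can be illustrated with a deliberately simple baseline; the z-score detector, synthetic traffic data, and threshold below are assumptions for the sketch, not the team's actual models:

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Flag points whose z-score exceeds the threshold (a simple baseline detector)."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

def precision_recall(predicted, actual):
    """Evaluate the detector with precision and recall (the 'appropriate metrics' step)."""
    predicted, actual = set(predicted), set(actual)
    tp = len(predicted & actual)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall

# Synthetic traffic volumes with one injected spike at index 5.
traffic = [100, 102, 98, 101, 99, 500, 100, 97, 103, 101]
flagged = zscore_anomalies(traffic, threshold=2.0)
print(flagged)                         # [5]
print(precision_recall(flagged, [5]))  # (1.0, 1.0)
```

A production system would replace the z-score with learned models, but the evaluate-against-labels loop stays the same.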
Requirements:
5+ years as a Data Scientist with proven production-level impact.
Master's degree in Computer Science, Mathematics, or Engineering with a focus on machine learning.
Proven track record designing and training anomaly-based models for large datasets.
Strong programming skills in Python and familiarity with modern ML tooling and frameworks.
Experience conducting applied research and working with transformers, open-source LLMs, or other advanced deep learning architectures.
Demonstrated ability to work effectively in cross-functional teams, collaborate with colleagues, and contribute to a positive work environment.
Excellent problem-solving abilities and a strong experimental mindset.
Effective collaborator with strong communication skills.
Curiosity and a passion for learning new technologies, methods, and domains.
Background in the cyber security domain - an advantage.
This position is open to all candidates.
 
Job ID: 8561201
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Data Engineer II - GenAI
20718
Leadership/Team Quote:
This opening is for the Content Intelligence team within the Marketplace AI department.
The Content Intelligence team is at the forefront of Generative AI innovation, driving solutions for travel-related chatbots, text generation and summarization applications, Q&A systems, and free-text search. Beyond this, the team is building a cutting-edge platform that processes millions of images and textual inputs daily, enriching them with ML capabilities. These enriched datasets power downstream applications, helping personalize the customer experience: for example, selecting and displaying the most relevant images and reviews as customers plan and book their next vacation.
Role Description:
As a Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects: ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.
You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.
Key Job Responsibilities and Duties:
Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
Dealing with massive textual sources to train GenAI foundation models.
Solving issues with data and data pipelines, prioritizing based on customer impact.
End-to-end ownership of data quality in our core datasets and data pipelines.
Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
Providing tools that improve Data Quality company-wide, specifically for ML scientists.
Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
Acting as an intermediary on data problems for both technical and non-technical audiences.
Promoting and driving impactful and innovative engineering solutions.
Advancing technical, behavioral, and interpersonal competence via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation.
Collaborating with multidisciplinary teams: working with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions. Providing technical guidance and mentorship to junior team members.
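The end-to-end data-quality ownership described above often starts with a transform step that gates bad records out of training data. This is a minimal, hypothetical sketch; the field names and threshold are invented for illustration, not the team's actual schema:

```python
def clean_text_records(records, min_length=10):
    """Transform step: normalize and filter raw text records destined for model training.

    Strips whitespace and drops empty or too-short texts: a stand-in for the
    kind of quality gate a GenAI training pipeline needs.
    """
    cleaned, dropped = [], 0
    for rec in records:
        text = (rec.get("text") or "").strip()
        if len(text) < min_length:
            dropped += 1
            continue
        cleaned.append({**rec, "text": text})
    return cleaned, dropped

raw = [
    {"id": 1, "text": "  Great hotel near the beach, friendly staff.  "},
    {"id": 2, "text": "ok"},
    {"id": 3, "text": None},
]
rows, dropped = clean_text_records(raw)
print(len(rows), dropped)  # 1 2
```

Tracking the `dropped` count per run is what makes the quality gate observable, as the monitoring responsibility above implies.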
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 3 years of experience as a Data Engineer or a similar role, with a consistent record of successfully delivering ML/Data solutions.
You have built production data pipelines in the cloud, setting up data-lake and serverless solutions; you have hands-on experience with schema design and data modeling, and with working alongside ML scientists and ML engineers to provide production-level ML solutions.
You have experience designing systems end-to-end and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.).
Strong programming skills in languages such as Python and Java.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.
Demonstrable experience with MySQL, Cassandra, DynamoDB, or similar relational/NoSQL database systems.
Experience with Data Warehousing and ETL/ELT pipelines.
Experience in data processing for large-scale language models like GPT, BERT, or similar architectures - an advantage.
This position is open to all candidates.
 
Job ID: 8560110
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Data Engineer I - GenAI Foundation Models
21679
Leadership/Team Quote:
This opening is for the Content Intelligence team within the Marketplace AI department.
The Content Intelligence team is at the forefront of Generative AI innovation, driving solutions for travel-related chatbots, text generation and summarization applications, Q&A systems, and free-text search. Beyond this, the team is building a cutting-edge platform that processes millions of images and textual inputs daily, enriching them with ML capabilities. These enriched datasets power downstream applications, helping personalize the customer experience: for example, selecting and displaying the most relevant images and reviews as customers plan and book their next vacation.
Role Description:
As a Senior Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects: ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.
You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.
Key Job Responsibilities and Duties:
Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
Dealing with massive textual sources to train GenAI foundation models.
Solving issues with data and data pipelines, prioritizing based on customer impact.
End-to-end ownership of data quality in our core datasets and data pipelines.
Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
Providing tools that improve Data Quality company-wide, specifically for ML scientists.
Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
Acting as an intermediary on data problems for both technical and non-technical audiences.
Promoting and driving impactful and innovative engineering solutions.
Advancing technical, behavioral, and interpersonal competence via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation.
Collaborating with multidisciplinary teams: working with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions. Providing technical guidance and mentorship to junior team members.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 6 years of experience as a Data Engineer or a similar role, with a consistent record of successfully delivering ML/Data solutions.
You have built production data pipelines in the cloud, setting up data-lake and serverless solutions; you have hands-on experience with schema design and data modeling, and with working alongside ML scientists and ML engineers to provide production-level ML solutions.
You have experience designing systems end-to-end and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.).
Strong programming skills in languages such as Python and Java.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.
Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems.
Experience with Data Warehousing and ETL/ELT pipelines.
This position is open to all candidates.
 
Job ID: 8560108
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Machine Learning Scientist I - GenAI Applications
26992
About the team:
This opening is for the GenAI Applications Team within the Data & AI Marketplace department.
The GenAI Applications team is responsible for designing and delivering agentic, ML-powered solutions for some of our most impactful products, including booking search experiences, trip planning, and trip helpfulness. The team builds AI-driven applications and conversational agents, such as chatbots and intelligent assistants, that significantly enhance the end-to-end customer experience.
Role Description:
As a Senior Machine Learning Scientist, you will work closely with engineers to design, develop, and evaluate machine learning solutions for scalable, customer-facing GenAI applications. Your work will focus on researching, training, fine-tuning, and rigorously evaluating models leveraging LLMs, recommendation systems, and agent-based architectures, using state-of-the-art techniques. You will drive experimentation, define success metrics, and translate insights into impactful AI solutions that shape the future of intelligent travel products.
Key Job Responsibilities and Duties:
Explore and apply state-of-the-art techniques in multimodal machine learning.
Train innovative ML models (NLP, CV, LLM fine-tuning) and build algorithms and engineering approaches to drive business impact.
Write clean, scalable code and implement reusable frameworks.
Conduct data analysis with detailed metrics to evaluate model performance, label quality, and feature exploration.
Work closely with machine learning engineers to ensure the model's latency/throughput meets product requirements, and ensure deployment of your model to production.
Collaborate with multidisciplinary teams: work with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions.
Requirements:
Advanced knowledge and experience in Computer Vision and Natural Language Processing, and in the engineering aspects of developing ML and Generative AI models at scale.
Experience designing and executing end-to-end research and development plans and generating impact through large-scale machine learning model development, preferably evidenced by peer-reviewed publications, patents, open-source code, or the like.
Relevant work or academic experience (MSc + 6 years of working experience, or PhD + 4 years of working experience) involving the application of Machine Learning to business problems.
Master's degree, PhD, or equivalent experience in a quantitative field (e.g. Computer Science, Engineering, Mathematics, Artificial Intelligence, Physics, etc.).
Experience across multiple machine learning facets: working with large data sets, model development, statistics, experimentation, data visualization, optimization, software development.
Experience collaborating cross-functionally in the development of machine learning products (e.g. with developers, UX specialists, Product Managers, etc.).
Strong working knowledge of Python, Java, Kafka, Hadoop, SQL, and Spark or similar technologies. Working experience with version control systems.
Excellent English communication skills, both written and verbal.
Successfully driving technical, business, and people-related initiatives that improve productivity, performance, and quality while communicating with stakeholders at all levels.
Leading by example, gaining respect through actions rather than title. Developing your team and motivating them to achieve their goals. Providing feedback in a timely manner and managing your key team performance indicators.
This position is open to all candidates.
 
Job ID: 8560103
24/02/2026
Location: Herzliya
Job Type: Full Time
Required Senior Data Engineer
Join our core platform engineering team, developing our AI-powered automotive data management platform.
We are developing the next generation data-driven products for the Automotive industry, focusing on cybersecurity (XDR) and vehicle quality. Our products monitor and secure millions of vehicles worldwide and help automakers leverage connected vehicle data to deliver cyber resilience, safety, customer satisfaction and increase brand loyalty.
Our Data Engineering & Data Science Group leads the development of our Iceberg-based data platform, including data lake, query engine, and ML-Ops tools, serving as a solid AI-ready foundation for all our products.
At the core of our Engineering Team, you will build and operate scalable, production-grade customer-facing data and ML platform components, focusing on reliability and performance.
Technological background and focus: Iceberg, Trino, Prefect, GitHub Actions, Kubernetes, JupyterHub, MLflow, dbt
This role is full-time and based in Herzliya, Israel.
Responsibilities
Design, build, and maintain scalable data pipelines to ingest and transform batch data on our data lake, enabling analytics and ML with strong data quality, governance, observability, and CI/CD.
Build and expand our foundational data infrastructure, including our data lake, analytics engine, and batch processing frameworks.
Create robust infrastructure to enable automated pipelines that will ingest and process data into our analytical platforms, leveraging open-source, cloud-agnostic frameworks and toolsets.
Develop and maintain our data lake layouts and architectures for efficient data access and advanced analytics.
Develop and manage orchestration tools, governance tools, data discovery tools, and more.
Work with other team members of the engineering group, including data scientists, data architects, and data analysts, to provide solutions using a use case-based approach that drives the construction of technical data flows.
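The ingest, validate, and load pattern in these responsibilities can be sketched without the actual stack (Prefect, Iceberg, Trino); every function, record, and field below is illustrative only:

```python
# Minimal batch-flow sketch: extract -> validate -> load, returning run metrics.
# A real pipeline would be a Prefect flow writing to Iceberg tables; here each
# step is a plain function so the orchestration pattern stays visible.

def extract():
    """Pull a batch of raw vehicle telemetry (hypothetical records)."""
    return [{"vin": "A1", "speed": 62}, {"vin": "B2", "speed": -5}]

def validate(rows):
    """Quality gate: reject physically impossible readings."""
    good = [r for r in rows if r["speed"] >= 0]
    return good, len(rows) - len(good)

def load(rows, sink):
    """Write validated rows to the destination table (a list stands in here)."""
    sink.extend(rows)
    return len(rows)

def run_flow(sink):
    rows = extract()
    good, rejected = validate(rows)
    written = load(good, sink)
    # Run metrics like these feed the observability/governance requirements above.
    return {"extracted": len(rows), "rejected": rejected, "written": written}

table = []
print(run_flow(table))  # {'extracted': 2, 'rejected': 1, 'written': 1}
```

Returning per-run counts from the flow is what makes data quality observable downstream, whatever orchestrator runs the steps.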
Requirements:
BSc/BA in Computer Science, Engineering or a related field
At least 6 years of experience with designing and building data pipelines, analytical tools and data lakes
Experience with the data engineering tech stack: ETL & orchestration tools (e.g. Airflow, Argo, Prefect), and distributed data processing tools (e.g. Spark, Kafka, Presto)
Experience with Python is a must
Experience working in a containerized environment (e.g. k8s)
Experience working with open-source products
End-to-end ownership mindset with a proactive, production-first approach
Development experience using a general-purpose programming language (Java, Scala, Kotlin, Go, etc.) - an advantage.
This position is open to all candidates.
 
Job ID: 8560075
24/02/2026
Location: Herzliya
Job Type: Full Time
Required Data Engineer
Description
Join our core platform engineering team, developing our AI-powered automotive data management platform.
We are developing the next generation data-driven products for the Automotive industry, focusing on cybersecurity (XDR) and vehicle quality. Our products monitor and secure millions of vehicles worldwide and help automakers leverage connected vehicle data to deliver cyber resilience, safety, customer satisfaction and increase brand loyalty.
Our Data Engineering & Data Science Group leads the development of our Iceberg-based data platform, including data lake, query engine, and ML-Ops tools, serving as a solid AI-ready foundation for all our products.
At the core of our Engineering Team, you will build and operate scalable, production-grade customer-facing data and ML platform components, focusing on reliability and performance.
Technological background and focus: Iceberg, Trino, Prefect, GitHub Actions, Kubernetes, JupyterHub, MLflow, dbt
This role is full-time and based in Herzliya, Israel.
Responsibilities
Design, build, and maintain scalable data pipelines to ingest and transform batch data on our data lake, enabling analytics and ML with strong data quality, governance, observability, and CI/CD.
Build and expand our foundational data infrastructure, including our data lake, analytics engine, and batch processing frameworks.
Create robust infrastructure to enable automated pipelines that will ingest and process data into our analytical platforms, leveraging open-source, cloud-agnostic frameworks and toolsets.
Develop and maintain our data lake layouts and architectures for efficient data access and advanced analytics.
Develop and manage orchestration tools, governance tools, data discovery tools, and more.
Work with other team members of the engineering group, including data architects, data analysts, and data scientists, to provide solutions using a use case-based approach that drives the construction of technical data flows.
End-to-end ownership mindset with a proactive, production-first approach.
Requirements:
BSc/BA in Computer Science, Engineering or a related field
At least 4 years of experience with designing and building data pipelines, analytical tools and data lakes
Experience with the data engineering tech stack: ETL & orchestration tools (e.g. Airflow, Argo, Prefect), and distributed data processing tools (e.g. Spark, Kafka, Presto)
Experience with Python is a must
Experience working with open-source products - Advantage
Experience working in a containerized environment (e.g. k8s) - Advantage
Development experience using a general-purpose programming language (Java, Scala, Kotlin, Go, etc.) - Advantage.
This position is open to all candidates.
 
Job ID: 8560068
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Staff Algo Data Engineer
Realize your potential by joining the leading performance-driven advertising company!
As a Staff Algo Data Engineer in the Infra group, you'll play a vital role in developing, enhancing, and maintaining highly scalable Machine-Learning infrastructures and tools.
About Algo platform:
The objective of the algo platform group is to own the existing algo platform (including health, stability, productivity and enablement), to facilitate and be involved in new platform experimentation within the algo craft and lead the platformization of the parts which should graduate into production scale. This includes support of ongoing ML projects while ensuring smooth operations and infrastructure reliability, owning a full set of capabilities, design and planning, implementation and production care.
The group has deep ties with both the algo craft as well as the infra group. The group reports to the infra department and has a dotted line reporting to the algo craft leadership.
The group serves as the professional authority when it comes to ML engineering and ML ops, serves as a focal point in a multidisciplinary team of algorithm researchers, product managers, and engineers and works with the most senior talent within the algo craft in order to achieve ML excellence.
How youll make an impact:
As a Staff Algo Data Engineer, you'll bring value by:
Develop, enhance and maintain highly scalable Machine-Learning infrastructures and tools, including CI/CD, monitoring, alerting, and more
Have end-to-end ownership: Design, develop, deploy, measure and maintain our machine learning platform, ensuring high availability, high scalability and efficient resource utilization
Identify and evaluate new technologies to improve performance, maintainability, and reliability of our machine learning systems
Work in tandem with the engineering-focused and algorithm-focused teams in order to improve our platform and optimize performance
Optimize machine learning systems to scale and utilize modern compute environments (e.g. distributed clusters, CPU and GPU) and continuously seek potential optimization opportunities.
Build and maintain tools for automation, deployment, monitoring, and operations.
Troubleshoot issues in our development, production and test environments
Influence directly on the way billions of people discover the internet
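The monitoring-and-alerting responsibility above can be sketched with a latency-checking decorator around a model-serving call; the SLO value, `score` function, and alert list are hypothetical, not the company's actual tooling:

```python
import time

def monitored(slo_ms, alerts):
    """Wrap a model-serving call with latency monitoring; record an alert
    when the service-level objective (SLO) is exceeded. Illustrative only."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed_ms = (time.perf_counter() - start) * 1000
            if elapsed_ms > slo_ms:
                alerts.append(f"{fn.__name__} exceeded {slo_ms}ms SLO")
            return result
        return wrapper
    return decorator

alerts = []

@monitored(slo_ms=50, alerts=alerts)
def score(features):
    time.sleep(0.08)          # simulate a slow model call
    return sum(features) / len(features)

print(score([1, 2, 3]))       # 2.0
print(alerts)                 # one SLO-breach alert recorded
```

In production the alert sink would be a real alerting system rather than a list, but the wrap-measure-compare pattern is the same.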
Our tech stack:
Java, Python, TensorFlow, Spark, Kafka, Cassandra, HDFS, vespa.ai, ElasticSearch, AirFlow, BigQuery, Google Cloud Platform, Kubernetes, Docker, git and Jenkins.
Requirements:
Experience developing large scale systems. Experience with filesystems, server architectures, distributed systems, SQL and No-SQL. Experience with Spark and Airflow / other orchestration platforms is a big plus.
Highly skilled in software engineering methods. 5+ years experience.
Passion for ML engineering and for creating and improving platforms
Experience with designing and supporting ML pipelines and models in production environment
Excellent coding skills - in Java & Python
Experience with TensorFlow - a big plus
Possess strong problem solving and critical thinking skills
BSc in Computer Science or related field.
Proven ability to work effectively and independently across multiple teams and beyond organizational boundaries
Deep understanding of strong Computer Science fundamentals: object-oriented design, data structures, systems and applications programming, and multithreaded programming
Strong communication skills to be able to present insights and ideas, and excellent English, required to communicate with our global teams.
Bonus points if you have:
Experience in leading Algorithms projects or teams.
Experience in developing models using deep learning techniques and tools
Experience in developing software within a distributed computation framework
This position is open to all candidates.
 
Job ID: 8559783
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Data Scientist
Realize your potential by joining the leading performance-driven advertising company!
As a Senior Data Scientist, you'll play a vital role in turning algorithm prototypes into shippable products that will have a significant and immediate impact on the company's revenue.
How youll make an impact:
As a Senior Data Scientist, you'll bring value by:
Be responsible for the entire algorithmic lifecycle in the company: data analytics, prototyping of new ideas, implementing algorithmic models in a production environment, and then monitoring and maintaining them
Turn algorithm prototypes into shippable products that will have a significant and immediate impact on the company's revenue
Work on a daily basis with some of the hottest trends in today's job market: machine/deep learning, big data analytics/engineering, and cloud computing
Apply your scientific knowledge and creativity to analyze large volumes of diverse data and develop algorithmic solutions and models to solve complex problems
Influence directly on the way billions of people discover the internet
Work on projects such as Internet Personalization, Content Feed, Real Time Bidding, Video Recommendations and much more
Our tech stack:
Python, Java, TensorFlow, Spark, Kafka, Cassandra, HDFS, ElasticSearch, AirFlow, BigQuery, Google Cloud Platform, Kubernetes and Docker.
Requirements:
M.Sc. or PhD. in Computer Science, Mathematics, Engineering or a related field
Strong knowledge in Python
Good knowledge in Java, Scala or C++
Familiarity with statistical modeling techniques
5+ years of hands-on experience coding machine learning/statistical modeling based solutions
Experience in data analysis and visualization and strong knowledge in SQL
Possess strong problem solving and critical thinking skills
Bonus points if you have:
Experience in developing models using deep learning techniques and tools
Experience in developing software within a distributed computation framework.
This position is open to all candidates.
 
Job ID: 8559398
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Algo Data Engineer
Realize your potential by joining the leading performance-driven advertising company!
As a Senior Algo Data Engineer in the Infra group, you'll play a vital role in developing, enhancing, and maintaining highly scalable Machine-Learning infrastructures and tools.
About Algo platform:
The objective of the algo platform group is to own the existing algo platform (including health, stability, productivity and enablement), to facilitate and be involved in new platform experimentation within the algo craft and lead the platformization of the parts which should graduate into production scale. This includes support of ongoing ML projects while ensuring smooth operations and infrastructure reliability, owning a full set of capabilities, design and planning, implementation and production care.
The group has deep ties with both the algo craft as well as the infra group. The group reports to the infra department and has a dotted line reporting to the algo craft leadership.
The group serves as the professional authority when it comes to ML engineering and ML ops, serves as a focal point in a multidisciplinary team of algorithm researchers, product managers, and engineers and works with the most senior talent within the algo craft in order to achieve ML excellence.
How youll make an impact:
As a Senior Algo Data Engineer, you'll bring value by:
Develop, enhance and maintain highly scalable Machine-Learning infrastructures and tools, including CI/CD, monitoring, alerting, and more
Have end-to-end ownership: Design, develop, deploy, measure and maintain our machine learning platform, ensuring high availability, high scalability and efficient resource utilization
Identify and evaluate new technologies to improve performance, maintainability, and reliability of our machine learning systems
Work in tandem with the engineering-focused and algorithm-focused teams in order to improve our platform and optimize performance
Optimize machine learning systems to scale and utilize modern compute environments (e.g. distributed clusters, CPU and GPU) and continuously seek potential optimization opportunities.
Build and maintain tools for automation, deployment, monitoring, and operations.
Troubleshoot issues in our development, production and test environments
Influence directly on the way billions of people discover the internet
Our tech stack:
Java, Python, TensorFlow, Spark, Kafka, Cassandra, HDFS, vespa.ai, ElasticSearch, AirFlow, BigQuery, Google Cloud Platform, Kubernetes, Docker, git and Jenkins.
Requirements:
Experience developing large scale systems. Experience with filesystems, server architectures, distributed systems, SQL and No-SQL. Experience with Spark and Airflow / other orchestration platforms is a big plus.
Highly skilled in software engineering methods. 5+ years experience.
Passion for ML engineering and for creating and improving platforms
Experience with designing and supporting ML pipelines and models in production environment
Excellent coding skills - in Java & Python
Experience with TensorFlow - a big plus
Possess strong problem solving and critical thinking skills
BSc in Computer Science or related field.
Proven ability to work effectively and independently across multiple teams and beyond organizational boundaries
Deep understanding of strong Computer Science fundamentals: object-oriented design, data structures, systems and applications programming, and multithreaded programming
Strong communication skills to be able to present insights and ideas, and excellent English, required to communicate with our global teams.
Bonus points if you have:
Experience in leading Algorithms projects or teams.
Experience in developing models using deep learning techniques and tools
Experience in developing software within a distributed computation framework.
This position is open to all candidates.
23/02/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a hands-on, proactive, and impact-driven Senior Business Analyst to join our data team. In this role, you'll shape how the company measures success and help embed a strong data-driven culture across every domain, from business operations to product, finance, and beyond.

This is an opportunity for someone who thrives in dynamic, fast-changing environments, loves building from scratch, and is just as comfortable running deep analyses and leading cross-functional measurement discussions as they are partnering with data engineering to design scalable data models that empower the entire organization.

In this role you will:
Collaborate with cross-functional teams to understand business objectives and develop data-driven solutions
Lead measurement and metric discussions with diverse stakeholders, translating needs into clear, consistent, and actionable definitions
Conduct deep-dive analyses to uncover insights and deliver impactful recommendations
Partner with data engineering to design scalable data models and improve data accessibility
Build clear, intuitive dashboards and reports to track performance and enable self-serve analytics
Proactively monitor business metrics and provide timely insights to optimize operations and identify opportunities
Promote a data-driven culture by empowering teams to use data confidently in decision making
Mentor teammates and contribute to the growth and maturity of the data function
Requirements:
4+ years of experience in business analysis, product analytics, or data analysis, ideally in a fast-moving environment
Strong SQL skills and proficiency with BI tools (Metabase, Looker, Tableau, or similar); experience with data warehouse concepts and cloud platforms
Skilled at framing problems, defining metrics, and aligning teams to create a shared understanding of success
A hands-on, independent self-starter who can own projects end to end, from problem definition through delivery
Excellent communication, facilitation, and storytelling skills, able to translate complex data into actionable insights for technical and non-technical audiences
Strong problem-solving and critical-thinking skills, with the ability to bring clarity and drive decisions forward
This position is open to all candidates.
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Required: Head of Tech Data
About you
As our Head of Tech Data, you will be a senior member of the Tech Leadership team, responsible for defining and driving our Tech Data strategy across the organization.
This is a foundational leadership role. You will shape how data is consumed across our technology organization, from principles and operating model to tooling, metrics, and ways of working. You will assess our current state, identify the most meaningful gaps and risks, and define where data can create the greatest impact for our teams and products.
You will operate comfortably at both the strategic and execution levels, influencing leadership decisions, aligning senior stakeholders, and getting hands on when needed. You will lead the creation of standards, frameworks, and best practices that scale and have lasting impact, while working closely with Product, Design, Engineering, and the broader Data organization.
Job responsibilities
Lead and develop the Product Analytics team within Tech.
Define and own the Tech Data strategy, shaping how data supports decision making across the organization.
Act as the strategic and technical authority for Tech Data, assessing current practices and setting the operating model, infrastructure, standards, and roadmap.
Establish clear ownership and governance to ensure consistency, quality, and trust.
Partner closely with Engineering, Product, Design, and Data to embed data into every stage of the development lifecycle.
Drive innovation in Tech Data, including the use of AI to enable speed & scale.
Establish Tech Data metrics and reporting that give leadership visibility and support clear, actionable decisions.
Own the lifecycle of data products, from discovery and modeling through KPI definition, adoption, and maintenance.
Shape and lead company KPIs, ensuring consistency across dashboards & AI outputs.
Ensure data products are trusted, scalable & used to drive measurable impact.
Requirements:
10+ years of experience in Data Product, Product Analytics, or BI roles, with senior-level ownership of product domains and critical KPIs.
5+ years of proven experience leading, mentoring, and scaling high-performing data analytics teams and practices over time.
Data Culture & Ownership: Proven track record of driving culture changes around data ownership and self-service discovery.
Cross-Functional Influence: A strong communicator able to align stakeholders across Engineering, Product, and Design, ensuring data is embedded in every stage of the development lifecycle.
B2B SaaS expertise: Deep understanding of the metrics and lifecycles inherent in fast-growing SaaS environments.
Product Insights & Modeling: Hands-on experience with SQL, Pendo, Snowflake, Tableau and analytical models to drive business impact and product innovation.
AI & Innovation: Experience leveraging AI and LLMs for conversational analytics, with the ability to translate business logic into a semantic layer for AI-driven insights.
Strategic Thinking: Ability to define a "Tech Data Strategy" that aligns product roadmaps with data capabilities.
Results-oriented and accountable, with the ability to operate at both the strategic level and get "hands-on" when needed.
Governance & Trust: Experience establishing clear ownership and "Trust by Design" to ensure product data is consistent, high-quality, and scalable.
Fluent English & Hebrew proficiency (written and spoken).
It Will Be Great If You Have:
Experience in HR, Payroll, with an understanding of people data & compliance.
Exposure to large-scale SaaS architectures: microservices, APIs & integrations.
Background in hands-on software development.
This position is open to all candidates.
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We are looking for a Senior Data Engineer to join our Platform group in the Data Infrastructure team.
You'll work hands-on to design and deliver data pipelines, distributed storage, and streaming services that keep our data platform performant and reliable. As a senior individual contributor you will lead complex projects within the team, raise the bar on engineering best practices, and mentor mid-level engineers, while collaborating closely with product, DevOps, and analytics stakeholders.
About the Platform group
The Platform Group accelerates our productivity by providing developers with tools, frameworks, and infrastructure services. We design, build, and maintain critical production systems, ensuring our platform can scale reliably. We also introduce new engineering capabilities to enhance our development process. As part of this group, you'll help shape the technical foundation that supports our entire engineering team.
Job responsibilities
Code & ship production-grade services, pipelines and data models that meet performance, reliability and security goals
Lead design and delivery of team-level projects - from RFC through rollout and operational hand-off
Improve system observability, testing and incident response processes for the data stack
Partner with Staff Engineers and Tech Leads on architecture reviews and platform-wide standards
Mentor junior and mid-level engineers, fostering a culture of quality, ownership and continuous improvement
Stay current with evolving data-engineering tools and bring pragmatic innovations into the team.
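To make the pipeline responsibilities above concrete, here is a minimal, illustrative sketch of a validated batch step with a dead-letter path — the record fields, thresholds, and function names are hypothetical, not part of this team's actual codebase:

```python
# Hypothetical sketch of a validated pipeline step: extract -> validate -> load.
# Record schema and rules are illustrative only.

def validate(record):
    """Reject records with missing keys or non-positive amounts."""
    return (
        isinstance(record.get("user_id"), str)
        and isinstance(record.get("amount"), (int, float))
        and record["amount"] > 0
    )

def run_batch(records):
    """Split a batch into loadable rows and a dead-letter list for inspection."""
    good, dead_letter = [], []
    for rec in records:
        (good if validate(rec) else dead_letter).append(rec)
    return good, dead_letter

good, bad = run_batch([
    {"user_id": "u1", "amount": 9.5},
    {"user_id": "u2", "amount": -1},   # fails the amount rule
    {"amount": 3.0},                   # missing user_id
])
```

In a production setting the dead-letter list would feed the observability and incident-response tooling the role describes, rather than being silently dropped.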
Requirements:
5+ years of hands-on experience in backend or data engineering, including 2+ years at a senior level delivering production systems
Strong coding skills in Python, Kotlin, Java or Scala with emphasis on clean, testable, production-ready code
Proven track record designing, building and operating distributed data pipelines and storage (batch or streaming)
Deep experience with relational databases (PostgreSQL preferred) and working knowledge of at least one NoSQL or columnar/analytical store (e.g. SingleStore, ClickHouse, Redshift, BigQuery)
Solid hands-on experience with event-streaming platforms such as Apache Kafka
Familiarity with data-orchestration frameworks such as Airflow
Comfortable with modern CI/CD, observability and infrastructure-as-code practices in a cloud environment (AWS, GCP or Azure)
Ability to break down complex problems, communicate trade-offs clearly, and collaborate effectively with engineers and product partners
Bonus Skills:
Experience building data governance or security/compliance-aware data platforms
Familiarity with Kubernetes, Docker, and infrastructure-as-code tools
Experience with data quality frameworks, lineage, or metadata tooling.
This position is open to all candidates.
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a skilled Data Scientist to join our dynamic team. As a Data Scientist specializing in image analysis and text reading, you will play a pivotal role in developing and enhancing our AI-driven solutions for quick and accurate component inspection in electronic manufacturing.

Responsibilities
Develop and implement advanced algorithms for image analysis and text reading in the context of component inspection.
Collaborate closely with cross-functional teams to enhance existing models and drive continuous improvement.
Work with large-scale datasets, applying statistical and machine learning techniques to extract meaningful insights.
Stay updated with the latest advancements in the field of image analysis and text reading and apply them to enhance our solutions.
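As a flavor of the preprocessing common in this kind of inspection work, here is a tiny, illustrative binarization step of the sort that often precedes text reading; a real pipeline would use OpenCV or NumPy, and plain lists stand in here:

```python
# Illustrative-only sketch: threshold a tiny grayscale "image" to a binary
# mask before OCR-style text reading. Pixel values and threshold are made up.

def binarize(image, threshold=128):
    """Map each pixel to 255 (foreground) or 0 (background) around a fixed threshold."""
    return [[255 if px >= threshold else 0 for px in row] for row in image]

image = [
    [250, 12, 200],
    [130, 127, 90],
]
mask = binarize(image)
```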
Requirements:
Solid experience in data science, with a focus on image analysis and text reading.

Proficiency in programming languages such as Python and familiarity with relevant libraries (e.g., TensorFlow, PyTorch, OpenCV).

Strong understanding of statistical modeling, machine learning, and deep learning algorithms.

Experience working with large-scale datasets and applying data preprocessing techniques.

Excellent problem-solving skills and the ability to work independently as well as in a team environment.

Strong communication and presentation skills, with the ability to convey complex concepts to non-technical stakeholders.
This position is open to all candidates.
Location: Petah Tikva
Job Type: Full Time
We're looking for a highly skilled and motivated Senior Data Engineer to join the Resolve (formerly DevOcean) team at our company. In this role, you'll be responsible for designing, building, and optimizing the data infrastructure that powers our SaaS platform. You'll play a key role in shaping a cost-efficient and scalable data architecture while building robust data pipelines that serve analytics, search, and reporting needs across the organization.
You'll work closely with our backend, product, and analytics teams to ensure our data layer remains fast, reliable, and future-proof. This is an opportunity to influence the evolution of our data strategy and help scale a cybersecurity platform that processes millions of findings across complex customer environments.
Roles and Responsibilities:
Design, implement, and maintain data pipelines to support ingestion, transformation, and analytics workloads.
Collaborate with engineers to optimize MongoDB data models and identify opportunities for offloading workloads to analytical stores (ClickHouse, DuckDB, etc.).
Build scalable ETL/ELT workflows to consolidate and enrich data from multiple sources.
Develop data services and APIs that enable efficient querying and aggregation across large multi-tenant datasets.
Partner with backend and product teams to define data retention, indexing, and partitioning strategies to reduce cost and improve performance.
Ensure data quality, consistency, and observability through validation, monitoring, and automated testing.
Contribute to architectural discussions and help define the long-term data platform vision.
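The offloading idea above — moving heavy aggregations out of the operational store into an analytical one — can be sketched minimally as follows; sqlite3 stands in for ClickHouse or DuckDB, and the `findings` schema is entirely hypothetical:

```python
# Hedged sketch: run a per-tenant rollup in an analytical store instead of
# the operational MongoDB. sqlite3 is only a stand-in for ClickHouse/DuckDB.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE findings (tenant TEXT, severity TEXT, cnt INT)")
conn.executemany(
    "INSERT INTO findings VALUES (?, ?, ?)",
    [("acme", "high", 3), ("acme", "low", 7), ("globex", "high", 2)],
)
# The aggregation that would be expensive to serve from the operational DB.
rows = conn.execute(
    "SELECT tenant, SUM(cnt) FROM findings GROUP BY tenant ORDER BY tenant"
).fetchall()
```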
Requirements:
8+ years of experience as a Data Engineer or Backend Engineer working in a SaaS or data-intensive environment.
Strong proficiency in Python and experience with data processing frameworks (e.g., Pandas, PySpark, Airflow, or equivalent).
Deep understanding of data modeling and query optimization in NoSQL and SQL databases (MongoDB, PostgreSQL, etc.).
Hands-on experience building ETL/ELT pipelines and integrating multiple data sources.
Familiarity with OTF technologies and analytical databases such as ClickHouse and DuckDB, and their role in cost-efficient analytics.
Experience working in cloud environments (AWS preferred) and using native data services (e.g., Lambda, S3, Glue, Athena).
Strong understanding of data performance, storage optimization, and scalability best practices.
Nice to Have:
Experience with streaming or CDC pipelines (e.g., Kafka, Debezium).
Familiarity with cloud security best practices and data governance.
Exposure to multi-tenant SaaS architectures and large-scale telemetry data.
This position is open to all candidates.
22/02/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
As a Senior Data Engineer, you will be helping us design and build a flexible and scalable system that will allow our business to move fast and innovate. You will be expected to show ownership and responsibility for the code you write, but it doesn't stop there: you are encouraged to think big and help out in other areas as well.

Responsibilities:

Designing and writing code that is critical for business growth
Mastering scalability and enterprise-grade SaaS product implementation
Sense of ownership - leading design for new products and initiatives as well as integrating with currently implemented best-practices
Building and owning production-grade ETL/ELT pipelines that power analytics, ML training, and real-time AI systems
Designing data architectures that support agentic systems, including:
Embeddings and vector-based retrieval
RAG pipelines
Feedback loops and continuous improvement
Review your peers' designs and code
Work closely with product managers, peer engineers, and business stakeholders
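The embeddings-and-retrieval piece of the RAG architectures listed above can be sketched in miniature; real systems use an embedding model and a vector database, so the hand-made vectors, document store, and `retrieve` helper below are purely hypothetical:

```python
# Minimal, illustrative sketch of the vector-retrieval step in a RAG pipeline.
# Vectors are hand-made stand-ins for model-produced embeddings.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical document store: id -> (embedding, text).
docs = {
    "d1": ([1.0, 0.0, 0.2], "billing FAQ"),
    "d2": ([0.1, 1.0, 0.0], "onboarding guide"),
}

def retrieve(query_vec, k=1):
    """Return the texts of the k documents most similar to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d][0]), reverse=True)
    return [docs[d][1] for d in ranked[:k]]

context = retrieve([0.9, 0.1, 0.1])
```

The retrieved `context` would then be injected into an LLM prompt, which is where frameworks like LangChain or LangGraph (named in the requirements) typically take over.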
Requirements:
5+ years of hands-on experience as a Software Engineer with strong Python skills (TypeScript / Node.js is a plus)
Hands-on experience managing infrastructure on major cloud vendors (AWS, GCP, Azure)
Proficiency with SQL, modeling and working with relational and non-relational databases, and pushing them past their limits
Hands-on experience designing and implementing ML-aware data pipelines (Spark, Airflow), distributed systems, and RESTful APIs
Experience or strong interest in LLMs and agentic systems, including:
Agentic workflows (LangChain, LangGraph)
RAG patterns, Vector databases and embeddings
Evaluating and monitoring AI-driven systems (Langfuse, LangSmith)
Familiarity with ML & AI tooling, such as:
Feature stores, training pipelines, or model-serving data flows
ML platforms (MLflow, SageMaker, Vertex, etc.)
Experience with CI/CD, Docker, and Kubernetes
The ability to lead new features from design to implementation, taking into consideration topics such as performance, scalability, and impact on the greater system
Comfortable operating in a fast-moving startup with high ownership and low process
Enjoy communicating and collaborating, sharing your ideas and being open to honest feedback
This position is open to all candidates.