26/02/2026
Location: Jerusalem
Job Type: Full Time
We're looking for a Data Engineer with 4+ years of experience to join our Data Engineering team and help us build and scale our production-grade data platform. You'll work on high-performance systems built on self-hosted ClickHouse, optimize complex data pipelines, and collaborate closely with Product, Analytics, and Infrastructure teams to deliver reliable, fast, and scalable data solutions.

This is a hands-on technical role where you'll have a significant impact on how we ingest, model, store, and serve data that powers our analytics and AI-driven products.
You'll play a key role in shaping the direction of our data platform and have meaningful ownership over critical components of our architecture.

What You'll Do:
Data Modeling & Architecture
Design and evolve data models that reflect business logic and support analytical use cases
Collaborate with the BI and Analytics teams to understand data requirements and translate them into efficient schemas
Performance Optimization
Optimize ClickHouse schemas, partitioning strategies, indexing, and compression
Profile and tune slow queries to improve performance and reduce costs
Implement systems that ensure data quality, consistency, and operational efficiency (e.g., deduplication, validation, anomaly detection)
Monitor pipeline health, data freshness, and query performance with appropriate alerting mechanisms
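The data-quality and freshness checks above can be sketched in a few lines. Everything here, the function names and the 15-minute lag threshold included, is illustrative, not part of the actual platform:

```python
from datetime import datetime, timedelta

def dedupe(rows, key):
    """Keep the last-seen row per key (a common last-write-wins dedup rule)."""
    latest = {}
    for row in rows:
        latest[row[key]] = row
    return list(latest.values())

def is_stale(last_event_time, now, max_lag=timedelta(minutes=15)):
    """Flag the pipeline if the newest event is older than the allowed lag."""
    return now - last_event_time > max_lag

rows = [
    {"id": 1, "v": "a"},
    {"id": 2, "v": "b"},
    {"id": 1, "v": "a2"},  # duplicate key: the later row wins
]
deduped = dedupe(rows, "id")

now = datetime(2026, 2, 26, 12, 0)
stale = is_stale(datetime(2026, 2, 26, 11, 30), now)  # 30 min behind -> stale
```

In a real deployment, checks like these would feed an alerting system rather than return booleans to a caller.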
SQL Compiler Development
Develop and maintain the SQL Compiler layer that translates high-level queries into optimized ClickHouse execution plans
Implement query optimization and rewriting strategies to improve performance
Debug and resolve compiler issues to ensure accurate and efficient query translation
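One classic rewriting strategy a compiler layer like this might apply is collapsing a chain of equality predicates on one column into an IN list, which columnar engines typically evaluate more cheaply. The predicate representation below is invented purely for illustration:

```python
def rewrite_or_to_in(pred):
    """Rewrite ('or', [('eq', col, v1), ('eq', col, v2), ...])
    into ('in', col, [v1, v2, ...]) when all branches test the same column.
    Any other predicate shape is returned unchanged."""
    if pred[0] != "or":
        return pred
    branches = pred[1]
    if not all(leaf[0] == "eq" for leaf in branches):
        return pred
    cols = {leaf[1] for leaf in branches}
    if len(cols) != 1:
        return pred  # mixed columns: leave the OR alone
    return ("in", cols.pop(), [leaf[2] for leaf in branches])

before = ("or", [("eq", "country", "IL"), ("eq", "country", "US")])
after = rewrite_or_to_in(before)   # ("in", "country", ["IL", "US"])
```

A real compiler would apply many such rules over a proper AST, with cost estimates deciding when each rewrite pays off.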

Data Pipeline Development & Collaboration
Review and advise the Integration team on pipeline architecture, performance, and best practices.
Provide guidance on data modeling, schema design, and optimization for new data sources.
Troubleshoot and maintain existing pipelines when issues arise or optimization is needed
Ensure data freshness, reliability, and quality across all ingestion pipelines.
Collaboration & Support
Work closely with the Integration team to ensure smooth data ingestion from new sources.
Partner with Infrastructure to support high availability and disaster recovery
Support other teams across the company in accessing and using data effectively.
Requirements:
Excellent communication and collaboration skills
English at a high level, written and spoken - required
Ability to work from our Jerusalem office (located in the Central Bus Station next to the train) 2 times a week (Monday & Wednesday) is required
Strong attention to detail, ownership mentality, and ability to work independently
Quick learner who can dive into new codebases, technologies, and systems independently
Hands-on mentality - not afraid to roll up your sleeves, dig into unfamiliar code, and work across the stack (including backend when needed)
4+ years of experience as a Data Engineer
Strong problem-solving skills for complex data challenges at scale - ability to debug performance issues, data inconsistencies, and system bottlenecks in high-volume environments
Experience with data modeling and schema design for analytical workloads
Strong proficiency in SQL and experience with complex analytical queries
Hands-on experience building and maintaining data pipelines (ETL/ELT)
Ability to troubleshoot and optimize systems handling large data volumes (millions+ rows, complex queries, high throughput)
Knowledge of query optimization techniques and execution planning
Familiarity with columnar databases (ClickHouse, BigQuery, Redshift, Snowflake, or similar). Columnar DB experience is a big plus.
This position is open to all candidates.
 
Job ID: 8563430
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking an experienced Data Team Lead to build, lead, and scale our data function. This is a hands-on leadership role that combines strategic thinking, technical excellence, and people management. You will build and manage a team of Data Engineers and Data Analysts, while working closely with senior management and cross-functional leaders to translate data into measurable business impact.
This role is ideal for someone who thrives in a fast-paced startup environment, enjoys influencing decision-making at the highest levels, and is passionate about embedding a data-driven mindset across the organization.
What You'll Do:
Lead and mentor the Data team and support their professional growth, while achieving the impact goals of the marketing data structure, such as a multi-touch attribution model, creative analytics, and automated data-governance processes.
Define and own the data vision, roadmap, infrastructure, and best practices aligned with company objectives.
Act as a trusted data partner to executive leadership and senior stakeholders, influencing strategy through insights and analysis while promoting a strong data-driven culture.
Oversee the design, build, and maintenance of a scalable and reliable data warehouse.
Lead the design, implementation, and optimization of ETL/ELT pipelines to integrate data from multiple internal and external sources.
Ensure high standards for data quality, governance, integrity, and documentation.
Translate complex datasets into clear, actionable insights that support the different departments' needs and the company growth through operational excellence.
Support advanced analytics use cases including forecasting, experimentation (A/B testing), and performance measurement.
Review and elevate dashboards, reports, and analyses to ensure clarity, accuracy, and executive relevance.
Identify gaps in data collection and proactively propose solutions to improve visibility and decision-making.
Requirements:
Bachelor's or Master's degree in Analytics, Statistics, Mathematics, Data Science, Computer Science, or a related field - preferred
5+ years of experience in data analytics or data engineering leadership roles, preferably in a SaaS or technology-driven environment.
Experience building data infrastructure, standards, and best practices from the ground up.
Previous experience leading projects and mentoring or managing team members - a must.
High attention to detail and commitment to data accuracy and quality.
Proactive, collaborative, and comfortable operating in a fast-growing startup environment.
Technical Skills
Strong proficiency in SQL and Python for data manipulation and analysis.
Hands-on experience with ETL tools and frameworks (e.g., Apache Airflow, dbt or similar).
Solid understanding of data warehousing concepts and platforms (e.g., Snowflake, Redshift, BigQuery).
Experience with data visualization tools such as Tableau, Power BI, or Looker.
Familiarity with marketing and product analytics tools: user-acquisition ad manager APIs, and Salesforce and HubSpot data and integrations.
Strong understanding of SaaS metrics including ARR, churn, LTV, CAC, and cohort analysis.
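As a quick illustration of two of those metrics, churn and LTV can be computed under the usual simplifying assumptions (constant churn, constant ARPU); real models are cohort-based and considerably more nuanced:

```python
def churn_rate(customers_start, customers_lost):
    """Fraction of customers lost over the period."""
    return customers_lost / customers_start

def simple_ltv(arpu_monthly, monthly_churn):
    """Textbook LTV: expected lifetime is 1 / churn, so LTV = ARPU / churn."""
    return arpu_monthly / monthly_churn

churn = churn_rate(200, 10)     # 10 of 200 lost -> 0.05 (5% monthly churn)
ltv = simple_ltv(50.0, churn)   # $50 ARPU / 0.05 = $1000 expected LTV
```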
This position is open to all candidates.
 
Job ID: 8563392
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an Experienced Data Engineer to join our marketing team and take end-to-end ownership of our data platform and production data pipelines. In this role, you will be responsible for building robust, scalable, and observable data systems that power analytics, reporting, and downstream business use cases. You will work deeply hands-on with data infrastructure, modeling, and orchestration, and act as a key technical partner to the Marketing, Sales, Product, Business, and Finance teams.
This role suits someone who enjoys working close to the metal, designing systems that scale, and solving ambiguous data problems in a dynamic startup environment. You will play a critical role in shaping how data flows through the company, setting engineering standards, and ensuring data is trustworthy, performant, and ready for growth.
What You'll Do:
Design, build, and maintain scalable, reliable data pipelines and data warehouse architectures to support analytics and business intelligence needs.
Own the end-to-end ETL/ELT processes - ingesting data from internal and external sources, transforming it, and making it analytics-ready.
Model and optimize data structures (fact tables, dimensions, semantic layers) to support performant querying and reporting.
Ensure high standards of data quality, integrity, observability, and reliability across all data assets.
Partner closely with Analytics, Product, Marketing, and Finance teams to understand data requirements and deliver robust data solutions.
Implement monitoring, alerting, and testing frameworks to proactively identify data issues.
Optimize warehouse performance and cost efficiency (query optimization, partitioning, clustering, etc.).
Identify gaps in data collection and work with engineering teams to improve instrumentation and data availability.
Support experimentation and analytics use cases by enabling clean, trustworthy datasets for A/B testing and analysis.
Document data models, pipelines, and best practices to support scale and knowledge sharing.
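The A/B-testing support mentioned above usually boils down to a significance test on conversion counts. Below is a minimal two-sided two-proportion z-test using only the standard library; a real stack would lean on scipy or an experimentation platform:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, expressed via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 10.0% vs 13.5% conversion on 1000 users each.
z, p = two_proportion_z(100, 1000, 135, 1000)
```

With these numbers the lift is significant at the usual 5% level; the point of owning this in the platform is making such checks routine and trustworthy.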
Requirements:
Bachelor's or Master's degree in Computer Science, Data Engineering, Software Engineering, or a related technical field.
3-5 years of hands-on experience as a Data Engineer, preferably in a SaaS or technology-driven environment.
Strong experience designing and maintaining data warehouses (e.g., Snowflake, BigQuery, Redshift).
Proven expertise with ETL/ELT tools and frameworks (e.g., Airflow, dbt, Talend, SSIS, Informatica, or similar).
Advanced SQL skills and solid proficiency in Python (or similar languages) for data processing and orchestration.
Strong understanding of data modeling, warehousing best practices, and analytics engineering concepts.
Experience integrating data from business systems such as Salesforce, HubSpot, or other SaaS platforms.
Familiarity with SaaS metrics and business concepts (ARR, churn, LTV, CAC) - from a data modeling perspective.
Experience supporting BI tools and analytics consumers (Tableau, Looker, Power BI, etc.).
Strong problem-solving skills, attention to detail, and a passion for building reliable data foundations.
Excellent communication skills and the ability to collaborate across technical and non-technical teams.
This position is open to all candidates.
 
Job ID: 8563348
26/02/2026
Location: Or Yehuda
Job Type: Full Time
We're looking for an experienced Data & BI Manager to lead and shape our data domain as part of our IT Applications group.

We are embarking on a major data transformation journey, centered around the implementation of Microsoft Fabric end-to-end. This is not a maintenance role - it's a build, lead, and scale opportunity.

You will play a critical role in defining our data vision, driving architecture decisions, leading a large-scale BI transformation, and gradually building and managing an internal Data & BI team. This role sits at the intersection of technology, business, and leadership, with direct impact on decision-making across the organization.

Your mission:

Leadership & Ownership:
Own and lead the Data & BI domain within IT Applications.
Lead and professionally manage the Data & BI team (team build-up over time).
Take a central role in the organization-wide transformation to Microsoft Fabric.

Strategy, Architecture & Innovation:
Define the long-term Data & BI strategy, including technologies, platforms, and methodologies (e.g. Data Governance, Data Mesh).
Design, build, and maintain the company's data architecture:
Data Warehouse, Data Lake, Data Marts, ETL/ELT pipelines.
Lead the adoption of advanced analytics capabilities, including AI / GenAI, to generate insights and optimize business processes.
Enable self-service BI by implementing tools and best practices that empower analysts and business users.

Delivery & Business Partnership:
Lead the full lifecycle of Data & BI solutions - from requirements gathering to production.
Own data quality, data integrity, and information security across all data layers.
Work closely with senior management and business leaders to understand needs, translate them into technical solutions, and clearly communicate insights and outcomes.
Requirements:
3+ years of experience managing BI/Data teams or leading large-scale data projects.

5+ years of hands-on experience in Data Warehouse / BI / Data Engineering.

Experience working with cloud platforms (Azure / AWS / GCP); strong advantage for Microsoft Fabric.

Strong understanding of SQL, Data Lake architectures, ETL/ELT processes, and BI tools (Power BI, Qlik, or similar).

Experience with Big Data, Data Science, or Real-Time Analytics - an advantage.

A true leader: strategic thinker, strong communicator, able to explain complex ideas to different audiences and drive people forward.
This position is open to all candidates.
 
Job ID: 8563036
Location: Merkaz
Job Type: Full Time
You'll work in an awesome environment alongside some of the most innovative people in the industry, using cutting-edge technologies and tools (video editing, Gen AI, data, etc.). At our company, you have the opportunity to directly influence the products and tools used by our clients, including sports giants such as the NBA, Bundesliga, LaLiga, ESPN - and that's just the beginning of what our company has to offer! Join us and be a part of the best team in tech as we Fuel the Fandom worldwide.
What you'll do:
Design and develop scalable, efficient services, APIs, and automations as the founder of this area
Build data pipelines, ETL processes, and integrations that power products and internal tools
Own features end-to-end: design, implementation, CI/CD, monitoring, and observability
Develop and optimize data models, ensuring high-performance data infrastructure
Build and schedule workflows with a Python orchestrator and integrate with data platforms
Optimize performance, reliability, and security across services
Write high-quality, maintainable code in a rich data environment
Collaborate with Product, R&D, Operations, and Business to deliver impactful data-driven solutions
Work with SaaS environments and containerized deployments.
Requirements:
BSc in Computer Science or an equivalent background
5+ years of professional experience as a Python Developer / Software Engineer
Strong proficiency in Python; experience with frameworks such as FastAPI, Django, or Flask
Solid software engineering fundamentals: OOP, design patterns, testing, debugging
Strong SQL skills and experience with analytical databases (Snowflake or similar)
Experience with REST APIs and asynchronous programming
Proficiency with Git and CI/CD practices
Experience with Docker and/or Kubernetes
Experience with Airflow - advantage
Excellent communication and collaboration skills with the ability to work across multiple stakeholders and business units
Node.js experience - an advantage.
This position is open to all candidates.
 
Job ID: 8562977
25/02/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
At our company, our talented team, driven by passion, expertise, and innovative minds, inspires us daily. We are not just dreamers, we are dream-makers.
The Responsibilities
Conduct hands-on research and development of cutting-edge models
Take research projects from ideation to deployment and production
Work closely with domain experts and the engineering department to improve and create new innovative solutions.
Help define the AI product roadmap.
Requirements:
5+ years as a Data Scientist with proven production experience
Experience in conducting applied research
Collaborative teammate with great communication skills
Strong programming skills in Python
Strong problem-solving skills
Curiosity and the passion to learn about new technologies and challenges
Advantageous: Familiarity with modern software engineering practices and tools
Advantageous: cyber security background.
This position is open to all candidates.
 
Job ID: 8561488
25/02/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
It starts with you - an engineer driven to build modern, real-time data platforms that help teams move faster with trust. You care about great service, performance, and cost. You'll architect and ship a top-of-the-line open streaming data lake/lakehouse and data stack, turning massive threat signals into intuitive, self-serve data and fast retrieval for humans and AI agents - powering a unified foundation for AI-driven mission-critical workflows across cloud and on-prem.
If you want to make a meaningful impact, join our companys mission and build best-in-class data systems that move the world forward - this role is for you.
The Responsibilities
Build self-serve platform surfaces (APIs, specs, CLI/UI) for streaming and batch pipelines with correctness, safe replay/backfills, and CDC.
Run the open data lake/lakehouse across cloud and on-prem; enable schema evolution and time travel; tune partitioning and compaction to balance latency, freshness, and cost.
Provide serving and storage across real-time OLAP, OLTP, document engines, and vector databases.
Own the data layer for AI - trusted datasets for training and inference, feature and embedding storage, RAG-ready collections, and foundational building blocks that accelerate AI development across the organization.
Enable AI-native capabilities - support agentic pipelines, self-tuning processes, and secure sandboxing for model experimentation and deployment.
Make catalog, lineage, observability, and governance first-class - with clear ownership, freshness SLAs, and access controls.
Improve performance and cost by tuning runtimes and I/O, profiling bottlenecks, planning capacity, and keeping spend predictable.
Ship paved-road tooling - shared libraries, templates, CI/CD, IaC, and runbooks - while collaborating across AI, ML, Data Science, Engineering, Product, and DevOps. Own architecture, documentation, and operations end-to-end.
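The "time travel" that table formats like Iceberg, Delta, and Hudi provide can be illustrated with a toy model: every commit creates an immutable snapshot, and readers can scan the table as of any earlier snapshot id. This sketch shows only the reader-facing semantics, not the metadata and data-file layout real formats use:

```python
import copy

class Table:
    """Toy append-only table with snapshot-based time travel."""

    def __init__(self):
        self.snapshots = []  # each entry is an immutable list of rows

    def commit(self, rows):
        """Append rows as a new snapshot; returns the snapshot id."""
        prev = self.snapshots[-1] if self.snapshots else []
        self.snapshots.append(copy.deepcopy(prev) + list(rows))
        return len(self.snapshots) - 1

    def scan(self, as_of=None):
        """Read the latest snapshot, or the table as of an older snapshot id."""
        if not self.snapshots:
            return []
        idx = len(self.snapshots) - 1 if as_of is None else as_of
        return self.snapshots[idx]

t = Table()
s0 = t.commit([{"id": 1}])
s1 = t.commit([{"id": 2}])
latest = t.scan()             # sees both rows
historical = t.scan(as_of=s0)  # sees only the first commit
```

Real formats get the same semantics without copying data: each snapshot is a manifest pointing at immutable files, which is also what makes compaction and schema evolution tractable.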
Requirements:
6+ years in software engineering, data engineering, platform engineering, or distributed systems, with hands-on experience building and operating data infrastructure at scale.
Streaming & ingestion - Technologies like Flink, Structured Streaming, Kafka, Debezium, Spark, dbt, Airflow/Dagster
Open data lake/lakehouse - Table formats like Iceberg, Delta, or Hudi; columnar formats; partitioning, compaction, schema evolution, time-travel
Serving & retrieval - OLAP engines like ClickHouse or Trino; vector databases like Milvus, Qdrant, or LanceDB; low-latency stores like Redis, ScyllaDB, or DynamoDB
Databases - OLTP systems like Postgres or MySQL; document/search engines like MongoDB or ElasticSearch; serialization with Avro/Protobuf; warehouse patterns
Platform & infra - Kubernetes, AWS, Terraform or similar IaC, CI/CD, observability, incident response
Performance & cost - JVM tuning, query optimization, capacity planning, compute/storage cost modeling
Engineering craft - Java/Scala/Python, testing, secure coding, AI coding tools like Cursor, Claude Code, or Copilot.
This position is open to all candidates.
 
Job ID: 8561478
25/02/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
At our company, our talented team, driven by passion, expertise, and innovative minds, inspires us daily. We are not just dreamers, we are dream-makers.
The Responsibilities
Conduct hands-on research and development of state-of-the-art models, algorithms, and systems.
Lead research projects end-to-end from problem formulation, ideation, and experimental design to prototyping and transition into production.
Explore new methodologies and develop creative approaches to solve complex challenges.
Identify ambitious research directions and invent novel approaches to solve complex, high-impact problems.
Shape the research roadmap and influence long-term technical strategy across the organization.
Requirements:
5+ years as a Data Scientist with proven production-level impact.
Master's degree in Computer Science, Mathematics, Statistics, or Engineering (PhD is a plus but not required).
Experience conducting applied research and working with Transformers, open-source LLMs, or other advanced deep learning architectures.
Strong programming skills in Python and familiarity with modern ML tooling and frameworks.
Excellent problem-solving abilities and a strong experimental mindset.
Effective collaborator with strong communication skills.
High curiosity and a passion for learning new technologies, methods, and domains.
This position is open to all candidates.
 
Job ID: 8561459
25/02/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
At our company, we redefine the vision of cyber defense by combining AI and human expertise to create products that protect nations and critical infrastructure. This is more than a job; it's a dream job: we tackle real-world challenges, redefine AI and security, and make the digital world safer. Let's build something extraordinary together.
Our company's AI cybersecurity platform applies a new, out-of-the-ordinary, multi-layered approach, covering endless and evolving security challenges across the entire infrastructure of the most critical and sensitive networks. Central to our company's proprietary Cyber Language Models are innovative technologies that provide contextual intelligence for the future of cybersecurity.
At our company, our talented team, driven by passion, expertise, and innovative minds, inspires us daily. We are not just dreamers, we are dream-makers.
The Responsibilities
Conduct cutting-edge research in natural language processing and large language models.
Design, train, and optimize large-scale neural network models for advanced applications.
Transition research projects from ideation through deployment and scaling.
Collaborate closely with cross-functional teams, including domain experts, product managers, and engineers, to deliver impactful AI solutions.
Define and contribute to the AI and NLP product roadmap.
Requirements:
M.Sc. in Computer Science, Data Science, Artificial Intelligence, Machine Learning, or a related field.
5+ years of experience in applied AI research.
Strong programming skills, particularly in Python and ML frameworks (e.g., TensorFlow, PyTorch).
Solid understanding of NLP.
Experience with modern Large Language Models (LLM) and generative models.
Proven expertise in designing, implementing, and evaluating deep learning models in a production environment.
Preferred Qualifications
Experience with training Large Generative Language Models (LLM).
Knowledge of distributed computing and infrastructure for training large models.
Interest in exploring novel architectures for LLMs.
This position is open to all candidates.
 
Job ID: 8561394
25/02/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Senior Data Scientist who will join our growing Detection group. You'll be a significant part of the development of our state-of-the-art anomaly detection models to find and protect against nation-sponsored cyber-attacks. In this role, you will work with product, engineering, and cyber teams to train, evaluate, and deploy anomaly detection models on a massive scale.
The Responsibilities
Analyze, transform and clean large, complex data sets from various sources to ensure data quality and integrity for analysis.
Conduct hands-on research and development of state-of-the-art models and algorithms.
Extract relevant features from structured and unstructured data sources, design and engineer new features and feature selection methodologies to enhance model performance.
Build, train, and optimize machine learning models using state-of-the-art techniques, and evaluate model performance using appropriate metrics.
Lead research projects end-to-end from problem formulation, ideation, and experimental design to prototyping and transition into production.
Explore new methodologies and develop creative approaches to solve complex challenges.
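A baseline version of the anomaly detection described above is a simple z-score filter over a metric. Production detection at this scale relies on learned models; the data and thresholds below are illustrative only:

```python
import math

def zscore_anomalies(values, threshold=3.0):
    """Return values whose distance from the mean exceeds `threshold` std devs."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = math.sqrt(var)
    if std == 0:
        return []  # constant series: nothing can be anomalous
    return [v for v in values if abs(v - mean) / std > threshold]

data = [10, 11, 9, 10, 12, 10, 11, 9, 10, 200]  # one obvious outlier
# With only 10 points the maximum attainable z-score is small,
# so we lower the threshold for this toy sample.
outliers = zscore_anomalies(data, threshold=2.5)
```

The feature-engineering bullets above are what make even simple detectors like this useful: the model is only as good as the signals fed into it.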
Requirements:
5+ years as a Data Scientist with proven production-level impact.
Master's degree in Computer Science, Mathematics, or Engineering with a focus on machine learning.
Proven track record designing and training anomaly-based models for large datasets.
Strong programming skills in Python and familiarity with modern ML tooling and frameworks.
Experience conducting applied research and working with transformers, open-source LLMs, or other advanced deep learning architectures.
Demonstrated ability to work effectively in cross-functional teams, collaborate with colleagues, and contribute to a positive work environment.
Excellent problem-solving abilities and a strong experimental mindset.
Effective collaborator with strong communication skills.
Curiosity and a passion for learning new technologies, methods, and domains.
Background in the cyber security domain - an advantage.
This position is open to all candidates.
 
Job ID: 8561201
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Data Engineer II - GenAI
20718
Leadership/Team Quote:
This opening is for the Content Intelligence team within the Marketplace AI department.
The Content Intelligence team is at the forefront of Generative AI innovation, driving solutions for travel-related chatbots, text generation and summarization applications, Q&A systems, and free-text search. Beyond this, the team is building a cutting-edge platform that processes millions of images and textual inputs daily, enriching them with ML capabilities. These enriched datasets power downstream applications, helping personalize the customer experience-for example, selecting and displaying the most relevant images and reviews as customers plan and book their next vacation.
Role Description:
As a Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects - ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.
Youll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models and improve our products.
Key Job Responsibilities and Duties:
Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
Dealing with massive textual sources to train GenAI foundation models.
Solving issues with data and data pipelines, prioritizing based on customer impact.
End-to-end ownership of data quality in our core datasets and data pipelines.
Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
Providing tools that improve Data Quality company-wide, specifically for ML scientists.
Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
Acting as an intermediary for problems, with both technical and non-technical audiences.
Promote and drive impactful and innovative engineering solutions
Technical, behavioral and interpersonal competence advancement via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation
Collaborate with multidisciplinary teams: Collaborate with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions. Provide technical guidance and mentorship to junior team members.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 3 years of experience as a Data Engineer or a similar role, with a consistent record of successfully delivering ML/Data solutions.
You have built production data pipelines in the cloud, setting up data-lake and serverless solutions; you have hands-on experience with schema design and data modeling, and with working alongside ML scientists and ML engineers to deliver production-level ML solutions.
You have experience designing systems end-to-end and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.)
Strong programming skills in languages such as Python and Java.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.
Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems.
Experience with Data Warehousing and ETL/ELT pipelines
Experience in data processing for large-scale language models like GPT, BERT, or similar architectures - an advantage.
This position is open to all candidates.
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Data Engineer I - GenAI Foundation Models
21679
Leadership/Team Quote:
This opening is for the Content Intelligence team within the Marketplace AI department.
The Content Intelligence team is at the forefront of Generative AI innovation, driving solutions for travel-related chatbots, text generation and summarization applications, Q&A systems, and free-text search. Beyond this, the team is building a cutting-edge platform that processes millions of images and textual inputs daily, enriching them with ML capabilities. These enriched datasets power downstream applications, helping personalize the customer experience; for example, selecting and displaying the most relevant images and reviews as customers plan and book their next vacation.
Role Description:
As a Senior Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects: ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.
You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.
Key Job Responsibilities and Duties:
Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
Dealing with massive textual sources to train GenAI foundation models.
Solving issues with data and data pipelines, prioritizing based on customer impact.
End-to-end ownership of data quality in our core datasets and data pipelines.
Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
Providing tools that improve Data Quality company-wide, specifically for ML scientists.
Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
Acting as an intermediary on problems for both technical and non-technical audiences.
Promoting and driving impactful, innovative engineering solutions.
Advancing technical, behavioral, and interpersonal competence via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation.
Collaborating with multidisciplinary teams: working with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions, and providing technical guidance and mentorship to junior team members.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 6 years of experience as a Data Engineer or in a similar role, with a consistent record of successfully delivering ML/Data solutions.
You have built production data pipelines in the cloud, setting up data-lake and serverless solutions; you have hands-on experience with schema design and data modeling, and have worked with ML scientists and ML engineers to deliver production-level ML solutions.
You have experience designing systems end to end and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.).
Strong programming skills in languages such as Python and Java.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.
Demonstrable experience with MySQL, Cassandra, DynamoDB, or similar relational/NoSQL database systems.
Experience with Data Warehousing and ETL/ELT pipelines.
This position is open to all candidates.
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Machine Learning Scientist I - GenAI Applications
26992
About the team:
This opening is for the GenAI Applications Team within the Data & AI Marketplace department.
The GenAI Applications team is responsible for designing and delivering agentic, ML-powered solutions for some of our most impactful products, including booking search experiences, trip planning, and trip helpfulness. The team builds AI-driven applications and conversational agents, such as chatbots and intelligent assistants, that significantly enhance the end-to-end customer experience.
Role Description:
As a Senior Machine Learning Scientist, you will work closely with engineers to design, develop, and evaluate machine learning solutions for scalable, customer-facing GenAI applications. Your work will focus on researching, training, fine-tuning, and rigorously evaluating models leveraging LLMs, recommendation systems, and agent-based architectures, using state-of-the-art techniques. You will drive experimentation, define success metrics, and translate insights into impactful AI solutions that shape the future of intelligent travel products.
Key Job Responsibilities and Duties:
Explore and apply state-of-the-art techniques in multimodal machine learning.
Train innovative ML models (NLP, CV, LLM fine-tuning) and build algorithms and engineering approaches to drive business impact.
Coding skills: ensure implementation of reusable frameworks (clean, scalable code).
Conduct data analysis with detailed metrics to evaluate model performance, label quality, and feature exploration.
Work closely with machine learning engineers to ensure the model's latency/throughput meets product requirements and ensure deployment of your model to production.
Collaborate with multidisciplinary teams: Collaborate with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions.
Requirements:
Advanced knowledge and experience in Computer Vision and Natural Language Processing, and in the engineering aspects of developing ML and Generative AI models at scale.
Experience designing and executing end-to-end research and development plans and generating impact through large-scale machine learning model development, preferably evidenced by peer-reviewed publications, patents, open-source code, or the like.
Relevant work or academic experience (MSc + 6 years of working experience, or PhD + 4 years of working experience) involving the application of Machine Learning to business problems.
Master's degree, PhD, or equivalent experience in a quantitative field (e.g., Computer Science, Engineering, Mathematics, Artificial Intelligence, Physics).
Experience across multiple machine learning facets: working with large data sets, model development, statistics, experimentation, data visualization, optimization, software development.
Experience collaborating cross-functionally in the development of machine learning products (e.g., with developers, UX specialists, Product Managers).
Strong working knowledge of Python, Java, Kafka, Hadoop, SQL, and Spark or similar technologies. Working experience with version control systems.
Excellent English communication skills, both written and verbal.
Successfully driving technical, business, and people-related initiatives that improve productivity, performance, and quality, while communicating with stakeholders at all levels.
Leading by example, gaining respect through actions rather than title; developing your team and motivating them to achieve their goals; providing timely feedback and managing your team's key performance indicators.
This position is open to all candidates.
Location: Netanya
Job Type: Full Time
We are looking for a Senior Data Scientist to join a team of passionate engineers who build premium solutions at scale. Every day, we tackle ambiguous and stimulating challenges. We build and operate large-scale distributed machine learning systems that process 1B predictions per second during real-time auctions.
What will you do?
As a Senior Data Scientist, your mission will be to:
Improve existing algorithms and tools for exploring and analyzing ever-growing volumes of data and providing accurate feedback on activities.
Develop new algorithms & new approaches to provide accurate predictions and drive new products by providing business insights.
Implement your algorithms and models end to end.
Collaborate with a variety of teams to develop services from design to production.
Make sure the software is in good hands by writing, running and automating tests (unit, functional, load...).
Keep up to date with the latest Machine Learning technologies to make sure we always use best-in-class algorithms for the context.
Shape how billions of people discover and enjoy premium content by improving ad relevance, quality, and efficiency across Teads global publisher network.
What will you bring to the team?
Experience in Statistics (e.g., statistical analysis, regression analysis) and/or Artificial Intelligence (e.g., data mining, machine learning).
An appetite for Data Science applied to high-volume, low-latency problems.
Ability to read scientific articles, to analyze critically, and to implement as appropriate.
Good programming abilities.
True scientist skills: fast learner, curious, rigorous, with strong attention to detail.
Strong communication skills: working collaboratively with the team, able to teach concepts, and able to communicate complicated topics clearly to a wide audience.
Strong problem-solving skills, generalizing from specific problems to a broader range of products.
You are mindful of your application's architecture, performance, testing, maintainability, and overall quality.
Requirements:
5+ years of hands-on experience with coding ML based solutions in high scale systems
Relevant industry experience / publications in ad-tech and recommender systems.
MSc/PhD in Computer Science or Mathematics.
Industry experience building engineering systems at scale.
Large-scale distributed systems, service-oriented architecture.
Previous experience with all or some of the elements of our Stack (we mainly use Go, Python, Java, AWS, Jupyter notebooks).
Knowledge in data engineering.
Ability to transform raw data into actionable business insights.
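The last requirement, turning raw data into actionable business insights, can be sketched with a minimal example. The event schema and metric (per-campaign click-through rate) are hypothetical, chosen only because they fit the ad-tech context described above:

```python
# Illustrative only: aggregate raw auction events into a per-campaign
# click-through rate (clicks / impressions). Field names are assumptions.
from collections import defaultdict

def ctr_by_campaign(events: list[dict]) -> dict[str, float]:
    """events: dicts with 'campaign' (str) and 'clicked' (bool) keys."""
    shows = defaultdict(int)    # impressions per campaign
    clicks = defaultdict(int)   # clicks per campaign
    for e in events:
        shows[e["campaign"]] += 1
        clicks[e["campaign"]] += e["clicked"]  # bool counts as 0/1
    return {c: clicks[c] / shows[c] for c in shows}
```

At the volumes mentioned in the listing, the same aggregation would run as a distributed group-by rather than an in-memory loop, but the insight produced is the same.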
Please submit your CV in English.
This position is open to all candidates.
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Staff Algo Data Engineer
Realize your potential by joining the leading performance-driven advertising company!
As a Staff Algo Data Engineer in the Infra group, you'll play a vital role in developing, enhancing, and maintaining highly scalable Machine Learning infrastructure and tools.
About the Algo Platform:
The Algo Platform group owns the existing algo platform (including its health, stability, productivity, and enablement), facilitates and takes part in new platform experimentation within the algo craft, and leads the platformization of the parts that should graduate to production scale. This includes supporting ongoing ML projects while ensuring smooth operations and infrastructure reliability, and owning a full set of capabilities: design and planning, implementation, and production care.
The group has deep ties with both the algo craft and the infra group. It reports to the infra department, with dotted-line reporting to the algo craft leadership.
The group serves as the professional authority on ML engineering and MLOps, acts as a focal point in a multidisciplinary team of algorithm researchers, product managers, and engineers, and works with the most senior talent within the algo craft to achieve ML excellence.
How youll make an impact:
As a Staff Algo Data Engineer, you'll bring value by:
Developing, enhancing, and maintaining highly scalable Machine Learning infrastructure and tools, including CI/CD, monitoring, alerting, and more
Taking end-to-end ownership: designing, developing, deploying, measuring, and maintaining our machine learning platform, ensuring high availability, high scalability, and efficient resource utilization
Identifying and evaluating new technologies to improve the performance, maintainability, and reliability of our machine learning systems
Working in tandem with the engineering-focused and algorithm-focused teams to improve our platform and optimize performance
Optimizing machine learning systems to scale and utilize modern compute environments (e.g., distributed clusters, CPUs, and GPUs), continuously seeking optimization opportunities
Building and maintaining tools for automation, deployment, monitoring, and operations
Troubleshooting issues in our development, production, and test environments
Directly influencing the way billions of people discover the internet
Our tech stack:
Java, Python, TensorFlow, Spark, Kafka, Cassandra, HDFS, vespa.ai, ElasticSearch, AirFlow, BigQuery, Google Cloud Platform, Kubernetes, Docker, git and Jenkins.
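The monitoring-and-alerting responsibility above can be sketched in a few lines of plain Python. The metric names and thresholds are hypothetical; a real setup would lean on the monitoring stack listed, not hand-rolled checks:

```python
# A minimal sketch of metric-threshold alerting. Metric names and
# threshold values are illustrative assumptions, not from the listing.

THRESHOLDS = {"p99_latency_ms": 250.0, "error_rate": 0.01}

def breached(metrics: dict[str, float]) -> list[str]:
    """Return the names of metrics that exceed their alert threshold.

    Missing metrics are treated as 0.0, i.e. not alerting; a real system
    would likely alert on missing data (staleness) as well.
    """
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0.0) > limit]
```

In production, a check like this would typically be expressed as alert rules in the monitoring system rather than application code, so alerting survives application failures.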
Requirements:
Experience developing large-scale systems. Experience with filesystems, server architectures, distributed systems, SQL, and NoSQL. Experience with Spark and Airflow or other orchestration platforms is a big plus.
Highly skilled in software engineering methods; 5+ years of experience.
Passion for ML engineering and for creating and improving platforms
Experience with designing and supporting ML pipelines and models in production environment
Excellent coding skills in Java and Python
Experience with TensorFlow - a big plus
Possess strong problem solving and critical thinking skills
BSc in Computer Science or related field.
Proven ability to work effectively and independently across multiple teams and beyond organizational boundaries
Deep understanding of strong Computer Science fundamentals: object-oriented design, data structures, systems and application programming, and multithreaded programming
Strong communication skills to present insights and ideas, and excellent English to communicate with our global teams.
Bonus points if you have:
Experience in leading Algorithms projects or teams.
Experience in developing models using deep learning techniques and tools
Experience in developing software within a distributed computation framework
This position is open to all candidates.