Data Engineer

05/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an experienced and hands-on Backend Engineer to be a key player in building high-scale Data Platforms and Products for our business teams.
This role involves working with large datasets and scalable systems, and developing internal tools to enable data-driven decision-making across the company.
Key Responsibilities:
Develop internal tools for various teams.
Build and maintain microservices and APIs to support diverse workflows.
Operate in a real-time, event-driven environment.
Create and manage data pipelines.
Take ownership of multiple systems and products.
Develop and deploy machine learning pipelines to production in an event-driven architecture.
Work in a multi-cloud environment (Azure/GCP/AWS).
Integrate third-party tools with our platform.
Translate business requirements into technical specifications.
Our Tech Stack:
Python, BigQuery, Redis, RabbitMQ, MySQL, Tornado, SQLAlchemy, Airflow, Airbyte, NewRelic, Elastic, Kubernetes (K8S).
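The stack above centers on event-driven Python services. As a rough sketch of that pattern (not this team's actual code), here is a minimal producer/consumer example using an in-memory queue as a stand-in for RabbitMQ; the event names and the `enrich_user` handler are invented for illustration:

```python
import json
import queue

# In-memory stand-in for a RabbitMQ queue; all names here are
# illustrative, not taken from the posting.
events = queue.Queue()

def publish(event_type, payload):
    """Producer side: enqueue a JSON-serialized event."""
    events.put(json.dumps({"type": event_type, "payload": payload}))

def consume_all(handlers):
    """Consumer side: dispatch each event to its registered handler."""
    results = []
    while not events.empty():
        event = json.loads(events.get())
        handler = handlers.get(event["type"])
        if handler:
            results.append(handler(event["payload"]))
    return results

# A handler mimicking a small enrichment step in a data pipeline.
def enrich_user(payload):
    return {**payload, "source": "signup-service"}

publish("user.created", {"id": 1})
publish("user.created", {"id": 2})
processed = consume_all({"user.created": enrich_user})
```

In a real deployment the queue would be a durable broker and each handler its own microservice; the dispatch-by-event-type shape stays the same.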
Requirements:
Experience: Minimum 5 years as a Backend Engineer.
Proficiency in Python: At least 5 years of experience, or expertise in an equivalent programming language.
Microservices and APIs: Proven experience in writing and maintaining microservices and REST APIs.
SQL Expertise: Strong proficiency in SQL.
Event-Driven Development: Hands-on experience with event-based development.
Big Data Experience: Familiarity with big data and high-velocity/volume systems is a plus.
Cloud Environments: Experience with multi-cloud environments (Azure, GCP, AWS).
This position is open to all candidates.
 
Job ID: 8600293
05/04/2026
Location: Petah Tikva
Job Type: Full Time
Our Data team consists of highly skilled senior software and data professionals who collaborate to solve complex data challenges. We process billions of records daily from multiple sources, using diverse infrastructure and multi-stage pipelines with intricate data structures, advanced queries, and complex BI.

A bit about our infrastructure: our main databases are Snowflake, Iceberg on AWS, and Trino. Spark on EMR processes the huge influx of data, and Airflow does most of the ETL.

The data we deliver drives insights both for internal and external customers. Our internal customers use it routinely for decision-making across the organization, such as enhancing our product offerings.

What You'll Do
Build, maintain, and optimize data infrastructure.
Contribute to the evolution of our AWS-based infrastructure.
Work with database technologies - Snowflake, Iceberg, Trino, Athena, and Glue.
Utilize Airflow, Spark, Kubernetes, ArgoCD and AWS.
Provide AI tools to ease data access for our customers.
Integrate external tools, such as those for anomaly detection or data-source ingestion.
Use AI to accelerate your development.
Assure infrastructure quality by employing QA automation methods.
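The QA-automation point above can be made concrete with a small sketch: batch-level checks for nulls, duplicate keys, and row-count bounds. This is a generic illustration, and the field names (`id`, `ts`) and thresholds are assumptions, not details from the posting:

```python
# Minimal QA-automation checks for pipeline output: nulls, duplicate
# keys, and row-count bounds. Field names are invented for illustration.
def check_batch(rows, key="id", required=("id", "ts"), min_rows=1):
    issues = []
    if len(rows) < min_rows:
        issues.append("too few rows")
    seen = set()
    for row in rows:
        for field in required:
            if row.get(field) is None:
                issues.append(f"null {field}")
        if row.get(key) in seen:
            issues.append(f"duplicate {key}={row[key]}")
        seen.add(row.get(key))
    return issues

batch = [{"id": 1, "ts": 10}, {"id": 1, "ts": 11}, {"id": 2, "ts": None}]
problems = check_batch(batch)
```

Checks like these are typically wired into the orchestrator so a failing batch halts downstream tasks instead of propagating bad data.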
Requirements:
5+ years of experience as a Data Engineer or Backend Developer.
Experience with Big Data and cloud-based environments, preferably AWS.
Experience with Spark and Airflow.
Experience with Snowflake, Databricks, BigQuery, or Iceberg.
Strong development experience in Python.
Knowledge of Scala for Spark is a plus.
A team player who cares about the team, the service, and its customers.
Strong analytical skills.
This position is open to all candidates.
 
Job ID: 8600292
01/04/2026
Location: Haifa
Job Type: Full Time
We are seeking a skilled and motivated Data Scientist. This individual will play a pivotal role in identifying key data points for collection, developing strategies to accumulate data, and deriving actionable insights and anomalies from a solid foundation of relevant know-how. They will also be responsible for creating, testing, and deploying scripts and methods for data collection and analysis to support decision-making. They will collaborate with cross-functional teams to identify critical data sources and determine the most effective data collection strategies, develop automated and scalable data collection pipelines, ensure data quality, integrity, and consistency across all sources, and may use AI techniques to refine the results toward failure prediction.
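As a hedged sketch of the anomaly-screening side of this role (not the employer's actual method), a simple z-score filter flags samples far from the mean; the metric name and threshold below are illustrative assumptions:

```python
import statistics

# Flag values more than `z` standard deviations from the mean.
# The threshold and the "latency" framing are invented for illustration.
def zscore_outliers(values, z=2.0):
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs(v - mean) / stdev > z]

latencies = [10, 11, 9, 10, 12, 10, 50]  # one obvious anomaly
anomalies = zscore_outliers(latencies)
```

Real failure-prediction work would use richer features and models, but a statistical baseline like this is a common first pass for spotting outliers.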
Requirements:
Bachelor's degree in Computer Science, Data Science, Engineering, Mathematics, or a related field.
An advanced degree in Data Science or Machine Learning/AI is an advantage.
Proficiency in programming languages such as Python, R, or MATLAB.
Strong understanding of data manipulation and analysis tools (e.g., Pandas, NumPy, SQL).
Understanding of high-speed interfaces such as Ethernet, PCIe, and Wi-Fi.
Experience with data visualization tools such as Tableau, Matplotlib, and Grafana.
Strong analytical and critical-thinking skills to identify patterns and outliers.
Customer obsession: think and act with the customer in mind.
Goal-driven and self-motivated, able to work independently and with teams around the globe.
Entrepreneurial and open-minded, with a can-do attitude.
This position is open to all candidates.
 
Job ID: 8599334
Location: Ramat Gan
Job Type: Full Time
We are looking for a DataOps Engineer to own the infrastructure that powers our large-scale data processing platform. This is a platform-facing role sitting at the intersection of data engineering and infrastructure - you'll be the person who makes Spark run reliably and efficiently on Kubernetes, so that data engineers can build with confidence.
You understand data workloads deeply enough to make smart infrastructure decisions, and you have the production instincts to keep complex systems healthy at scale. If you get excited about shaving minutes off Spark job runtimes, right-sizing cluster autoscalers, and building the internal tooling that makes a data platform feel effortless, this role is for you.
RESPONSIBILITIES:
Design, deploy, and operate the Kubernetes-based infrastructure that runs Apache Spark and large-scale data processing workloads
Own the reliability, performance, and cost-efficiency of the data platform - including SLAs, autoscaling, resource quotas, and workload isolation
Manage Spark-on-K8s configurations, Airflow infrastructure, and Databricks integration; tune for throughput, latency, and cost
Build and maintain CI/CD pipelines and infrastructure-as-code for data platform components
Develop observability tooling - metrics, logging, alerting, and data quality dashboards - to proactively surface issues across the pipeline stack
Collaborate closely with Data Engineers to understand workload patterns and translate them into infrastructure decisions
Manage cloud storage (GCS/S3), Delta Lake, and Unity Catalog infrastructure
Drive platform improvements end-to-end: from design through deployment and ongoing ownership.
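In the spirit of the observability tooling described above, here is a toy sketch of metrics-with-alerting: track job durations and alert when the p95 exceeds a budget. The class name, percentile index math, and the 300-second budget are illustrative assumptions, not details from the posting:

```python
import math

# Toy duration tracker: record samples, compute a p95, and alert
# when it blows a budget. Names and thresholds are invented.
class DurationTracker:
    def __init__(self, budget_seconds):
        self.budget = budget_seconds
        self.samples = []

    def record(self, seconds):
        self.samples.append(seconds)

    def p95(self):
        ordered = sorted(self.samples)
        idx = min(len(ordered) - 1, math.ceil(0.95 * len(ordered)) - 1)
        return ordered[idx]

    def alert(self):
        return self.p95() > self.budget

tracker = DurationTracker(budget_seconds=300)
for s in [120, 150, 140, 900]:  # one slow Spark job
    tracker.record(s)
```

In production this role would be emitting these metrics to a system like Prometheus and alerting via its rules engine; the percentile-vs-budget comparison is the core idea either way.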
Requirements:
5+ years of experience in a production infrastructure, SRE, or DevOps role
Strong Kubernetes experience: autoscaling, resource management, and the broader K8s ecosystem
2+ years with infrastructure-as-code tools (Terraform, Pulumi, or similar)
Proficiency in at least one general-purpose language - Python or Go preferred
Experience with workflow orchestration tools, particularly Apache Airflow
Solid understanding of cloud infrastructure - GCP preferred (GCS, GKE, IAM)
Strong observability skills: metrics pipelines, structured logging, alerting frameworks
OTHER REQUIREMENTS:
Hands-on experience running data processing workloads (Apache Spark, Flink, or similar) in production
Familiarity with Delta Lake, Parquet, and columnar storage formats
Experience with data quality frameworks and pipeline lineage tooling
Knowledge of query optimization, partition strategies, and Spark performance tuning
Experience managing queues and databases (Kafka, PostgreSQL, Redis, or similar).
This position is open to all candidates.
 
Job ID: 8599274
01/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are where high-growth startups turn when they need to move faster, scale smarter, and make the most of the cloud. As an AWS Premier Partner and Strategic Partner, we deliver hands-on DevOps, FinOps, and GenAI support that drives real results.
We work across EMEA and the US, fueling innovation and solving complex challenges daily. Join us to grow your skills, shape bold ideas, and help build the future of tech.
We're looking for a Senior Data Architect to help shape how high-growth startups build and scale on AWS. In this role, you'll design and deliver end-to-end data and analytics solutions, from architecture and pipelines to visualization and insights, guiding customers from concept through production. You'll work closely with startup founders, technical leaders, and account executives to create scalable, cost-efficient architectures that drive real business impact.
Work location - hybrid from Tel Aviv
If you are interested in this opportunity, please submit your CV in English.
Key Responsibilities
Design, develop, and implement data & analytics solutions to meet business requirements and create cost-efficient, highly available, and scalable customer solutions, including Well-Architected reviews and SoW.
Research and analyze current solutions and initiate improvement plans.
Collaborate with other engineers and stakeholders to ensure solutions are designed and developed according to best practices.
Lead workshops, POCs, and architecture reviews with startup customers, and represent the company at conferences, webinars, and more.
Stay up to date on Data Engineering and Analytics trends and contribute to internal enablement.
Frequent travel, both locally (on demand, to meet with customers and partners and attend local events) and abroad (at least once a quarter).
Requirements:
3+ years of hands-on experience in AWS, including solution design, migration, and maintenance
2+ years in customer-facing technical roles (e.g., SRE, Cloud Architect, Customer Engineer)
Production experience with AWS infrastructure, data services, and real-time data processing
Proficiency in a wide range of AWS services (e.g., EC2, S3, RDS, Lambda, IAM, VPC, CloudFormation, DynamoDB)
Skilled in AWS analytics tools (Glue, Athena, Redshift, EMR, Kinesis, MSK, QuickSight, dbt)
Understanding of information security best practices
Strong verbal and written communication in English and local language
Ability to lead end-to-end technical engagements and work in fast-paced environments
AWS Solutions Architect - Associate certification
Experience with Iceberg- an advantage
Experience with Kubernetes, CI/CD, and DevOps tools - an advantage
Experience with ETL processes, data lakes, and pipelines - an advantage
Experience writing SOWs, HLDs, and effort estimates - an advantage
AWS Professional or Data Analytics/Data Engineer certifications - an advantage.
This position is open to all candidates.
 
Job ID: 8599151
31/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We're seeking a Mid to Senior Data Engineer to join our Cloud Identity & Perimeter team, a critical component of our security infrastructure. Our team develops and maintains complex data pipelines that process billions of records daily, analyzing identity-related security patterns, effective permissions, internet exposure, and attack paths. We're at the forefront of securing enterprise identities and delivering actionable security insights at scale.

What You'll Do:

Design and implement high-performance, distributed data processing pipelines handling petabytes of security data

Architect complex data transformations using Apache Spark for large-scale batch and stream processing

Be part of shaping new products while collaborating with product teams, customers, and sales.

Build and optimize real-time data streaming solutions using Kafka for identity analytics

Develop and maintain scalable ETL processes that handle billions of daily events

Create efficient data models for complex security analytics queries

Collaborate with cross-functional teams to deliver high-impact security features

Optimize query performance and data storage patterns for large-scale distributed systems

Participate in system design discussions and architectural decisions
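As a hedged sketch of the Kafka-style streaming work listed above (not the team's actual code), windowed aggregation of identity events might look like the following; the event shape, principal names, and 60-second window are all illustrative assumptions:

```python
from collections import defaultdict

# Count identity events per principal per fixed time window,
# a stand-in for what a Kafka consumer with windowing would do.
def count_by_window(events, window_seconds=60):
    counts = defaultdict(int)
    for event in events:
        window_start = event["ts"] - event["ts"] % window_seconds
        counts[(event["principal"], window_start)] += 1
    return dict(counts)

stream = [
    {"principal": "svc-a", "ts": 5},
    {"principal": "svc-a", "ts": 59},
    {"principal": "svc-a", "ts": 61},
    {"principal": "svc-b", "ts": 10},
]
windows = count_by_window(stream)
```

Spikes in per-principal counts within a window are the kind of signal that feeds anomaly detection over identity activity.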
Requirements:
5+ years of experience in data engineering or similar roles

Strong programming skills in Go and/or Java

Extensive experience with big data technologies (Apache Spark, Kafka)

Proven track record working with distributed databases (Cassandra, Elasticsearch)

Experience building and maintaining production-grade data pipelines

Strong understanding of data modeling and optimization techniques

Excellent problem-solving skills and attention to detail

BS/MS in Computer Science or related field, or equivalent experience
This position is open to all candidates.
 
Job ID: 8598652
Location: Tel Aviv-Yafo
Job Type: Full Time
This role has been designed as Hybrid with an expectation that you will work on average 2 days per week from our office.

We are looking for a highly skilled Senior Data Engineer with strong architectural expertise to design and evolve our next-generation data platform. You will define the technical vision, build scalable and reliable data systems, and guide the long-term architecture that powers analytics, operational decision-making, and data-driven products across the organization.

This role is both strategic and hands-on. You will evaluate modern data technologies, define engineering best practices, and lead the implementation of robust, high-performance data solutions, including the design, build, and lifecycle management of data pipelines that support batch, streaming, and near-real-time workloads.

What You'll Do

Architecture & Strategy

Own the architecture of our data platform, ensuring scalability, performance, reliability, and security.
Define standards and best practices for data modeling, transformation, orchestration, governance, and lifecycle management.
Evaluate and integrate modern data technologies and frameworks that align with our long-term platform strategy.
Collaborate with engineering and product leadership to shape the technical roadmap.

Engineering & Delivery

Design, build, and manage scalable, resilient data pipelines for batch, streaming, and event-driven workloads.
Develop clean, high-quality data models and schemas to support analytics, BI, operational systems, and ML workflows.
Implement data quality, lineage, observability, and automated testing frameworks.
Build ingestion patterns for APIs, event streams, files, and third-party data sources.
Optimize compute, storage, and transformation layers for performance and cost efficiency.
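The lineage item in the list above can be sketched with a tiny dependency graph: record which upstream datasets each model reads, so impact analysis ("what breaks if X changes?") becomes a graph walk. The dataset names are invented for illustration:

```python
# Toy lineage graph: model -> list of upstream datasets it reads.
# All names here are illustrative, not from the posting.
lineage = {
    "stg_orders": ["raw_orders"],
    "fct_revenue": ["stg_orders", "stg_payments"],
    "stg_payments": ["raw_payments"],
}

def downstream_of(dataset):
    """Return every model that transitively depends on `dataset`."""
    hits = set()
    changed = True
    while changed:  # iterate to a fixed point for transitive deps
        changed = False
        for model, upstreams in lineage.items():
            if model not in hits and (
                dataset in upstreams or any(u in hits for u in upstreams)
            ):
                hits.add(model)
                changed = True
    return hits
```

Production lineage tools derive this graph automatically from SQL or orchestration metadata, but the transitive-closure query is the same idea.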

Leadership & Collaboration

Serve as a senior technical leader and mentor within the data engineering team.
Lead architecture reviews, design discussions, and cross-team engineering initiatives.
Work closely with analysts, data scientists, software engineers, and product owners to define and deliver data solutions.
Communicate architectural decisions and trade-offs to technical and non-technical stakeholders.
Requirements:
What We're Looking For:
6-10+ years of experience in Data Engineering, with demonstrated architectural ownership.
Expert-level experience with Snowflake (mandatory), including performance optimization, data modeling, security, and ecosystem components.
Expert proficiency in SQL and strong Python skills for pipeline development and automation.
Experience with modern orchestration tools (Airflow, Dagster, Prefect, or equivalent).
Strong understanding of ELT/ETL patterns, distributed processing, and data lifecycle management.
Familiarity with streaming/event technologies (Kafka, Kinesis, Pub/Sub, etc.).
Experience implementing data quality, observability, and lineage solutions.
Solid understanding of cloud infrastructure (AWS, GCP, or Azure).
Strong background in DataOps practices: CI/CD, testing, version control, automation.
Proven leadership in driving architectural direction and mentoring engineering teams.

Nice to Have:
Experience with data governance or metadata management tools.
Hands-on experience with DBT, including modeling, testing, documentation, and advanced features.
Exposure to machine learning pipelines, feature stores, or MLOps.
Experience with Terraform, CloudFormation, or other IaC tools.
Background designing systems for high scale, security, or regulated environments.
This position is open to all candidates.
 
Job ID: 8598137
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
This role has been designed as Hybrid with an expectation that you will work on average 2 days per week from an office.

We are looking for a talented Data Engineer to help build and enhance the data platform that supports analytics, operations, and data-driven decision-making across the organization. You will work hands-on to develop scalable data pipelines, improve data models, ensure data quality, and contribute to the continuous evolution of our modern data ecosystem.

You'll collaborate closely with Senior Engineers, Analysts, Data Scientists, and stakeholders across the business to deliver reliable, well-structured, and well-governed data solutions.


What You'll Do:

Engineering & Delivery

Build, maintain, and optimize data pipelines for batch and streaming workloads.

Develop reliable data models and transformations to support analytics, reporting, and operational use cases.

Integrate new data sources, APIs, and event streams into the platform.

Implement data quality checks, testing, documentation, and monitoring.

Write clean, performant SQL and Python code.

Contribute to improving performance, scalability, and cost-efficiency across the data platform.
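The pipeline work listed above often comes down to idempotent incremental loads. As a generic sketch (the merge logic a warehouse would run as a `MERGE` statement, with invented column names), an upsert by primary key keeps the latest version of each row and is safe to re-run:

```python
# Idempotent upsert: merge a batch into the target by primary key,
# with later batch rows winning. Column names are illustrative.
def upsert(target, batch, key="id"):
    merged = {row[key]: row for row in target}
    for row in batch:
        merged[row[key]] = row
    return sorted(merged.values(), key=lambda r: r[key])

current = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}]
incoming = [{"id": 2, "amount": 25}, {"id": 3, "amount": 30}]
loaded = upsert(current, incoming)
```

Because re-applying the same batch yields the same result, a retried pipeline run cannot duplicate rows, which is the property that makes incremental models safe to schedule.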

Collaboration & Teamwork

Work closely with senior engineers to implement architectural patterns and best practices.

Collaborate with analysts and data scientists to translate requirements into technical solutions.

Participate in code reviews, design discussions, and continuous improvement initiatives.

Help maintain clear documentation of data flows, models, and processes.

Platform & Process

Support the adoption and roll-out of new data tools, standards, and workflows.

Contribute to DataOps processes such as CI/CD, testing, and automation.

Assist in monitoring pipeline health and resolving data-related issues.
Requirements:
What We're Looking For

2-5+ years of experience as a Data Engineer or similar role.

Hands-on experience with Snowflake (mandatory), including SQL, modeling, and basic optimization.

Experience with dbt (or similar): model development, tests, documentation, and version-control workflows.

Strong SQL skills for data modeling and analysis.

Proficiency with Python for pipeline development and automation.

Experience working with orchestration tools (Airflow, Dagster, Prefect, or equivalent).

Understanding of ETL/ELT design patterns, data lifecycle, and data modeling best practices.

Familiarity with cloud environments (AWS, GCP, or Azure).

Knowledge of data quality, observability, or monitoring concepts.

Good communication skills and the ability to collaborate with cross-functional teams.


Nice to Have:

Exposure to streaming/event technologies (Kafka, Kinesis, Pub/Sub).

Experience with data governance or cataloging tools.

Basic understanding of ML workflows or MLOps concepts.

Experience with infrastructure-as-code tools (Terraform, CloudFormation).

Familiarity with testing frameworks or data validation tools.

Additional Skills:

Cloud Architectures, Cross Domain Knowledge, Design Thinking, Development Fundamentals, DevOps, Distributed Computing, Microservices Fluency, Full Stack Development, Security-First Mindset, User Experience (UX).
This position is open to all candidates.
 
Job ID: 8598093
30/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Required ML Data Engineer
Israel: Tel Aviv/ Hybrid (Israel)
R&D | Full Time | Job Id: 24792
Your Impact & Responsibilities:
As a Data Engineer - AI Technologies, you will be responsible for building and operating the data foundation that enables our LLM and ML research: from ingestion and augmentation, through labeling and quality control, to efficient data delivery for training and evaluation.
You will:
Own data pipelines for LLM training and evaluation
Design, build and maintain scalable pipelines to ingest, transform and serve large-scale text, log, code and semi-structured data from multiple products and internal systems.
Drive data augmentation and synthetic data generation
Implement and operate pipelines for data augmentation (e.g., prompt-based generation, paraphrasing, negative sampling, multi-positive pairs) in close collaboration with ML Research Engineers.
Build tagging, labeling and annotation workflows
Support human-in-the-loop labeling, active learning loops and semi-automated tagging. Work with domain experts to implement tools, schemas and processes for consistent, high-quality annotations.
Ensure data quality, observability and governance
Define and monitor data quality checks (coverage, drift, anomalies, duplicates, PII), manage dataset versions, and maintain clear documentation and lineage for training and evaluation datasets.
Optimize training data flows for efficiency and cost
Design storage layouts and access patterns that reduce training time and cost (e.g., sharding, caching, streaming). Work with ML engineers to make sure the right data arrives at the right place, in the right format.
Build and maintain data infrastructure for LLM workloads
Work with cloud and platform teams to develop robust, production-grade infrastructure: data lakes / warehouses, feature stores, vector stores, and high-throughput data services used by training jobs and offline evaluation.
Collaborate closely with ML Research Engineers and security experts
Translate modeling and security requirements into concrete data tasks: dataset design, splits, sampling strategies, and evaluation data construction for specific security use.
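The sharding idea mentioned above (storage layouts that let training workers stream disjoint slices) can be sketched by routing each record to a stable shard via a hash of its key. The shard count and record keys are illustrative assumptions:

```python
import hashlib

# Route a record key to a stable shard so training workers can each
# stream a disjoint slice. Shard count and keys are invented.
def shard_for(key, num_shards=4):
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

records = ["doc-1", "doc-2", "doc-3", "doc-1"]
assignments = [shard_for(r) for r in records]
```

Hashing the key (rather than round-robin) keeps assignment deterministic across runs, which matters for reproducible dataset versions.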
Requirements:
3+ years of hands-on experience as a Data Engineer or ML/Data Engineer, ideally in a product or platform team.
Strong programming skills in Python and experience with at least one additional language commonly used for data / backend (e.g., SQL, Scala, or Java).
Solid experience building ETL / ELT pipelines and batch/stream processing using tools such as Spark, Beam, Flink, Kafka, Airflow, Argo, or similar.
Experience working with cloud data platforms (e.g., AWS, GCP, Azure) and modern data storage technologies (object stores, data warehouses, data lakes).
Good understanding of data modeling, schema design, partitioning strategies and performance optimization for large datasets.
Familiarity with ML / LLM workflows: train/validation/test splits, dataset versioning, and the basics of model training and evaluation (you don't need to be the primary model researcher, but you understand what the models need from the data).
Strong software engineering practices: version control, code review, testing, CI/CD, and documentation.

Ability to work independently and in collaboration with ML engineers, researchers and security experts, and to translate high-level requirements into concrete data engineering tasks. 
Nice to Have 
Experience supporting LLM or NLP workloads, including dataset construction for pre-training / fine-tuning, or retrieval-augmented generation (RAG) pipelines. 
Familiarity with ML tooling such as experiment tracking (e.g., Weights & Biases, MLflow) and ML-focused data tooling (feature stores, vector databases). 
Background in security / cyber domains (logs, alerts, incidents, SOC workflows) or other high-volume, high-variance data environments. 
This position is open to all candidates.
 
Job ID: 8597480
30/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Ready to lead the way in building our next-gen data platforms? Join us and shape the future of secure connectivity!
We are looking for a Data Engineering Team Leader with deep expertise in building and managing data pipelines and streaming architecture.
Job Id: 24787
This role is ideal for an experienced and proactive leader with strong technical skills in distributed systems and data platforms. You will drive the architecture, design, and development of scalable data ingestion and processing solutions. This is an exciting opportunity to join a growing product in an enterprise environment with significant impact and room for professional growth.
This job is located in Tel Aviv (hybrid).
About Us:
We're creating the industry's leading SASE platform, merging advanced security with seamless connectivity. Our mission is to empower businesses to thrive in a cloud-first world, and data is at the heart of this transformation.
Key Responsibilities:
Inspire and mentor a top-tier data engineering team to deliver mission-critical solutions
Architect and optimize data ingestion, enrichment, and storage for massive scale and reliability
Collaborate with cross-functional teams to ensure seamless integration and data availability
Define best practices and enforce engineering excellence across the data domain.
Requirements:
4+ years of hands-on experience in data engineering, with strong knowledge of streaming technologies (Kafka/MSK, Flink) and distributed systems on AWS
2+ years of leadership experience in data engineering or related fields.
Strong development skills in Java and deep understanding of data modeling, ETL, and real-time analytics
Experience developing and maintaining a multi-tenant SaaS solution on AWS
Experience with React - advantage
A natural leader with strong communication skills and a can-do, hands-on approach.
BSc in computer science/software engineering (or equivalent).
Fluent English (written & spoken).
This position is open to all candidates.
 
Job ID: 8597474
30/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Required Data Engineer
Israel: Tel Aviv/ Hybrid (Israel)
R&D | Full Time | Job Id: 25316
Why Join Us?
We are building next-generation GenAI security intelligence and SaaS Security Posture Management (SSPM) solutions that protect enterprises worldwide. If you enjoy turning complex security data into actionable insights and delivering end-to-end systems, this role is for you.
About the role:
You will own, build, and maintain our Pythonic data pipeline and enrichment system on top of PostgreSQL and BigQuery. This system powers security analytics, detections, and intelligence. A core part of your job will be to design and implement new components, improve reliability and performance, and ensure data quality and observability.
Key Responsibilities:
Own, build, and maintain production data pipelines and enrichment services using Python, PostgreSQL, and BigQuery.
Architect data systems end to end, including design, deployment, monitoring, and iterative improvement.
Analyze complex security datasets and SaaS telemetry to uncover risks, patterns, and opportunities.
Research emerging threat vectors and contribute to automated intelligence feeds and published reports.
Work across security domains such as SSPM, Shadow Integrations, DLP, and GenAI Protection.
Requirements:
4+ years in data-focused roles (engineering, analytics, science)
Strong SQL and Python skills
Experience with cloud platforms (GCP, AWS, Azure) and modern data warehouses (BigQuery, Databricks)
Proven ability to build data infrastructure from scratch
Ability to turn complex data into actionable insights
Fast learner with systematic problem-solving skills
Comfortable with technical research in unfamiliar domains
Independent and determined, with strong collaboration skills
BSc in Computer Science, Mathematics, Statistics, or related field
Excellent communication skills for technical and non-technical audiences.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
Our Senior Data Engineer will play an essential role in building the underlying infrastructure: collecting, storing, processing, and analyzing large sets of data while collaborating with researchers, architects, and engineers to design and build high-quality data processing for our flows.

In this role, you are responsible for end-to-end development of the data pipeline and data models, working with major data flows that include structured and unstructured data. You will also be responsible for operating parts of our production system. Your focus will be on developing and integrating systems that retrieve and analyze data that influences people's lives. This role, based in our Tel Aviv office, is hybrid, working at least two days per week in the office.
Requirements:
The ideal candidate will be:
A technology enthusiast who loves data and gets a shiver of excitement from tech innovations.
Driven to know how things work, with an even greater desire to improve them.
Intellectually curious, finding unusual ways to solve problems.
Comfortable taking on challenges and learning new technologies.
Comfortable working in a fast-paced, dynamic environment.

Qualifications:
6+ years of experience in designing and implementing server-side Data solutions.
Highly experienced with CI/CD pipelines and using Terraform in data platforms.
Highly experienced with Spark and Python.
Experience with AWS ecosystem.
Experience with DWH solutions (e.g. Snowflake, Redshift, Databricks).
Experience with Kubernetes in Production.
Experience implementing GenAI into data flows - Advantage.
Experience with Apache Airflow - Advantage.
This position is open to all candidates.
 
30/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We are looking for a talented Data Engineer to join our analytics team in the Big Data Platform group.
Job Id: 25380
You will support our product and business data initiatives, expand our data warehouse, and optimize our data pipeline architecture with an AI first attitude.
The ideal candidate is experienced in leveraging AI tools as part of modern data pipeline development, enabling scalable solutions, accelerating delivery, and continuously exploring new approaches and technologies.
The right candidate is excited by the prospect of building the data architecture for the next generation of products and data initiatives.
This is a unique opportunity to join a team full of outstanding people making a big impact.
We work on multiple products in many domains to deliver truly innovative solutions in the Cyber Security and Big Data realm.
This role requires the ability to collaborate closely with both R&D teams and business stakeholders, to understand their needs and translate them into robust and scalable data solutions.
Key Responsibilities
Maintain and develop enterprise-grade Data Warehouse and Data Lake environments
Create data infrastructure for various R&D groups across the organization to support product development and optimization
Work with data experts to assist with technical data-related issues and support infrastructure needs
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for scalability
Build and maintain robust ETL/ELT pipelines for data ingestion, transformation, and delivery across various systems
Incorporate AI-assisted tools into data pipeline design, development, and optimization to improve efficiency, scalability, and innovation
Requirements:
B.Sc. in Engineering or a related field
3+ years of experience as a Data Engineer working on production systems
Advanced SQL knowledge and experience with relational databases
Proven experience using Python
Hands-on experience building, optimizing, and automating data pipelines, architectures, and data sets
Experience in creating and maintaining ETL/ELT processes
Strong project management and organizational skills
Strong collaboration skills with both technical (R&D) and non-technical (business) teams
Experience using AI tools as part of the data engineering workflow, with a mindset of experimentation, working at scale, and exploring new technologies
Advantage: Azure data services, Databricks, EventHub, and Spark.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
We're hiring a Machine Learning Engineering Manager to guide and grow a high-impact ML team driving AI-powered innovation across Stampli's B2B SaaS platform. You'll lead the design and delivery of AI solutions while mentoring engineers and setting the technical direction for AI-first development at scale.

This is a leadership role with a balance of hands-on engineering and team management, perfect for someone who thrives on solving technical challenges, inspiring a team, and shaping the future of AI in fintech automation.

What You Will Do:
Lead & Mentor: Manage, mentor, and grow a team of ML engineers, fostering technical excellence and career development.
Set Technical Direction: Define the ML strategy, ensuring best practices in architecture, frameworks, and operationalization.
Build and deploy AI-based solutions: Oversee the development and deployment of GenAI/LLM-powered solutions that address real-world challenges across our products.
Scale & Operationalize: Establish scalable ML infrastructure, CI/CD, observability, and data pipelines for high-availability production systems.
Collaborate Cross-Functionally: Partner with product managers, engineers, and business stakeholders, clearly communicate progress, challenges, and outcomes.
Requirements:
7+ years of experience as a Backend Developer / Data Engineer / ML Engineer.
3+ years in a technical leadership role.
Python (Java as an advantage).
Bachelor's degree in Computer Science or a related STEM field (Master's preferred).
Proven track record of building and deploying AI-based solutions at scale.
Deep expertise with LLMs and ML frameworks (e.g., LangChain, LangGraph, Hugging Face, TensorFlow, PyTorch).
Strong background in system design, cloud-native architecture, and microservices.
Experience with NoSQL and real-time data processing pipelines.
Exceptional leadership, mentorship, and communication skills.
Strategic mindset with the ability to balance hands-on coding and team leadership.
This position is open to all candidates.
 