Data Engineer



Hot Board Jobs
2 days ago
Confidential company
Location: Tel Aviv-Yafo
We are looking for an expert Data Engineer with innovative thinking, initiative, and strong technological drive - someone who can take a business need and turn it into a smart and precise technological solution, from ideation to implementation.

This is an independent, end-to-end role that includes responsibility for designing, planning, and implementing data solutions in a cloud environment (primarily AWS), building data infrastructure and pipelines, and providing technical support to clients.
Requirements:
3-5 years of experience in designing, architecting, developing, and implementing end-to-end data solutions.
Experience building a data warehouse (DWH) from scratch, including data modeling, loading processes, and architecture.
At least 2 years of hands-on experience with AWS/Azure-based data technologies.
Experience building and maintaining advanced data pipelines from various data sources.
Significant experience designing, developing, and maintaining ETL processes.
Deep understanding of infrastructure, information security, and cloud architectures.
Experience working with clients/business teams, including gathering requirements and leading the technological solution.
Familiarity with common BI tools such as Power BI.
This position is open to all candidates.
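As a toy illustration of the posting's "data warehouse from scratch" requirement (data modeling plus loading), here is a hedged sketch of a minimal dimensional model: one dimension, one fact table, and a load step. All table names, columns, and sample rows are invented, and sqlite3 merely stands in for a real warehouse engine.

```python
import sqlite3

# Invented raw records standing in for a source system extract.
RAW_ORDERS = [
    {"order_id": 1, "customer": "Acme", "country": "IL", "amount": 120.0},
    {"order_id": 2, "customer": "Acme", "country": "IL", "amount": 80.0},
    {"order_id": 3, "customer": "Globex", "country": "US", "amount": 50.0},
]

def build_dwh(conn: sqlite3.Connection) -> None:
    """Create a tiny star schema and load the raw rows into it."""
    cur = conn.cursor()
    cur.execute("CREATE TABLE dim_customer ("
                "customer_key INTEGER PRIMARY KEY, name TEXT UNIQUE, country TEXT)")
    cur.execute("CREATE TABLE fact_orders ("
                "order_id INTEGER, customer_key INTEGER, amount REAL)")
    for row in RAW_ORDERS:
        # Upsert the dimension row, then look up its surrogate key.
        cur.execute("INSERT OR IGNORE INTO dim_customer (name, country) VALUES (?, ?)",
                    (row["customer"], row["country"]))
        customer_key = cur.execute(
            "SELECT customer_key FROM dim_customer WHERE name = ?",
            (row["customer"],)).fetchone()[0]
        cur.execute("INSERT INTO fact_orders VALUES (?, ?, ?)",
                    (row["order_id"], customer_key, row["amount"]))
    conn.commit()

def revenue_by_country(conn: sqlite3.Connection) -> dict:
    """A typical analytical query over the star schema."""
    return dict(conn.execute(
        "SELECT d.country, SUM(f.amount) FROM fact_orders f "
        "JOIN dim_customer d ON d.customer_key = f.customer_key GROUP BY d.country"))
```

The same surrogate-key-then-fact pattern scales up to the modeling work the role describes; only the engine and volumes change.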
 
Job ID: 8526604
 
Exclusive listing
2 days ago
Hiring at SQLink
Job Type: Full Time
A financial organization in the central region is hiring a Data Engineer.
The role includes: building and leading advanced data infrastructure for model development, training, and inference; developing ETL/ELT processes and working with Snowflake; creating data pipelines and dynamic dashboards; connecting to organizational data sources and analyzing data; day-to-day work with data scientists, analysts, and MLOps teams; leading end-to-end data processes in an advanced technological environment; and more.
Requirements:
- 3 years of experience as a Data Engineer
- Experience with SQL, including writing performance-optimized queries
- Experience developing ETL/ELT processes and building data pipelines
- Experience working with Snowflake
- Experience with dbt or a Feature Store - an advantage
This position is open to women and men alike.
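As a minimal illustration of the ETL/ELT and data-pipeline work this role describes, here is a hedged sketch of one transform step: flattening semi-structured events into tabular rows before loading into a warehouse such as Snowflake. All field names are invented.

```python
def flatten_events(events):
    """Turn nested raw events into flat rows; drop records missing a user id."""
    rows = []
    for ev in events:
        user = ev.get("user") or {}
        if "id" not in user:
            continue  # basic validity filter before load
        rows.append({
            "event_id": ev["event_id"],
            "user_id": user["id"],
            "event_type": ev.get("type", "unknown"),
            # normalize the amount to a float, defaulting to 0
            "amount": float(ev.get("payload", {}).get("amount", 0)),
        })
    return rows
```

In a real pipeline this step would run inside an orchestrated job and the rows would be bulk-loaded into warehouse tables rather than returned in memory.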
 
Job ID: 8582116
Hiring at Ethosia, a placement agency for the high-tech and biotech sectors
Location: Tel Aviv-Yafo
The Project Manager of the Global Strategy & Planning (S&P) Team plays a pivotal role in supporting the Head of Global Internal Audit S&P in managing audit systems and enhancing business intelligence (BI) capabilities. This role focuses on optimizing audit tools, data governance, and leveraging BI and AI solutions to enable data-driven insights, improve operational performance, and ensure system functionality.

Required Education
3rd year student (end of 2nd year / beginning of 3rd year) in:
o Industrial Management Engineering
o Information Systems / IT Engineering
From a recognized Israeli or international academic institution
Requirements:
Required Skills & Experience

Strong organizational, planning, and time management skills
Self-driven and proactive, with the ability to manage multiple tasks and projects
Excellent analytical thinking and attention to detail
Strong computer skills, including advanced use of MS Office (Excel, PowerPoint, Word)
Ability to gather, analyze, and synthesize data accurately and efficiently
Exposure to AI tools, including:
o Writing effective prompts for AI / copilots
o Working with AI-based agents or automation workflows
Fluent written and spoken English, including ability to draft professional documents
Advantages (Nice to Have)
Experience with data analytics, automation, or reporting tools
Familiarity with advanced information systems (ERP, GRC, BI tools, etc.)
Experience working in a large, global organization, preferably in pharma, life sciences, or manufacturing
This position is open to all candidates.
 
Job ID: 8597211
Hiring at Nishapro
Job Type: Full Time
Wanted: a curious, creative, sharp-thinking data technologist who is connected to the worlds of data and business, and who can take an idea and turn it into a workable technological solution - whether with Informatica, Python, or any other tool. The role involves working on a leading data team in a dynamic, advanced environment, with an emphasis on innovation, out-of-the-box thinking, and solving complex problems.
Requirements:
5+ years of experience developing data solutions - required.
Proficiency in Python - required.
Experience working with databases (SQL) - required.
Business understanding and the ability to translate business needs into technological solutions.

Personality traits:
A love of learning, technological curiosity, and the ability to work independently.
Creativity, ownership, and the ability to solve complex problems.
Mental flexibility.
This position is open to women and men alike.
 
Job ID: 8519703
Hiring at TechTalent
Job Type: Full Time
A Fintech product company is looking for a Senior Data Engineer.
The role includes development in the data domain, using Python and Airflow alongside advanced data tools, and working on a spearhead team within the company.
Requirements:
A bachelor's degree in a technological field
At least 6 years of experience in the data domain - required
Background in backend development - an advantage, not required
Experience with Python - required
Experience with Airflow
Experience with additional data tools
Very good English
This position is open to women and men alike.
 
Job ID: 8580111
3 days ago
Location: Tel Aviv-Yafo
Job Type: Full Time
We are where high-growth startups turn when they need to move faster, scale smarter, and make the most of the cloud. As an AWS Premier Partner and Strategic Partner, we deliver hands-on DevOps, FinOps, and GenAI support that drives real results.
We work across EMEA and the US, fueling innovation and solving complex challenges daily. Join us to grow your skills, shape bold ideas, and help build the future of tech.
We're looking for a Senior Data Architect to help shape how high-growth startups build and scale on AWS. In this role, you'll design and deliver end-to-end data and analytics solutions - from architecture and pipelines to visualization and insights - guiding customers from concept through production. You'll work closely with startup founders, technical leaders, and account executives to create scalable, cost-efficient architectures that drive real business impact.
Work location - hybrid from Tel Aviv
If you are interested in this opportunity, please submit your CV in English.
Key Responsibilities
Design, develop, and implement data & analytics solutions to meet business requirements and create cost-efficient, highly available, and scalable customer solutions, including Well-Architected reviews and SoW.
Research and analyze current solutions and initiate improvement plans.
Collaborate with other engineers and stakeholders to ensure solutions are designed and developed according to best practices.
Lead workshops, POCs, and architecture reviews with startup customers, and represent the company at conferences, webinars, and more.
Stay up to date on Data Engineering and Analytics trends and contribute to internal enablement.
Frequent travel - locally (on demand, to meet customers and partners and attend local events) and abroad (at least once a quarter).
Requirements:
3+ years of hands-on experience in AWS, including solution design, migration, and maintenance
2+ years in customer-facing technical roles (e.g., SRE, Cloud Architect, Customer Engineer)
Production experience with AWS infrastructure, data services, and real-time data processing
Proficiency in a wide range of AWS services (e.g., EC2, S3, RDS, Lambda, IAM, VPC, CloudFormation, DynamoDB)
Skilled in AWS analytics tools (Glue, Athena, Redshift, EMR, Kinesis, MSK, QuickSight, dbt)
Understanding of information security best practices
Strong verbal and written communication in English and local language
Ability to lead end-to-end technical engagements and work in fast-paced environments
AWS Solutions Architect - Associate certification
Experience with Iceberg - an advantage
Experience with Kubernetes, CI/CD, and DevOps tools - an advantage
Experience with ETL processes, data lakes, and pipelines - an advantage
Experience writing SOWs, HLDs, and effort estimates - an advantage
AWS Professional or Data Analytics/Data Engineer certifications - an advantage.
This position is open to all candidates.
 
Job ID: 8599151
4 days ago
Location: Tel Aviv-Yafo
Job Type: Full Time
We're seeking a Mid to Senior Data Engineer to join our Cloud Identity & Perimeter team, a critical component of our security infrastructure. Our team develops and maintains complex data pipelines that process billions of records daily, analyzing identity-related security patterns, effective permissions, internet exposure, and attack paths. We're at the forefront of securing enterprise identities and delivering actionable security insights at scale.

What You'll Do:

Design and implement high-performance, distributed data processing pipelines handling petabytes of security data

Architect complex data transformations using Apache Spark for large-scale batch and stream processing

Be part of shaping new products while collaborating with product teams, customers, and sales.

Build and optimize real-time data streaming solutions using Kafka for identity analytics

Develop and maintain scalable ETL processes that handle billions of daily events

Create efficient data models for complex security analytics queries

Collaborate with cross-functional teams to deliver high-impact security features

Optimize query performance and data storage patterns for large-scale distributed systems

Participate in system design discussions and architectural decisions
Requirements:
5+ years of experience in data engineering or similar roles

Strong programming skills in Go and/or Java

Extensive experience with big data technologies (Apache Spark, Kafka)

Proven track record working with distributed databases (Cassandra, Elasticsearch)

Experience building and maintaining production-grade data pipelines

Strong understanding of data modeling and optimization techniques

Excellent problem-solving skills and attention to detail

BS/MS in Computer Science or related field, or equivalent experience
This position is open to all candidates.
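The Spark/Kafka streaming work described above reduces, at its smallest, to windowed aggregation over an event stream. This toy sketch (pure Python, invented keys, no Spark or Kafka involved) shows the core tumbling-window idea that those engines apply at petabyte scale:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_sec):
    """Count events per (window_start, key).

    events: iterable of (epoch_seconds, key) pairs.
    Returns {(window_start, key): count} for fixed, non-overlapping windows.
    """
    counts = defaultdict(int)
    for ts, key in events:
        # Snap the timestamp down to the start of its window.
        window_start = ts - (ts % window_sec)
        counts[(window_start, key)] += 1
    return dict(counts)
```

A real streaming job adds watermarks, late-event handling, and state checkpointing on top of this same grouping logic.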
 
Job ID: 8598652
Location: Tel Aviv-Yafo
Job Type: Full Time
This role has been designed as Hybrid with an expectation that you will work on average 2 days per week from our office.

We are looking for a highly skilled Senior Data Engineer with strong architectural expertise to design and evolve our next-generation data platform. You will define the technical vision, build scalable and reliable data systems, and guide the long-term architecture that powers analytics, operational decision-making, and data-driven products across the organization.

This role is both strategic and hands-on. You will evaluate modern data technologies, define engineering best practices, and lead the implementation of robust, high-performance data solutions - including the design, build, and lifecycle management of data pipelines that support batch, streaming, and near-real-time workloads.

What You'll Do

Architecture & Strategy

Own the architecture of our data platform, ensuring scalability, performance, reliability, and security.
Define standards and best practices for data modeling, transformation, orchestration, governance, and lifecycle management.
Evaluate and integrate modern data technologies and frameworks that align with our long-term platform strategy.
Collaborate with engineering and product leadership to shape the technical roadmap.

Engineering & Delivery

Design, build, and manage scalable, resilient data pipelines for batch, streaming, and event-driven workloads.
Develop clean, high-quality data models and schemas to support analytics, BI, operational systems, and ML workflows.
Implement data quality, lineage, observability, and automated testing frameworks.
Build ingestion patterns for APIs, event streams, files, and third-party data sources.
Optimize compute, storage, and transformation layers for performance and cost efficiency.

Leadership & Collaboration

Serve as a senior technical leader and mentor within the data engineering team.
Lead architecture reviews, design discussions, and cross-team engineering initiatives.
Work closely with analysts, data scientists, software engineers, and product owners to define and deliver data solutions.
Communicate architectural decisions and trade-offs to technical and non-technical stakeholders.
Requirements:
What We're Looking For:
6-10+ years of experience in Data Engineering, with demonstrated architectural ownership.
Expert-level experience with Snowflake (mandatory), including performance optimization, data modeling, security, and ecosystem components.
Expert proficiency in SQL and strong Python skills for pipeline development and automation.
Experience with modern orchestration tools (Airflow, Dagster, Prefect, or equivalent).
Strong understanding of ELT/ETL patterns, distributed processing, and data lifecycle management.
Familiarity with streaming/event technologies (Kafka, Kinesis, Pub/Sub, etc.).
Experience implementing data quality, observability, and lineage solutions.
Solid understanding of cloud infrastructure (AWS, GCP, or Azure).
Strong background in DataOps practices: CI/CD, testing, version control, automation.
Proven leadership in driving architectural direction and mentoring engineering teams.

Nice to Have:
Experience with data governance or metadata management tools.
Hands-on experience with DBT, including modeling, testing, documentation, and advanced features.
Exposure to machine learning pipelines, feature stores, or MLOps.
Experience with Terraform, CloudFormation, or other IaC tools.
Background designing systems for high scale, security, or regulated environments.
This position is open to all candidates.
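One of the "ELT/ETL patterns" a role like this leans on is idempotent incremental loading via a high-watermark. A minimal sketch, with invented field names and plain Python lists standing in for warehouse tables:

```python
def incremental_load(source_rows, target, state):
    """Load only rows newer than the stored watermark; re-runs are no-ops.

    source_rows: list of dicts, each with an 'updated_at' timestamp.
    target: list acting as the destination table.
    state: mutable dict persisting the high-watermark between runs.
    Returns the number of rows loaded.
    """
    wm = state.get("watermark", 0)
    new_rows = [r for r in source_rows if r["updated_at"] > wm]
    target.extend(new_rows)
    if new_rows:
        # Advance the watermark so already-loaded rows are skipped next run.
        state["watermark"] = max(r["updated_at"] for r in new_rows)
    return len(new_rows)
```

The design choice worth noting: because the watermark only advances after a successful load, re-running a failed or duplicate job never double-loads data - the property that makes pipelines safe to retry.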
 
Job ID: 8598137
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
This role has been designed as Hybrid with an expectation that you will work on average 2 days per week from an office.

We are looking for a talented Data Engineer to help build and enhance the data platform that supports analytics, operations, and data-driven decision-making across the organization. You will work hands-on to develop scalable data pipelines, improve data models, ensure data quality, and contribute to the continuous evolution of our modern data ecosystem.

You'll collaborate closely with senior engineers, analysts, data scientists, and stakeholders across the business to deliver reliable, well-structured, and well-governed data solutions.


What You'll Do:

Engineering & Delivery

Build, maintain, and optimize data pipelines for batch and streaming workloads.

Develop reliable data models and transformations to support analytics, reporting, and operational use cases.

Integrate new data sources, APIs, and event streams into the platform.

Implement data quality checks, testing, documentation, and monitoring.

Write clean, performant SQL and Python code.

Contribute to improving performance, scalability, and cost-efficiency across the data platform.

Collaboration & Teamwork

Work closely with senior engineers to implement architectural patterns and best practices.

Collaborate with analysts and data scientists to translate requirements into technical solutions.

Participate in code reviews, design discussions, and continuous improvement initiatives.

Help maintain clear documentation of data flows, models, and processes.

Platform & Process

Support the adoption and roll-out of new data tools, standards, and workflows.

Contribute to DataOps processes such as CI/CD, testing, and automation.

Assist in monitoring pipeline health and resolving data-related issues.
Requirements:
What We're Looking For

2-5+ years of experience as a Data Engineer or similar role.

Hands-on experience with Snowflake (mandatory) - including SQL, modeling, and basic optimization.

Experience with dbt (or similar)-model development, tests, documentation, and version control workflows.

Strong SQL skills for data modeling and analysis.

Proficiency with Python for pipeline development and automation.

Experience working with orchestration tools (Airflow, Dagster, Prefect, or equivalent).

Understanding of ETL/ELT design patterns, data lifecycle, and data modeling best practices.

Familiarity with cloud environments (AWS, GCP, or Azure).

Knowledge of data quality, observability, or monitoring concepts.

Good communication skills and the ability to collaborate with cross-functional teams.


Nice to Have:

Exposure to streaming/event technologies (Kafka, Kinesis, Pub/Sub).

Experience with data governance or cataloging tools.

Basic understanding of ML workflows or MLOps concepts.

Experience with infrastructure-as-code tools (Terraform, CloudFormation).

Familiarity with testing frameworks or data validation tools.

Additional Skills:

Cloud Architectures, Cross Domain Knowledge, Design Thinking, Development Fundamentals, DevOps, Distributed Computing, Microservices Fluency, Full Stack Development, Security-First Mindset, User Experience (UX).
This position is open to all candidates.
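The "data quality checks, testing, documentation, and monitoring" bullet can be made concrete with a tiny example. This sketch checks null rate, key uniqueness, and freshness; the thresholds and column names are assumptions for illustration, not from the posting:

```python
def quality_report(rows, key, max_null_rate=0.1, freshest_after=0):
    """Run three basic quality checks over a batch of row dicts.

    key: the column expected to be a unique, non-null identifier.
    freshest_after: at least one row must have loaded_at >= this value.
    """
    n = len(rows)
    nulls = sum(1 for r in rows if r.get(key) is None)
    keys = [r[key] for r in rows if r.get(key) is not None]
    report = {
        "null_rate_ok": (nulls / n if n else 0) <= max_null_rate,
        "unique_key_ok": len(keys) == len(set(keys)),
        "fresh_ok": any(r.get("loaded_at", 0) >= freshest_after for r in rows),
    }
    report["passed"] = all(report.values())
    return report
```

In practice such checks run as post-load assertions (dbt tests or a monitoring job) and alert rather than silently returning a dict.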
 
Job ID: 8598093
5 days ago
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Required ML Data Engineer
Israel: Tel Aviv/ Hybrid (Israel)
R&D | Full Time | Job Id: 24792
Key Responsibilities
Your Impact & Responsibilities:
As a Data Engineer - AI Technologies, you will be responsible for building and operating the data foundation that enables our LLM and ML research: from ingestion and augmentation, through labeling and quality control, to efficient data delivery for training and evaluation.
You will:
Own data pipelines for LLM training and evaluation
Design, build and maintain scalable pipelines to ingest, transform and serve large-scale text, log, code and semi-structured data from multiple products and internal systems.
Drive data augmentation and synthetic data generation
Implement and operate pipelines for data augmentation (e.g., prompt-based generation, paraphrasing, negative sampling, multi-positive pairs) in close collaboration with ML Research Engineers.
Build tagging, labeling and annotation workflows
Support human-in-the-loop labeling, active learning loops and semi-automated tagging. Work with domain experts to implement tools, schemas and processes for consistent, high-quality annotations.
Ensure data quality, observability and governance
Define and monitor data quality checks (coverage, drift, anomalies, duplicates, PII), manage dataset versions, and maintain clear documentation and lineage for training and evaluation datasets.
Optimize training data flows for efficiency and cost
Design storage layouts and access patterns that reduce training time and cost (e.g., sharding, caching, streaming). Work with ML engineers to make sure the right data arrives at the right place, in the right format.
Build and maintain data infrastructure for LLM workloads
Work with cloud and platform teams to develop robust, production-grade infrastructure: data lakes / warehouses, feature stores, vector stores, and high-throughput data services used by training jobs and offline evaluation.
Collaborate closely with ML Research Engineers and security experts
Translate modeling and security requirements into concrete data tasks: dataset design, splits, sampling strategies, and evaluation data construction for specific security use.
Requirements:
3+ years of hands-on experience as a Data Engineer or ML/Data Engineer, ideally in a product or platform team.
Strong programming skills in Python and experience with at least one additional language commonly used for data / backend (e.g., SQL, Scala, or Java).
Solid experience building ETL / ELT pipelines and batch/stream processing using tools such as Spark, Beam, Flink, Kafka, Airflow, Argo, or similar.
Experience working with cloud data platforms (e.g., AWS, GCP, Azure) and modern data storage technologies (object stores, data warehouses, data lakes).
Good understanding of data modeling, schema design, partitioning strategies and performance optimization for large datasets.
Familiarity with ML / LLM workflows: train/validation/test splits, dataset versioning, and the basics of model training and evaluation (you don't need to be the primary model researcher, but you understand what the models need from the data).
Strong software engineering practices: version control, code review, testing, CI/CD, and documentation.

Ability to work independently and in collaboration with ML engineers, researchers and security experts, and to translate high-level requirements into concrete data engineering tasks. 
Nice to Have 
Experience supporting LLM or NLP workloads, including dataset construction for pre-training / fine-tuning, or retrieval-augmented generation (RAG) pipelines. 
Familiarity with ML tooling such as experiment tracking (e.g., Weights & Biases, MLflow) and ML-focused data tooling (feature stores, vector databases). 
Background in security / cyber domains (logs, alerts, incidents, SOC workflows) or other high-volume, high-variance data environments. 
This position is open to all candidates.
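Dataset construction for LLM training, as described above, typically starts with deduplication and reproducible splits. A hedged sketch using stable hashing, so the same text always lands in the same split across pipeline runs (all details invented):

```python
import hashlib

def dedup_and_split(texts, val_fraction=0.1):
    """Exact-dedup a list of texts, then split deterministically.

    The split is derived from each text's SHA-256 digest, so it is stable
    across runs and machines - a prerequisite for versioned datasets.
    Returns (train, val).
    """
    seen, train, val = set(), [], []
    for t in texts:
        digest = hashlib.sha256(t.encode("utf-8")).hexdigest()
        if digest in seen:
            continue  # drop exact duplicates
        seen.add(digest)
        # Map the first 8 hex chars to [0, 1]; below the fraction -> validation.
        bucket = int(digest[:8], 16) / 0xFFFFFFFF
        (val if bucket < val_fraction else train).append(t)
    return train, val
```

Real pipelines add near-duplicate detection (e.g., MinHash) and PII filtering on top, but the hash-bucket split is the standard trick for leakage-free, reproducible partitions.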
 
Job ID: 8597480
5 days ago
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Ready to lead the way in building our next-gen data platforms? Join us and shape the future of secure connectivity!
We are looking for a Data Engineering Team Leader with deep expertise in building and managing data pipelines and streaming architecture.
Job Id: 24787
This role is ideal for an experienced and proactive leader with strong technical skills in distributed systems and data platforms. You will drive the architecture, design, and development of scalable data ingestion and processing solutions. This is an exciting opportunity to join a growing product in an enterprise environment with significant impact and room for professional growth.
This job is located in Tel Aviv (hybrid).
About Us:
We're creating the industry's leading SASE platform, merging advanced security with seamless connectivity. Our mission is to empower businesses to thrive in a cloud-first world, and data is at the heart of this transformation.
Key Responsibilities:
Inspire and mentor a top-tier data engineering team to deliver mission-critical solutions
Architect and optimize data ingestion, enrichment, and storage for massive scale and reliability
Collaborate with cross-functional teams to ensure seamless integration and data availability
Define best practices and enforce engineering excellence across the data domain.
Requirements:
4+ years of hands-on experience in data engineering, with strong knowledge of streaming technologies (Kafka/MSK, Flink) and distributed systems on AWS
2+ years of leadership experience in data engineering or related fields.
Strong development skills in Java and deep understanding of data modeling, ETL, and real-time analytics
Experience developing and maintaining a multi-tenant SaaS solution on top of AWS
Experience with React - advantage
A natural leader with strong communication skills and a can-do, hands-on approach.
BSc in computer science/software engineering (or equivalent).
Fluent English (written & spoken).
This position is open to all candidates.
 
Job ID: 8597474
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We're seeking an outstanding and passionate Data Platform Engineer to join our growing R&D team.

You will work in an energetic startup environment following Agile concepts and methodologies. Joining the company at this unique and exciting stage in our growth journey creates an exceptional opportunity to take part in shaping Finaloop's data infrastructure at the forefront of Fintech and AI.

What you'll do:

Design, build, and maintain scalable data pipelines and ETL processes for our financial data platform.

Develop and optimize data infrastructure to support real-time analytics and reporting.

Implement data governance, security, and privacy controls to ensure data quality and compliance.

Create and maintain documentation for data platforms and processes.

Collaborate with data scientists and analysts to deliver actionable insights to our customers.

Troubleshoot and resolve data infrastructure issues efficiently.

Monitor system performance and implement optimizations.

Stay current with emerging technologies and implement innovative solutions.
Requirements:
What you'll bring:

3+ years experience in data engineering or platform engineering roles.

Strong programming skills in Python and SQL.

Experience with orchestration platforms like Airflow/Dagster/Temporal.

Experience with MPPs like Snowflake/Redshift/Databricks.

Hands-on experience with cloud platforms (AWS) and their data services.

Understanding of data modeling, data warehousing, and data lake concepts.

Ability to optimize data infrastructure for performance and reliability.

Experience working with containerization (Docker) in Kubernetes environments.

Familiarity with CI/CD concepts.

Fluent in English, both written and verbal.

And it would be great if you have (optional):

Experience with big data processing frameworks (Apache Spark, Hadoop).

Experience with stream processing technologies (Flink, Kafka, Kinesis).

Knowledge of infrastructure as code (Terraform).

Experience building analytics platforms.

Experience building clickstream pipelines.

Familiarity with machine learning workflows and MLOps.

Experience working in a startup environment or fintech industry.
This position is open to all candidates.
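At its core, an orchestrator like the Airflow/Dagster/Temporal tools named above does one thing: run tasks in dependency order. This toy scheduler is a sketch of that idea only (not any real tool's API), with an invented extract/transform/load DAG:

```python
def run_dag(tasks, deps):
    """Execute tasks respecting dependencies.

    tasks: {name: callable}; deps: {name: [upstream names]}.
    Returns the order in which tasks ran; raises on a cyclic graph.
    """
    done, order = set(), []
    remaining = set(tasks)
    while remaining:
        # A task is ready once all of its upstream tasks have finished.
        ready = [t for t in remaining if all(u in done for u in deps.get(t, []))]
        if not ready:
            raise ValueError("cycle detected in DAG")
        for t in sorted(ready):  # sorted only for a deterministic order
            tasks[t]()
            done.add(t)
            order.append(t)
            remaining.discard(t)
    return order
```

Production orchestrators layer scheduling, retries, backfills, and observability over this same topological-execution loop.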
 
Job ID: 8573265
5 days ago
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Required Data Engineer
Israel: Tel Aviv/ Hybrid (Israel)
R&D | Full Time | Job Id: 25316
Why Join Us?
We are building next-generation GenAI security intelligence and SaaS Security Posture Management (SSPM) solutions that protect enterprises worldwide. If you enjoy turning complex security data into actionable insights and delivering end-to-end systems, this role is for you.
About the role:
You will own, build, and maintain our Pythonic data pipeline and enrichment system on top of PostgreSQL and BigQuery. This system powers security analytics, detections, and intelligence. A core part of your job will be to design and implement new components, improve reliability and performance, and ensure data quality and observability.
Key Responsibilities:
Own, build, and maintain production data pipelines and enrichment services using Python, PostgreSQL, and BigQuery.
Architect data systems end to end, including design, deployment, monitoring, and iterative improvement.
Analyze complex security datasets and SaaS telemetry to uncover risks, patterns, and opportunities.
Research emerging threat vectors and contribute to automated intelligence feeds and published reports.
Work across security domains such as SSPM, Shadow Integrations, DLP, and GenAI Protection.
Requirements:
4+ years in data-focused roles (engineering, analytics, science)
Strong SQL and Python skills
Experience with cloud platforms (GCP, AWS, Azure) and modern data warehouses (BigQuery, Databricks)
Proven ability to build data infrastructure from scratch
Ability to turn complex data into actionable insights
Fast learner with systematic problem-solving skills
Comfortable with technical research in unfamiliar domains
Independent and determined, with strong collaboration skills
BSc in Computer Science, Mathematics, Statistics, or related field
Excellent communication skills for technical and non-technical audiences.
This position is open to all candidates.
 
Job Id: 8597109
Location: Tel Aviv-Yafo
Job Type: Full Time
Our Senior Data Engineer will play an essential role in building the underlying infrastructure for collecting, storing, processing, and analyzing large data sets, collaborating with researchers, architects, and engineers to design and build high-quality data processing for our flows.

In this role, you are responsible for end-to-end development of the data pipeline and data models, working with major data flows that include structured and unstructured data. You will also hold responsibility for operating parts of our production system. Your focus will be on developing and integrating systems that retrieve and analyze data that influences people's lives. This role for our Tel Aviv office is a hybrid role, working at least two days per week in the office.
Requirements:
The ideal candidate will be:
A technology enthusiast - who loves data and gets a shiver of excitement from tech innovations.
Desire to know how things work and a greater desire to improve them.
Intellectual curiosity to find unusual ways to solve problems.
Comfortable taking on challenges and learning new technologies.
Comfortable working in a fast-paced dynamic environment.

Qualifications:
6+ years of experience in designing and implementing server-side Data solutions.
Highly experienced with CI/CD pipelines and using Terraform in data platforms.
Highly experienced with Spark and Python.
Experience with AWS ecosystem.
Experience with DWH solutions (e.g. Snowflake, Redshift, Databricks).
Experience with Kubernetes in Production.
Experience implementing GenAI into data flows - Advantage.
Experience with Apache Airflow - Advantage.
This position is open to all candidates.
 
Job Id: 8597055
5 days ago
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We are looking for a talented Data Engineer to join our analytics team in the Big Data Platform group.
Job Id: 25380
You will support our product and business data initiatives, expand our data warehouse, and optimize our data pipeline architecture with an AI-first attitude.
The ideal candidate is experienced in leveraging AI tools as part of modern data pipeline development, enabling scalable solutions, accelerating delivery, and continuously exploring new approaches and technologies.
The right candidate is excited by the prospect of building the data architecture for the next generation of products and data initiatives.
This is a unique opportunity to join a team full of outstanding people making a big impact on us.
We work on multiple products in many domains to deliver truly innovative solutions in the Cyber Security and Big Data realm.
This role requires the ability to collaborate closely with both R&D teams and business stakeholders, to understand their needs and translate them into robust and scalable data solutions.
Key Responsibilities
Maintain and develop enterprise-grade Data Warehouse and Data Lake environments
Create data infrastructure for various R&D groups across the organization to support product development and optimization
Work with data experts to assist with technical data-related issues and support infrastructure needs
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for scalability
Build and maintain robust ETL/ELT pipelines for data ingestion, transformation, and delivery across various systems
Incorporate AI-assisted tools into data pipeline design, development, and optimization to improve efficiency, scalability, and innovation
Requirements:
B.Sc. in Engineering or a related field
3+ years of experience as a Data Engineer working on production systems
Advanced SQL knowledge and experience with relational databases
Proven experience using Python
Hands-on experience building, optimizing, and automating data pipelines, architectures, and data sets
Experience in creating and maintaining ETL/ELT processes
Strong project management and organizational skills
Strong collaboration skills with both technical (R&D) and non-technical (business) teams
Experience using AI tools as part of the data engineering workflow, with a mindset of experimentation, working at scale, and exploring new technologies
Advantage: Azure data services, Databricks, EventHub, and Spark.
This position is open to all candidates.
 
Job Id: 8597003
Location: Tel Aviv-Yafo
Job Type: Full Time
We're hiring a Machine Learning Engineering Manager to guide and grow a high-impact ML team driving AI-powered innovation across Stampli's B2B SaaS platform. You'll lead the design and delivery of AI solutions while mentoring engineers and setting the technical direction for AI-first development at scale.

This is a leadership role with a balance of hands-on engineering and team management, perfect for someone who thrives on solving technical challenges, inspiring a team, and shaping the future of AI in fintech automation.

What You Will Do:
Lead & Mentor: Manage, mentor, and grow a team of ML engineers, fostering technical excellence and career development.
Set Technical Direction: Define the ML strategy, ensuring best practices in architecture, frameworks, and operationalization.
Build and deploy AI-based solutions: Oversee the development and deployment of GenAI/LLM-powered solutions that address real-world challenges across our products.
Scale & Operationalize: Establish scalable ML infrastructure, CI/CD, observability, and data pipelines for high-availability production systems.
Collaborate Cross-Functionally: Partner with product managers, engineers, and business stakeholders, clearly communicate progress, challenges, and outcomes.
Requirements:
7+ years of experience as a Backend Developer / Data Engineer / ML Engineer.
3+ years in a technical leadership role.
Python (Java as an advantage).
Bachelor's degree in Computer Science or a related STEM field (Master's preferred).
Proven track record of building and deploying AI-based solutions at scale.
Deep expertise with LLMs and ML frameworks (e.g., LangChain, LangGraph, Hugging Face, TensorFlow, PyTorch).
Strong background in system design, cloud-native architecture, and microservices.
Experience with NoSQL and real-time data processing pipelines.
Exceptional leadership, mentorship, and communication skills.
Strategic mindset with the ability to balance hands-on coding and team leadership.
This position is open to all candidates.
 
Job Id: 8596972
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Engineer to take ownership of our evolving data platform.

Our data environment is entering a significant growth phase. We are strengthening our Redshift-based warehouse, expanding our transformation capabilities with dbt, and investing in modern engineering standards across the stack.

In parallel, we are building a dedicated application layer for delivering data products. This role is responsible for designing and owning the data foundations that support it.

AI-assisted development is not a side experiment. It is a core part of how we engineer. We expect this role to actively leverage advanced AI development tools as part of daily work, accelerating design, implementation, validation and documentation while maintaining strong architectural judgment and production-level quality.

This position requires architectural thinking, long-term platform vision and the ability to lead complex initiatives from design through production in an AI-augmented engineering environment.

What You Will Do:

Own the architecture and evolution of our cloud-based Data Warehouse.

Lead complex data initiatives end to end, from requirements definition and technical design through implementation, deployment and post-production optimization.

Apply AI-assisted development workflows to improve engineering velocity, code quality and system reliability.

Design and evolve transformation standards and modeling practices using dbt as a strategic layer within the team.

Architect the data foundations that power our data-driven applications.

Translate business needs into structured, production-grade data solutions.

Drive technical standards, consistency and engineering discipline across the data stack.

Take full accountability for delivery, stability and long-term scalability of the systems you build.
Requirements:
8+ years of experience in Data Engineering with strong focus on modern cloud-based DWH architecture.

Proven experience leading projects end to end in production data environments.

Deep production experience with Redshift, dbt and orchestration frameworks such as Airflow.

Strong SQL expertise and solid understanding of dimensional modeling and transformation best practices.

Practical, daily experience using AI development tools as part of your engineering workflow.

Ability to critically evaluate AI-generated output and maintain high architectural and production standards.

Strong system design capabilities and full ownership mindset.

Experience supporting data products or application-facing data environments.

Ability to balance speed with quality and long-term architectural thinking.
This position is open to all candidates.
 
Job Id: 8596952
6 days ago
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a skilled and motivated Data Engineer with expertise in Elasticsearch, cloud technologies, and Kafka. As a data engineer, you will be responsible for designing, building and maintaining scalable and efficient data pipelines that will support our organization's data processing needs.
The role will entail:
Design and develop data platforms based on Elasticsearch, Databricks, and Kafka
Build and maintain data pipelines that are efficient, reliable and scalable
Collaborate with cross-functional teams to identify data requirements and design solutions that meet those requirements
Write efficient and optimized code that can handle large volumes of data
Implement data quality checks to ensure accuracy and completeness of the data
Troubleshoot and resolve data pipeline issues in a timely manner
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field
3+ years of experience in data engineering
Expertise in Elasticsearch, cloud technologies (such as AWS, Azure, or GCP), Kafka and Databricks
Proficiency in programming languages such as Python, Java, or Scala
Experience with distributed systems, data warehousing and ETL processes
Experience with container environments such as AKS/EKS/OpenShift is a plus
High security clearance is a plus
The position is open for all genders as well as people with disabilities.
This position is open to all candidates.
 
Job Id: 8595875