Data Engineer

Jobs on the Hot Board
Hiring at Comblack
Job location: Ramat Gan
Job type: Full-time
COMBLACK is recruiting a Data Engineering Tech Lead (Snowflake) for a financial organization in the center of the country!
An opportunity to lead advanced data development in a Snowflake environment, including architecture, performance, and team management.
Requirements:
5+ years of experience as a Data Engineer, with significant Snowflake experience (at least 3 years)
Hands-on experience with DBT, Python, and Kafka
Experience with performance tuning in large, complex data environments
Experience managing teams and leading development with an Agile methodology
Experience working with permissions, information security (RBAC), and PII data. This position is open to all candidates.
 
Job ID: 8600367
Hiring at Ingima
Job location: Tel Aviv-Yafo
Job type: Full-time
1. Technological leadership of development in a Snowflake environment: designing the solution architecture, using an appropriate development methodology, and handling performance aspects.
2. Managing the work plan of a team of developers using an Agile development methodology.
3. Responsibility for efficient use of system resources and for budget management (FinOps).
4. Responsibility for adopting new product features, with an emphasis on AI capabilities, according to the organization's needs.
5. Responsibility for all information-security and regulatory aspects: managing permissions via permission groups (RBAC), implementing access rules for PII data, and managing separate work environments.
6. Managing relationships with technology partners: cloud management, the DBA team, and the network team.
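The permission-group (RBAC) responsibility described above reduces to resolving a user's effective privileges from role groups. A minimal pure-Python sketch, with entirely hypothetical role and privilege names:

```python
# Minimal sketch of group-based permission resolution (RBAC).
# Role and privilege names below are illustrative assumptions only.
ROLE_PRIVILEGES = {
    "analyst": {"read:curated"},
    "data_engineer": {"read:curated", "read:raw", "write:curated"},
    "pii_reader": {"read:pii"},
}

def effective_privileges(roles):
    """Union of the privileges granted by all of a user's roles."""
    privs = set()
    for role in roles:
        privs |= ROLE_PRIVILEGES.get(role, set())
    return privs

def can_access(roles, privilege):
    """True if any of the user's roles grants the privilege."""
    return privilege in effective_privileges(roles)

# A data engineer without the pii_reader role cannot read PII columns.
print(can_access(["data_engineer"], "read:pii"))                # False
print(can_access(["data_engineer", "pii_reader"], "read:pii"))  # True
```

In a real Snowflake deployment the same idea is expressed with roles and GRANT statements rather than application code; this sketch only illustrates the resolution logic.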
Requirements:
1. Years of experience: more than 5 years as a Data Engineer, including at least 3 years of significant experience in a Snowflake environment.
2. Professional knowledge: Snowflake, DBT, Python, Kafka (required); Spark, Java (an advantage).
3. Experience handling performance-tuning challenges in data-rich environments.
4. Experience implementing permission mechanisms for challenging business scenarios.
5. Experience managing development with Agile. This position is open to all candidates.
 
Job ID: 8602662
Hiring at the Aman Group
Job location: Ramat Gan
Development and maintenance of processes in a cloud (Azure) environment with Databricks and Data Factory for large-scale data processing, performance optimization, and managing data in Parquet files in the cloud.
Independent, ongoing work with users.
Gathering requirements from users, writing technical specifications, and accompanying development through testing, deployment to production, and rollout to users.
Defining interfaces with systems inside and outside the organization.
Requirements:
At least 3 years of experience in Python (required)
Experience writing complex SQL queries (required)
Experience with Spark (required)
Experience working with large databases, including query optimization (an advantage)
Development experience with React (an advantage)
Bachelor's degree in Computer Science (an advantage). This position is open to all candidates.
 
Job ID: 8376478
2 days
Hiring at Mertens – Malam Team
Mertens Malam Team is recruiting an experienced Informatica ETL developer for a leading organization in Petah Tikva.
The role includes developing and maintaining data-integration processes in an Informatica environment within an enterprise DWH, including designing complex data pipelines into data lakes and BI systems end to end; working with Informatica PowerCenter and IICS; working with diverse data sources, including enterprise systems, APIs, and logs; and collaborating with infrastructure, security, and development teams to investigate, improve, and optimize processes.
Requirements:
At least 4 years of hands-on ETL development experience with Informatica PowerCenter in an enterprise DWH environment.
Strong SQL skills, including optimization and performance tuning, and the ability to work with complex data structures.
Experience with relational databases: Oracle, SQL Server, Teradata.
Experience with Informatica IICS.
Familiarity with cloud environments (AWS, Azure, GCP) and with data lakes. This position is open to all candidates.
 
Job ID: 8597453
2 days
Job location: Ramla
A leading technology unit in the Agamim branch (Home Front Command) is seeking a Data Engineer to join a core team on a project of national importance.
The role involves hands-on, end-to-end work on data infrastructure: from the ingestion stage across diverse sources, through developing and managing ETL/ELT processes, to designing and building an advanced data warehouse (DWH).
The work takes place in an advanced technological environment and on classified networks, using modern tools, with direct impact on significant projects.
Full-time position, in Ramla.
Requirements:
At least two years of experience as a Data Engineer (junior candidates with hands-on experience will also be considered) (required).
Experience working with Airflow and Azure Data Factory (required).
Experience with ingestion processes and building a DWH (required).
Strong SQL proficiency (required). This position is open to all candidates.
 
Job ID: 8589963
6 days
Location: Petah Tikva
Job Type: Full Time
We are looking for a Senior Data Engineer to join our Data Platform team, focused on building and evolving a secure, enterprise-grade Data Lake that powers large-scale global search, indexing, analytics, and AI-driven capabilities.
In this role, you will design and deliver scalable, compliant, and high-performance data pipelines that ingest, transform, and structure massive volumes of sensitive data to support mission-critical discovery and search workloads.
This position is ideal for a senior engineer who combines deep hands-on data engineering expertise with strong architectural thinking, particularly in regulated and security-sensitive environments. You will work closely with Product, Search, Backend, Security, and Data Science teams to ensure data is searchable, governed, reliable, and compliant by design.
Key Responsibilities:
Enterprise Data Lake Architecture:
Design and evolve a secure, scalable Data Lake architecture on AWS.
Define storage layout, partitioning strategies, and data organization optimized for large-scale search and analytics workloads.
Implement ACID-compliant table formats (e.g., Iceberg) to ensure reliability, consistency, and schema evolution.
Design ingestion patterns (batch and streaming) for high-volume, heterogeneous datasets.
Implement lifecycle management, retention policies, and environment isolation.
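The storage-layout and partitioning bullets above can be illustrated with a minimal sketch of a Hive-style, date-partitioned object-store layout; the bucket, dataset, and field names are hypothetical:

```python
from datetime import date

# Hypothetical sketch of a date-partitioned object-store key layout.
# Query engines can prune partitions by event_date and source instead
# of scanning the whole dataset. All names are illustrative only.
def partition_key(dataset: str, event_date: date, source: str) -> str:
    """Build a Hive-style partition path for one dataset slice."""
    return (
        f"s3://example-data-lake/{dataset}/"
        f"event_date={event_date.isoformat()}/source={source}/"
    )

print(partition_key("events", date(2025, 1, 31), "crm"))
# s3://example-data-lake/events/event_date=2025-01-31/source=crm/
```

Table formats such as Iceberg manage this mapping internally (hidden partitioning), but the pruning idea is the same.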
Global Search & Indexing Enablement:
Design data pipelines that prepare and structure data for global search and indexing systems.
Optimize data models and transformations to support high-performance search queries and distributed indexing.
Collaborate with search and backend teams to ensure efficient data availability and low-latency access patterns.
Support incremental ingestion, change-data-capture (CDC), and near real-time processing where required.
Ensure traceability and reproducibility of indexed datasets.
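Incremental ingestion via change-data-capture, mentioned above, ultimately means merging keyed change events into the current table state. A toy in-memory sketch (the event shape is an assumption; real pipelines would express the same merge with Spark or Iceberg MERGE semantics):

```python
# Toy sketch of applying CDC events to a keyed table: last writer
# wins per key, deletes remove the key. Event fields are hypothetical.
def apply_cdc(table: dict, events: list) -> dict:
    """Apply insert/update/delete change events in order."""
    for ev in events:
        key = ev["id"]
        if ev["op"] in ("insert", "update"):
            table[key] = ev["row"]
        elif ev["op"] == "delete":
            table.pop(key, None)
    return table

events = [
    {"op": "insert", "id": 1, "row": {"name": "a"}},
    {"op": "update", "id": 1, "row": {"name": "b"}},
    {"op": "insert", "id": 2, "row": {"name": "c"}},
    {"op": "delete", "id": 2},
]
print(apply_cdc({}, events))  # {1: {'name': 'b'}}
```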
Secure & Regulated Data Engineering:
Implement strict access controls (IAM), encryption (at rest and in transit), and auditing mechanisms.
Ensure compliance with enterprise security and regulatory requirements.
Design systems with data lineage, traceability, and audit-readiness in mind.
Partner with Security and Compliance teams to support internal and external audits.
Handle sensitive and regulated datasets with strong governance and segregation controls.
Pipeline Development & Platform Engineering:
Build and maintain high-scale ETL/ELT pipelines using Apache Spark (EMR/Glue) and AWS-native services.
Leverage S3, Athena, Kinesis, Lambda, Step Functions, and EKS to support both batch and streaming workloads.
Implement Infrastructure as Code (Terraform / CDK / SAM) for reproducible environments.
Establish observability, monitoring, and SLA management for mission-critical pipelines.
Continuously optimize performance, scalability, and cost efficiency.
Cross-Functional Collaboration:
Work closely with Product Managers to translate global search and discovery requirements into scalable data solutions.
Collaborate with ML and Data Science teams to enable feature extraction and enrichment pipelines.
Contribute to architecture discussions and promote best practices in enterprise data engineering.
Provide documentation and clear technical artifacts for regulated environments.
דרישות:
Technical Expertise:
Strong hands-on experience with Apache Spark (EMR, Glue, PySpark).
Deep experience with AWS data services: S3, EMR, Glue, Athena, Lambda, Step Functions, Kinesis.
Proven experience designing and operating Data Lakes / Lakehouse architectures (Iceberg preferred).
Experience building scalable batch and streaming pipelines for large datasets.
Strong understanding of distributed systems and data modeling for search/indexing use cases.
Experience implementing secure, compliant data architectures (IAM, encryption, auditing).
Infrastructure as Code experience (Terraform / CDK / SAM).
Strong Python skills (TypeScript is a plus).
Enterprise and search-oriented mindset. This position is open to all candidates.
 
Job ID: 8600560
6 days
Location: Petah Tikva
Job Type: Full Time
We are seeking a Senior Backend & Data Engineer to join our SaaS Data Platform team.
This role offers a unique opportunity to design and build large-scale, high-performance data platforms and backend services that power our cloud-based products.
You will own features end to end, from architecture and design through development and production deployment, while working closely with Data Science, Machine Learning, DevOps, and Product teams.
What You'll Do:
Design, develop, and maintain scalable, secure data platforms and backend services on AWS.
Build batch and streaming ETL/ELT pipelines using Spark, Glue, Athena, Iceberg, Lambda, and EKS.
Develop backend components and data-processing workflows in a cloud-native environment.
Optimize performance, reliability, and observability of data pipelines and backend services.
Collaborate with ML, backend, DevOps, and product teams to deliver data-powered solutions.
Drive best practices, code quality, and technical excellence within the team.
Ensure security, compliance, and auditability using AWS best practices (IAM, encryption, auditing).
Tech Stack:
AWS Services: S3, Lambda, Glue, Step Functions, Kinesis, Athena, EMR, Airflow, Iceberg, EKS, SNS/SQS, EventBridge
Languages: Python (Node.js/TypeScript a plus)
Data & Processing: batch & streaming pipelines, distributed computing, serverless architectures, big data workflows
Tooling: CI/CD, GitHub, IaC (Terraform/CDK/SAM), containerized environments, Kubernetes
Observability: CloudWatch, Splunk, Grafana, Datadog
Requirements:
8+ years of experience in Data Engineering and/or Backend Development in AWS-based, cloud-native environments
Strong hands-on experience writing Spark jobs (PySpark) and running workloads on EMR and/or Glue
Proven ability to design and implement scalable backend services and data pipelines
Deep understanding of data modeling, data quality, pipeline optimization, and distributed systems
Experience with Infrastructure as Code and automated deployment of data infrastructure
Strong debugging, testing, and performance-tuning skills in agile environments
High level of ownership, curiosity, and problem-solving mindset.
Nice to Have:
AWS certifications (Solutions Architect, Data Engineer)
Experience with ML pipelines or AI-driven analytics
Familiarity with data governance, self-service data platforms, or data mesh architectures
Experience with PostgreSQL, DynamoDB, MongoDB
Experience building or consuming high-scale APIs
Background in multi-threaded or distributed system development
Domain experience in cybersecurity, law enforcement, or other regulated industries.
This position is open to all candidates.
 
Job ID: 8600551
6 days
Location: Petah Tikva
Job Type: Full Time
We are seeking a QA Engineer with a strong passion for data quality, performance, and scale to join our Data Platform team.
This role is ideal for a QA professional who enjoys working close to complex data systems, understands large-scale pipelines, and wants to play a key role in shaping the automation and quality strategy of a data engineering organization.
You will act as the primary quality owner for high-volume, mission-critical data platforms, working closely with data engineers, backend developers, and platform teams.
What You'll Do:
Data Quality & Validation:
Design and execute data validation strategies for large-scale batch and streaming pipelines
Ensure data correctness, completeness, freshness, and consistency across the data lake
Define and automate checks for schema changes, data drift, and data quality regressions
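The completeness, freshness, and schema checks described above can be sketched in plain Python; the field names and the 24-hour freshness threshold are illustrative assumptions, not anything prescribed by the listing:

```python
from datetime import datetime, timedelta, timezone

# Illustrative batch-level data-quality checks: non-emptiness,
# completeness of a key field, freshness, and schema conformance.
EXPECTED_SCHEMA = {"id", "event_time", "value"}

def check_batch(records, max_age_hours=24, now=None):
    """Return a dict of named data-quality check results for one batch."""
    now = now or datetime.now(timezone.utc)
    newest = max((r["event_time"] for r in records), default=None)
    return {
        "non_empty": len(records) > 0,
        "complete": all(r.get("id") is not None for r in records),
        "fresh": newest is not None
                 and now - newest <= timedelta(hours=max_age_hours),
        "schema_ok": all(set(r) == EXPECTED_SCHEMA for r in records),
    }

batch = [{"id": 1, "event_time": datetime.now(timezone.utc), "value": 3.5}]
print(check_batch(batch))  # all four checks pass for this batch
```

Production systems typically run such checks inside the pipeline (or with a dedicated data-quality tool) and alert on regressions rather than printing results.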
Performance & Scalability Testing:
Plan and execute performance and scalability tests for data pipelines and processing jobs
Identify bottlenecks across ingestion, transformation, and querying layers
Partner with engineers to validate performance improvements and prevent regressions
Automation & Infrastructure:
Develop and maintain the data team's QA automation infrastructure
Build reusable testing frameworks and tools tailored for large datasets and pipelines
Integrate automated tests into CI/CD pipelines and production monitoring workflows
Collaboration & Ownership:
Work closely with data engineers, backend developers, and platform engineers throughout the development lifecycle
Act as the sole QA owner within a cross-functional team, driving quality without becoming a bottleneck
Participate in design discussions to ensure testability and observability are built in from the start
Quality Mindset & Communication:
Champion a quality-first culture within the team
Clearly communicate risks, findings, and quality metrics to technical stakeholders
Balance thoroughness with pragmatism in fast-moving, high-scale environments.
Requirements:
Experience:
Proven experience as a QA Engineer, ideally within data-intensive or platform teams
Hands-on experience testing large-scale systems, pipelines, or distributed architectures
Experience working as the sole QA in a cross-functional engineering team.
Technical Skills:
Strong understanding of data pipelines and data lake concepts
Experience validating large datasets and implementing data quality checks
Familiarity with performance and load testing methodologies
Experience building test automation frameworks (Python preferred)
Understanding of CI/CD pipelines and automation best practices.
Mindset & Collaboration:
Passion for data, performance, and technology
Self-driven, independent, and comfortable owning QA end-to-end
Strong communication skills and ability to collaborate across disciplines
Curious, proactive, and eager to learn complex systems.
Nice to Have:
Experience testing big data or analytics platforms
Familiarity with cloud environments (AWS preferred)
Knowledge of Spark, SQL-based analytics, or data processing frameworks
Experience with data observability or data quality tools.
This position is open to all candidates.
 
Job ID: 8600532
6 days
Location: Merkaz
Job Type: Full Time and Hybrid work
We are looking for a talented Senior Data Engineer to join our data group.
The team is responsible for processing and transforming data from multiple external sources while building and maintaining an internal serving platform. Our key challenges include operating at scale, integrating with diverse external interfaces, and ensuring the data is served in a consistent and reliable manner.
Responsibilities:
Designing and implementing the data platform.
Transforming, modeling, and serving all medical data.
Utilize data best practices to improve the product and enable data-driven decision-making.
Collaborate closely with cross-functional teams to understand business requirements and translate them into data-driven solutions.
Stay updated with the latest research and advancements in the data field.
Requirements:
6+ years of experience in software development, with at least 4-5 years as a data engineer.
Proven track record designing and implementing scalable ETL/ELT data pipelines.
Strong SQL skills.
Experience with relational and analytical data platforms (e.g., PostgreSQL, Snowflake, data lakes).
Strong coding skills (preferably in Python), with prior software engineering experience (e.g., API development) a plus.
Experience with cloud environments, AWS preferred.
Previous managerial or leadership experience is a strong advantage.
This position is open to all candidates.
 
Job ID: 8600451
6 days
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Analytics Engineer to help design and build the engineering foundation that powers analytics across the organization.
Our goal is to create a modern data environment where analytics development is fast, reliable, scalable, and increasingly automated. This includes building strong data warehouse foundations, scalable modeling layers, and introducing AI-powered tools and automation that accelerate how data products are built and used.
In this role, you will be part of an analytics squad, working closely with analysts and business stakeholders while building the infrastructure, automation frameworks, and intelligent tooling that enable analytics to scale across the organization.
This is a unique opportunity to help build the next generation of the data organization.
Key Responsibilities
Lead AI adoption in the analytics platform, building tools and workflows that automate analytics development, dashboards, and data exploration
Design and build scalable data warehouse models and transformation layers
Build and optimize ETL pipelines and core analytics infrastructure (Bronze / Silver)
Improve performance, reliability, and scalability of the analytics platform
Develop automation and internal tools that accelerate analytics workflows
Enable self-serve data access across the company through semantic layers and reusable datasets
Collaborate with analysts and business teams within an analytics squad.
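The Bronze/Silver pipeline work in the responsibilities above typically means parsing, typing, and de-duplicating raw records into a clean layer. A toy sketch with hypothetical field names (real implementations would use dbt models or Spark jobs):

```python
# Toy Bronze -> Silver step in a medallion-style layout: raw rows are
# typed and de-duplicated; malformed rows are dropped in this sketch
# (a real pipeline might quarantine them). Fields are hypothetical.
def to_silver(bronze_rows):
    """Parse and de-duplicate raw rows; last row per user_id wins."""
    silver = {}
    for row in bronze_rows:
        try:
            user_id = int(row["user_id"])
            amount = float(row["amount"])
        except (KeyError, ValueError, TypeError):
            continue  # malformed row, dropped
        silver[user_id] = {"user_id": user_id, "amount": amount}
    return list(silver.values())

bronze = [
    {"user_id": "1", "amount": "10.5"},
    {"user_id": "1", "amount": "12.0"},  # later duplicate wins
    {"user_id": "oops", "amount": "3"},  # malformed, dropped
]
print(to_silver(bronze))  # [{'user_id': 1, 'amount': 12.0}]
```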
Requirements:
6+ years of experience in Data Engineering and Analytics Engineering roles, building modern data warehouses and analytics platforms using technologies such as BigQuery, dbt, and Python
Experience with workflow orchestration (Dagster, Airflow, or equivalent) and building reliable, observable data pipelines
Hands-on experience using AI coding platforms and tools to automate data engineering and analytics workflows
Strong engineering practices including version control (Git), testing, code reviews, and CI/CD
Experience building automation systems and internal tools for data teams
Experience working closely with analysts, product teams, and business stakeholders in analytics-driven environments
Strong problem-solving skills with a builder mindset.
This position is open to all candidates.
 
Job ID: 8600360
6 days
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an experienced and hands-on Backend Engineer to be a key player in building high-scale Data Platforms and Products for our business teams.
This role involves working with large datasets and scalable systems, and developing internal tools to enable data-driven decision-making across the company.
Key Responsibilities:
Develop internal tools for various teams.
Build and maintain microservices and APIs to support diverse workflows.
Operate in a real-time, event-driven environment.
Create and manage data pipelines.
Take ownership of multiple systems and products.
Develop and deploy machine learning pipelines to production in an event-driven architecture.
Work in a multi-cloud environment (Azure/GCP/AWS).
Integrate third-party tools with our platform.
Translate business requirements into technical specifications.
Our Tech Stack:
Python, BigQuery, Redis, RabbitMQ, MySQL, Tornado, SQLAlchemy, Airflow, Airbyte, NewRelic, Elastic, Kubernetes (K8S).
Requirements:
Experience: Minimum 5 years as a Backend Engineer.
Proficiency in Python: At least 5 years of experience, or expertise in an equivalent programming language.
Microservices and APIs: Proven experience in writing and maintaining microservices and REST APIs.
SQL Expertise: Strong proficiency in SQL.
Event-Driven Development: Hands-on experience with event-based development.
Big Data Experience: Familiarity with big data and high-velocity/volume systems is a plus.
Cloud Environments: Experience with multi-cloud environments (Azure, GCP, AWS).
This position is open to all candidates.
 
Job ID: 8600293
6 days
Location: Petah Tikva
Job Type: Full Time
Our Data team consists of highly skilled senior software and data professionals who collaborate to solve complex data challenges. We process billions of records daily from multiple sources, using diverse infrastructure and multi-stage pipelines with intricate data structures, advanced queries, and complex BI.

A bit about our infrastructure. Our main databases are Snowflake, Iceberg on AWS, and Trino. Spark on EMR processes the huge influx of data. Airflow does most of the ETL.

The data we deliver drives insights for both internal and external customers. Our internal customers use it routinely for decision-making across the organization, such as enhancing our product offerings.

What You'll Do
Build, maintain, and optimize data infrastructure.
Contribute to the evolution of our AWS-based infrastructure.
Work with database technologies - Snowflake, Iceberg, Trino, Athena, and Glue.
Utilize Airflow, Spark, Kubernetes, ArgoCD and AWS.
Provide AI tools to ease data access for our customers.
Integrate external tools, for example for anomaly detection or data-source ingestion.
Use AI to accelerate your development.
Assure the quality of the infrastructure by employing QA automation methods.
Requirements:
5+ years of experience as a Data Engineer or Backend Developer.
Experience with Big Data and cloud-based environments, preferably AWS.
Experience with Spark and Airflow.
Experience with Snowflake, Databricks, BigQuery, or Iceberg.
Strong development experience in Python.
Knowledge of Scala for Spark is a plus.
A team player who cares about the team, the service, and its customers.
Strong analytical skills.
This position is open to all candidates.
 
Job ID: 8600292
Location: Ramat Gan
Job Type: Full Time
We are looking for a DataOps Engineer to own the infrastructure that powers our large-scale data processing platform. This is a platform-facing role sitting at the intersection of data engineering and infrastructure - you'll be the person who makes Spark run reliably and efficiently on Kubernetes, so that data engineers can build with confidence.
You understand data workloads deeply enough to make smart infrastructure decisions, and you have the production instincts to keep complex systems healthy at scale. If you get excited about shaving minutes off Spark job runtimes, right-sizing cluster autoscalers, and building the internal tooling that makes a data platform feel effortless, this role is for you.
RESPONSIBILITIES:
Design, deploy, and operate the Kubernetes-based infrastructure that runs Apache Spark and large-scale data processing workloads
Own the reliability, performance, and cost-efficiency of the data platform - including SLAs, autoscaling, resource quotas, and workload isolation
Manage Spark-on-K8s configurations, Airflow infrastructure, and Databricks integration; tune for throughput, latency, and cost
Build and maintain CI/CD pipelines and infrastructure-as-code for data platform components
Develop observability tooling - metrics, logging, alerting, and data quality dashboards - to proactively surface issues across the pipeline stack
Collaborate closely with Data Engineers to understand workload patterns and translate them into infrastructure decisions
Manage cloud storage (GCS/S3), Delta Lake, and Unity Catalog infrastructure
Drive platform improvements end-to-end: from design through deployment and ongoing ownership.
Requirements:
5+ years of experience in a production infrastructure, SRE, or DevOps role
Strong Kubernetes experience: autoscaling, resource management, and the broader K8s ecosystem
2+ years with infrastructure-as-code tools (Terraform, Pulumi, or similar)
Proficiency in at least one general-purpose language - Python or Go preferred
Experience with workflow orchestration tools, particularly Apache Airflow
Solid understanding of cloud infrastructure - GCP preferred (GCS, GKE, IAM)
Strong observability skills: metrics pipelines, structured logging, alerting frameworks
OTHER REQUIREMENTS:
Hands-on experience running data processing workloads (Apache Spark, Flink, or similar) in production
Familiarity with Delta Lake, Parquet, and columnar storage formats
Experience with data quality frameworks and pipeline lineage tooling
Knowledge of query optimization, partition strategies, and Spark performance tuning
Experience managing queues and databases (Kafka, PostgreSQL, Redis, or similar).
This position is open to all candidates.
 
Job ID: 8599274
01/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are where high-growth startups turn when they need to move faster, scale smarter, and make the most of the cloud. As an AWS Premier Partner and Strategic Partner, we deliver hands-on DevOps, FinOps, and GenAI support that drives real results.
We work across EMEA and the US, fueling innovation and solving complex challenges daily. Join us to grow your skills, shape bold ideas, and help build the future of tech.
We're looking for a Senior Data Architect to help shape how high-growth startups build and scale on AWS. In this role, you'll design and deliver end-to-end data and analytics solutions, from architecture and pipelines to visualization and insights, guiding customers from concept through production. You'll work closely with startup founders, technical leaders, and account executives to create scalable, cost-efficient architectures that drive real business impact.
Work location - hybrid from Tel Aviv
If you are interested in this opportunity, please submit your CV in English.
Key Responsibilities
Design, develop, and implement data & analytics solutions to meet business requirements and create cost-efficient, highly available, and scalable customer solutions, including Well-Architected reviews and SoW.
Research and analyze current solutions and initiate improvement plans.
Collaborate with other engineers and stakeholders to ensure solutions are designed and developed according to best practices.
Lead workshops, POCs, and architecture reviews with startup customers, conferences, webinars, and more.
Stay up to date on Data Engineering and Analytics trends and contribute to internal enablement.
Frequent travel, both locally (on demand, to meet customers and partners and attend local events) and abroad (at least once a quarter).
Requirements:
3+ years of hands-on experience in AWS, including solution design, migration, and maintenance
2+ years in customer-facing technical roles (e.g., SRE, Cloud Architect, Customer Engineer)
Production experience with AWS infrastructure, data services, and real-time data processing
Proficiency in a wide range of AWS services (e.g., EC2, S3, RDS, Lambda, IAM, VPC, CloudFormation, DynamoDB)
Skilled in AWS analytics tools (Glue, Athena, Redshift, EMR, Kinesis, MSK, QuickSight, dbt)
Understanding of information security best practices
Strong verbal and written communication in English and local language
Ability to lead end-to-end technical engagements and work in fast-paced environments
AWS Solutions Architect - Associate certification
Experience with Iceberg - an advantage
Experience with Kubernetes, CI/CD, and DevOps tools - an advantage
Experience with ETL processes, data lakes, and pipelines - an advantage
Experience writing SOWs, HLDs, and effort estimates - an advantage
AWS Professional or Data Analytics/Data Engineer certifications - an advantage.
This position is open to all candidates.
 
Job ID: 8599151
31/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We're seeking a Mid-to-Senior Data Engineer to join our Cloud Identity & Perimeter team, a critical component of our security infrastructure. Our team develops and maintains complex data pipelines that process billions of records daily, analyzing identity-related security patterns, effective permissions, internet exposure, and attack paths. We're at the forefront of securing enterprise identities and delivering actionable security insights at scale.

What You'll Do:

Design and implement high-performance, distributed data processing pipelines handling petabytes of security data

Architect complex data transformations using Apache Spark for large-scale batch and stream processing

Be part of shaping new products while collaborating with product teams, customers, and sales.

Build and optimize real-time data streaming solutions using Kafka for identity analytics

Develop and maintain scalable ETL processes that handle billions of daily events

Create efficient data models for complex security analytics queries

Collaborate with cross-functional teams to deliver high-impact security features

Optimize query performance and data storage patterns for large-scale distributed systems

Participate in system design discussions and architectural decisions
Requirements:
5+ years of experience in data engineering or similar roles

Strong programming skills in Go and/or Java

Extensive experience with big data technologies (Apache Spark, Kafka)

Proven track record working with distributed databases (Cassandra, Elasticsearch)

Experience building and maintaining production-grade data pipelines

Strong understanding of data modeling and optimization techniques

Excellent problem-solving skills and attention to detail

BS/MS in Computer Science or related field, or equivalent experience
This position is open to all candidates.
 
Job ID: 8598652
Location: Tel Aviv-Yafo
Job Type: Full Time
This role has been designed as Hybrid with an expectation that you will work on average 2 days per week from our office.

We are looking for a highly skilled Senior Data Engineer with strong architectural expertise to design and evolve our next-generation data platform. You will define the technical vision, build scalable and reliable data systems, and guide the long-term architecture that powers analytics, operational decision-making, and data-driven products across the organization.

This role is both strategic and hands-on. You will evaluate modern data technologies, define engineering best practices, and lead the implementation of robust, high-performance data solutions, including the design, build, and lifecycle management of data pipelines that support batch, streaming, and near-real-time workloads.

What You'll Do

Architecture & Strategy

Own the architecture of our data platform, ensuring scalability, performance, reliability, and security.
Define standards and best practices for data modeling, transformation, orchestration, governance, and lifecycle management.
Evaluate and integrate modern data technologies and frameworks that align with our long-term platform strategy.
Collaborate with engineering and product leadership to shape the technical roadmap.

Engineering & Delivery

Design, build, and manage scalable, resilient data pipelines for batch, streaming, and event-driven workloads.
Develop clean, high-quality data models and schemas to support analytics, BI, operational systems, and ML workflows.
Implement data quality, lineage, observability, and automated testing frameworks.
Build ingestion patterns for APIs, event streams, files, and third-party data sources.
Optimize compute, storage, and transformation layers for performance and cost efficiency.

Leadership & Collaboration

Serve as a senior technical leader and mentor within the data engineering team.
Lead architecture reviews, design discussions, and cross-team engineering initiatives.
Work closely with analysts, data scientists, software engineers, and product owners to define and deliver data solutions.
Communicate architectural decisions and trade-offs to technical and non-technical stakeholders.
Requirements:
What We're Looking For:
6-10+ years of experience in Data Engineering, with demonstrated architectural ownership.
Expert-level experience with Snowflake (mandatory), including performance optimization, data modeling, security, and ecosystem components.
Expert proficiency in SQL and strong Python skills for pipeline development and automation.
Experience with modern orchestration tools (Airflow, Dagster, Prefect, or equivalent).
Strong understanding of ELT/ETL patterns, distributed processing, and data lifecycle management.
Familiarity with streaming/event technologies (Kafka, Kinesis, Pub/Sub, etc.).
Experience implementing data quality, observability, and lineage solutions.
Solid understanding of cloud infrastructure (AWS, GCP, or Azure).
Strong background in DataOps practices: CI/CD, testing, version control, automation.
Proven leadership in driving architectural direction and mentoring engineering teams.

Nice to Have:
Experience with data governance or metadata management tools.
Hands-on experience with DBT, including modeling, testing, documentation, and advanced features.
Exposure to machine learning pipelines, feature stores, or MLOps.
Experience with Terraform, CloudFormation, or other IaC tools.
Background designing systems for high scale, security, or regulated environments.
This position is open to all candidates.
 
Job ID: 8598137
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
This role has been designed as Hybrid with an expectation that you will work on average 2 days per week from an office.

We are looking for a talented Data Engineer to help build and enhance the data platform that supports analytics, operations, and data-driven decision-making across the organization. You will work hands-on to develop scalable data pipelines, improve data models, ensure data quality, and contribute to the continuous evolution of our modern data ecosystem.

You'll collaborate closely with Senior Engineers, Analysts, Data Scientists, and stakeholders across the business to deliver reliable, well-structured, and well-governed data solutions.


What You'll Do:

Engineering & Delivery

Build, maintain, and optimize data pipelines for batch and streaming workloads.

Develop reliable data models and transformations to support analytics, reporting, and operational use cases.

Integrate new data sources, APIs, and event streams into the platform.

Implement data quality checks, testing, documentation, and monitoring.

Write clean, performant SQL and Python code.

Contribute to improving performance, scalability, and cost-efficiency across the data platform.

Collaboration & Teamwork

Work closely with senior engineers to implement architectural patterns and best practices.

Collaborate with analysts and data scientists to translate requirements into technical solutions.

Participate in code reviews, design discussions, and continuous improvement initiatives.

Help maintain clear documentation of data flows, models, and processes.

Platform & Process

Support the adoption and roll-out of new data tools, standards, and workflows.

Contribute to DataOps processes such as CI/CD, testing, and automation.

Assist in monitoring pipeline health and resolving data-related issues.
Requirements:
What We're Looking For

2-5+ years of experience as a Data Engineer or similar role.

Hands-on experience with Snowflake (mandatory), including SQL, modeling, and basic optimization.

Experience with dbt (or similar)-model development, tests, documentation, and version control workflows.

Strong SQL skills for data modeling and analysis.

Proficiency with Python for pipeline development and automation.

Experience working with orchestration tools (Airflow, Dagster, Prefect, or equivalent).

Understanding of ETL/ELT design patterns, data lifecycle, and data modeling best practices.

Familiarity with cloud environments (AWS, GCP, or Azure).

Knowledge of data quality, observability, or monitoring concepts.

Good communication skills and the ability to collaborate with cross-functional teams.


Nice to Have:

Exposure to streaming/event technologies (Kafka, Kinesis, Pub/Sub).

Experience with data governance or cataloging tools.

Basic understanding of ML workflows or MLOps concepts.

Experience with infrastructure-as-code tools (Terraform, CloudFormation).

Familiarity with testing frameworks or data validation tools.

Additional Skills:

Cloud Architectures, Cross Domain Knowledge, Design Thinking, Development Fundamentals, DevOps, Distributed Computing, Microservices Fluency, Full Stack Development, Security-First Mindset, User Experience (UX).
This position is open to all candidates.
 
Job ID: 8598093
30/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Required ML Data Engineer
Israel: Tel Aviv/ Hybrid (Israel)
R&D | Full Time | Job Id: 24792
Your Impact & Responsibilities:
As a Data Engineer - AI Technologies, you will be responsible for building and operating the data foundation that enables our LLM and ML research: from ingestion and augmentation, through labeling and quality control, to efficient data delivery for training and evaluation.
You will:
Own data pipelines for LLM training and evaluation
Design, build and maintain scalable pipelines to ingest, transform and serve large-scale text, log, code and semi-structured data from multiple products and internal systems.
Drive data augmentation and synthetic data generation
Implement and operate pipelines for data augmentation (e.g., prompt-based generation, paraphrasing, negative sampling, multi-positive pairs) in close collaboration with ML Research Engineers.
Build tagging, labeling and annotation workflows
Support human-in-the-loop labeling, active learning loops and semi-automated tagging. Work with domain experts to implement tools, schemas and processes for consistent, high-quality annotations.
Ensure data quality, observability and governance
Define and monitor data quality checks (coverage, drift, anomalies, duplicates, PII), manage dataset versions, and maintain clear documentation and lineage for training and evaluation datasets.
Optimize training data flows for efficiency and cost
Design storage layouts and access patterns that reduce training time and cost (e.g., sharding, caching, streaming). Work with ML engineers to make sure the right data arrives at the right place, in the right format.
Build and maintain data infrastructure for LLM workloads
Work with cloud and platform teams to develop robust, production-grade infrastructure: data lakes / warehouses, feature stores, vector stores, and high-throughput data services used by training jobs and offline evaluation.
Collaborate closely with ML Research Engineers and security experts
Translate modeling and security requirements into concrete data tasks: dataset design, splits, sampling strategies, and evaluation data construction for specific security use.
Requirements:
3+ years of hands-on experience as a Data Engineer or ML/Data Engineer, ideally in a product or platform team.
Strong programming skills in Python and experience with at least one additional language commonly used for data / backend (e.g., SQL, Scala, or Java).
Solid experience building ETL / ELT pipelines and batch/stream processing using tools such as Spark, Beam, Flink, Kafka, Airflow, Argo, or similar.
Experience working with cloud data platforms (e.g., AWS, GCP, Azure) and modern data storage technologies (object stores, data warehouses, data lakes).
Good understanding of data modeling, schema design, partitioning strategies and performance optimization for large datasets.
Familiarity with ML / LLM workflows: train/validation/test splits, dataset versioning, and the basics of model training and evaluation (you don't need to be the primary model researcher, but you understand what the models need from the data).
Strong software engineering practices: version control, code review, testing, CI/CD, and documentation.

Ability to work independently and in collaboration with ML engineers, researchers and security experts, and to translate high-level requirements into concrete data engineering tasks. 
Nice to Have 
Experience supporting LLM or NLP workloads, including dataset construction for pre-training / fine-tuning, or retrieval-augmented generation (RAG) pipelines. 
Familiarity with ML tooling such as experiment tracking (e.g., Weights & Biases, MLflow) and ML-focused data tooling (feature stores, vector databases). 
Background in security / cyber domains (logs, alerts, incidents, SOC workflows) or other high-volume, high-variance data environments. 
This position is open to all candidates.
 
Job ID: 8597480
30/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Ready to lead the way in building our next-gen data platforms? Join us and shape the future of secure connectivity!
We are looking for a Data Engineering Team Leader with deep expertise in building and managing data pipelines and streaming architecture.
Job Id: 24787
This role is ideal for an experienced and proactive leader with strong technical skills in distributed systems and data platforms. You will drive the architecture, design, and development of scalable data ingestion and processing solutions. This is an exciting opportunity to join a growing product in an enterprise environment with significant impact and room for professional growth.
This job is located in Tel Aviv (hybrid).
About Us:
We're creating the industry's leading SASE platform, merging advanced security with seamless connectivity. Our mission is to empower businesses to thrive in a cloud-first world, and data is at the heart of this transformation.
Key Responsibilities:
Inspire and mentor a top-tier data engineering team to deliver mission-critical solutions
Architect and optimize data ingestion, enrichment, and storage for massive scale and reliability
Collaborate with cross-functional teams to ensure seamless integration and data availability
Define best practices and enforce engineering excellence across the data domain.
Requirements:
4+ years of hands-on experience in data engineering, with strong knowledge of streaming technologies (Kafka/MSK, Flink) and distributed systems on AWS
2+ years of leadership experience in data engineering or related fields.
Strong development skills in Java and deep understanding of data modeling, ETL, and real-time analytics
Experience developing and maintaining a multi-tenant SaaS solution on AWS
Experience with React - advantage
A natural leader with strong communication skills and a can-do, hands-on approach.
BSc in computer science/software engineering (or equivalent).
Fluent English (written & spoken).
This position is open to all candidates.
 
Job ID: 8597474