Data Engineer

07/04/2026
Location: Herzliya
Job Type: Full Time
We are expanding and seeking an experienced Data Engineer for challenging work in the world of Big Data.
Design, develop & maintain the BIG DATA platform (SingleStore/Vertica).
Design, develop & maintain the BIG DATA infrastructure (development, monitoring, installation, integration, etc.).
Be part of the dynamic and agile core DBA team.
Collaborate with field engineers for the deployment phases.
Work on major, challenging projects with enterprise and startup customers.
Job location - Herzliya, Israel.
Requirements:
Over 3 years' experience as a Data Engineer.
Experience with databases.
Undergraduate degree in computer science / engineering or candidates with army service in software development / system administration - a significant advantage.
Patient and service-oriented, able to fit well into a team, with good self-learning skills, lateral thinking, and the ability to work well under pressure.
Experience in working with customers.
This position is open to all candidates.
 
Job ID: 8602152
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a Senior Data Infra Engineer. You will be responsible for designing and building all data, ML pipelines, data tools, and cloud infrastructure required to transform massive, fragmented data into a format that supports processes and standards. Your work directly empowers business stakeholders to gain comprehensive visibility, automate key processes, and drive strategic impact across the company.

Responsibilities

Design and Build Data Infrastructure: Design, plan, and build all aspects of the platform's data, ML pipelines, and supporting infrastructure.
Optimize Cloud Data Lake: Build and optimize an AWS-based Data Lake using cloud architecture best practices for partitioning, metadata management, and security to support enterprise-scale operations.
Lead Project Delivery: Lead end-to-end data projects from initial infrastructure design through to production monitoring and optimization.
Solve Integration Challenges: Implement optimal ETL/ELT patterns and query techniques to solve challenging data integration problems sourced from structured and unstructured data.
Requirements:
Experience: 5+ years of hands-on experience designing and maintaining big data pipelines in on-premises or hybrid cloud SaaS environments.
Programming & Databases: Proficiency in one or more programming languages (Python, Scala, Java, or Go) and expertise in both SQL and NoSQL databases.
Engineering Practice: Proven experience with software engineering best practices, including testing, code reviews, design documentation, and CI/CD.
AWS Experience: Experience developing data pipelines and maintaining data lakes, specifically on AWS.
Streaming & Orchestration: Familiarity with Kafka and workflow orchestration tools like Airflow.
This position is open to all candidates.
 
Job ID: 8601803
Location: Netanya
Job Type: Full Time and Hybrid work
Required Senior Data Engineer - Core Data Platform Team
Our Core Data Platform team
We're a leading force in the ad tech industry, revolutionizing how brands connect with their audiences. Our platform processes billions of ad impressions daily, generating massive datasets that drive our core business. We thrive on innovation and seek a Senior Data Engineer to help us build and scale the data infrastructure that powers our insights and analytics. This is a unique opportunity to work with cutting-edge technologies and make a direct impact on our products.
What will you do?
As a Senior Data Engineer, you'll be a key part of our data platform team, responsible for designing, building, and maintaining robust and scalable data pipelines. You'll work closely with data scientists, analysts, and server-side engineers to ensure our data is reliable, accessible, and ready for analysis. Your expertise will be crucial in expanding our data warehouse and data lake capabilities, enabling us to deliver next-generation ad tech solutions.
Your mission will be to:
Develop and Optimize Data Pipelines: Design, build, and maintain ETL/ELT pipelines using Apache Spark to ingest, process, and transform large-scale datasets from various sources.
Manage Cloud Infrastructure: Architect and manage our data infrastructure primarily on Google Cloud Platform (GCP) or Amazon Web Services (AWS). This includes services like BigQuery, S3, GCS, EMR, and AirFlow.
Enhance Data Storage: Improve and manage our data warehouse and data lake solutions, ensuring data quality, consistency, and accessibility for business intelligence and machine learning applications.
Collaborate and Innovate: Partner with cross-functional teams to understand data needs and implement solutions that support new product features and business initiatives.
Ensure Data Integrity: Implement monitoring, alerting, and logging systems to maintain data pipeline health and ensure data accuracy.
Requirements:
7+ years of professional experience in a data engineering or similar role.
Good programming abilities. Testing your code is second nature to you. You are mindful of your application's architecture, performance, maintainability, and overall quality.
Technical skills
Strong proficiency in SQL / Java or Scala / Python.
Extensive experience with distributed data processing frameworks like Apache Spark / Flink / Hive / Trino.
Proven experience working with cloud-based data services on GCP or AWS (e.g., BigQuery, S3, GCS, EMR, Dataproc).
Experience with real-time data streaming technologies like Kafka or Flink.
Deep understanding of data warehouse and data lake concepts and best practices.
Knowledge of Apache Iceberg or Delta Lake.
Solid understanding of IaC using Terraform.
Familiarity with SQL and NoSQL databases.
Good communication skills and ability to work collaboratively within a team. You are an active listener and a dialogue facilitator; you know how to explain your decisions and enjoy sharing your knowledge.
Multiple shipped projects in Software Engineering.
Production knowledge and practices (release, observability, troubleshooting, etc.), gained from multiple shipped projects/applications. Strong problem-solving skills.
Nice to Have
Familiarity with containerization (Docker/OrbStack, Kubernetes).
Knowledge of the ad tech ecosystem (e.g., DSPs, SSPs, Ad Exchanges).
This position is open to all candidates.
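For illustration only, not part of the posting: a minimal PySpark sketch of the batch aggregation work described above. The bucket paths, schema, and column names are invented assumptions.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical daily rollup of raw ad-impression events; paths and columns are assumptions.
spark = SparkSession.builder.appName("daily_impression_rollup").getOrCreate()

events = spark.read.parquet("gs://example-raw-events/impressions/dt=2026-04-05/")

daily = (
    events
    .filter(F.col("event_type") == "impression")
    .groupBy("campaign_id", F.to_date("event_ts").alias("event_date"))
    .agg(
        F.count("*").alias("impressions"),
        F.countDistinct("user_id").alias("unique_users"),
    )
)

# Partition by date so downstream queries can prune partitions.
daily.write.mode("overwrite").partitionBy("event_date").parquet("gs://example-curated/impressions_daily/")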
 
Job ID: 8601666
Location: Netanya
Job Type: Full Time
Required Data Engineer - Finance Solutions
The Finance Solutions team is a growing team that builds and owns products supporting the company's financial processes for internal users, customers (advertisers), and partners (publishers). Our core mission is to make finance operations simpler, more efficient, accurate, and trustworthy by continuously identifying and resolving pain points through automation, system integrations, and the use of AI. Our team operates in a truly international and heterogeneous environment, with members and partners split between Israel, France, the UK, and New York.
What will you do?
As a Data Engineer, you will be the backbone of our data-driven financial operations. You will focus on building and maintaining the infrastructure that turns raw financial data into actionable insights. You will serve as a technical liaison between engineering and business stakeholders to translate analytics requirements into scalable data solutions.
Collaborate with colleagues (engineering, product managers and finance) to understand data requirements and deliver data solutions.
Design, build, and maintain data models that are scalable, extendable and easy to use by other teams.
Design, build, and maintain scalable and reliable data pipelines to collect, process, and store data from various data sources.
Implement data quality checks and monitoring to ensure data accuracy and integrity.
Create and maintain dashboards and reports that provide visibility into key financial metrics like revenue, usage, and cost.
Requirements:
What will you bring to the team?
We are looking for a seasoned professional with a "growth mindset" who is eager to learn and share knowledge. The ideal candidate should have the following qualifications:
Proven experience (4+ years) in data engineering, BI, or analytics engineering, ideally within finance or customer management domains.
Strong expertise in SQL.
Strong experience using the Google data stack, including BigQuery and Looker.
Experience using tools like dbt and orchestration platforms like Apache Airflow.
Good knowledge of Python for carrying out data-related tasks.
Proficiency in building dashboards and reports.
Experience managing client-facing projects and troubleshooting technical issues.
Bonus Points
Experience in the Ad-Tech industry.
Experience in Finance/Accounting operations.
Experience in writing software in Java and/or Scala.
This position is open to all candidates.
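As an illustration of the dbt-plus-Airflow orchestration this posting mentions, here is a minimal sketch of a daily DAG that runs dbt models and then their tests; the DAG id, schedule, and project path are assumptions, not details from the posting.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical daily DAG: build dbt models, then run dbt tests; names and paths are assumptions.
with DAG(
    dag_id="finance_dbt_daily",
    start_date=datetime(2026, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/finance && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/finance && dbt test --target prod",
    )
    dbt_run >> dbt_test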
 
Job ID: 8601560
05/04/2026
Location: Merkaz
Job Type: Full Time
We're hiring a hands-on Senior Data Engineer who wants to build data products that move the needle in the physical world. Your work will help construction professionals make better, data-backed decisions every day. You'll be part of a high-performing engineering team based in Tel Aviv.
Responsibilities:
Lead the design, development, and ownership of scalable data pipelines (ETL/ELT) that power analytics, product features, and downstream consumption.
Collaborate closely with Product, Data Science, Data Analytics, and full-stack/platform teams to deliver data solutions that serve product and business needs.
Build and optimize data workflows using Databricks, Spark (PySpark, SQL), Kafka, and AWS-based tooling.
Implement and manage data architectures that support both real-time and batch processing, including streaming, storage, and processing layers.
Develop, integrate, and maintain data connectors and ingestion pipelines from multiple sources.
Manage the deployment, scaling, and performance of data infrastructure and clusters, including Spark on Kubernetes, Databricks, Kafka, and AWS services.
Use Terraform (and similar tools) to manage infrastructure-as-code for data platforms.
Model and prepare data for analytics, BI, and product-facing use cases, ensuring high performance and reliability.
Requirements:
8+ years of hands-on experience working with large-scale data systems in production environments.
Proven experience designing, deploying, and integrating big data frameworks - PySpark, Kafka, Databricks.
Strong expertise in Python and SQL, with experience building and optimizing batch and streaming data pipelines.
Experience with AWS cloud services and Linux-based environments.
Background in building ETL/ELT pipelines and orchestrating workflows end-to-end.
Proven experience designing, deploying, and operating data infrastructure / data platforms.
Mandatory hands-on experience with Apache Spark in production environments.
Mandatory experience running Spark on Kubernetes.
Mandatory hands-on experience with Apache Kafka, including Kafka connectors.
Understanding of event-driven and domain-driven design principles in modern data architectures.
Familiarity with infrastructure-as-code tools (e.g., Terraform) - advantage.
Experience supporting machine learning or algorithmic applications - advantage.
BSc or higher in Computer Science, Engineering, Mathematics, or another quantitative field.
This position is open to all candidates.
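For illustration only: a minimal PySpark Structured Streaming sketch of the Kafka-to-storage ingestion this posting describes. Broker addresses, the topic name, and S3 paths are invented assumptions.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical stream: consume a Kafka topic and land raw events as Parquet; all names are assumptions.
spark = SparkSession.builder.appName("kafka_to_raw").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
    .option("subscribe", "site-events")
    .option("startingOffsets", "latest")
    .load()
    # Kafka delivers the payload as binary; cast it to a string for downstream parsing.
    .select(F.col("value").cast("string").alias("payload"), "timestamp")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3://example-raw/site-events/")
    .option("checkpointLocation", "s3://example-raw/_checkpoints/site-events/")
    .outputMode("append")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()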
 
Job ID: 8600986
05/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a Senior Data Engineer to join our R&D organization as part of a backend-oriented team, responsible for building and scaling the core data infrastructure.
In this role, you will design and develop data pipelines that stream and process data directly from production systems. You will play a key role in shaping our data platform, building robust, scalable infrastructure and pipelines using modern technologies, and working hands-on with both new components and existing systems.
Responsibilities
Collaborate as a strong team player within a dynamic, cross-functional environment
Design, develop, and maintain scalable data models, Lakehouse architectures, pipelines, and ETL processes
Enhance data workflows to support efficient real-time and batch processing
Work closely with cross-functional teams to understand data requirements and deliver impactful solutions
Stay up to date with the latest data engineering technologies and best practices, continuously improving our data platform.
Requirements:
6+ years of development experience, including at least 3 years as a Data Engineer
Experience with distributed computing frameworks (e.g., Spark, Flink, EMR) - Must
Experience with Iceberg / Delta Lake / Databricks or similar technologies
Experience designing scalable data storage solutions over object storage (structured and semi-structured data)
Hands-on experience building data pipelines and ingestion systems (batch and/or streaming)
Strong communication skills and ability to work with multiple stakeholders across teams
Proficiency in Python and PySpark - Advantage
Experience in streaming systems and real-time data processing - Advantage
Background in backend engineering or experience working closely with backend teams - Advantage
Experience optimizing data processing performance for cost and efficiency - Advantage.
This position is open to all candidates.
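For illustration only: a short PySpark sketch of appending a batch of records to an Iceberg table, one of the lakehouse technologies listed above. It assumes an Iceberg catalog named "lake" is already configured on the Spark session; paths and table names are invented.

from pyspark.sql import SparkSession

# Hypothetical append into an Iceberg table via Spark's DataFrameWriterV2.
# Assumes a catalog named "lake" is configured; paths and table names are assumptions.
spark = SparkSession.builder.appName("iceberg_append_example").getOrCreate()

events = spark.read.parquet("s3://example-staging/events/dt=2026-04-05/")

# append() adds rows to the existing table; createOrReplace() would rebuild it from scratch.
events.writeTo("lake.analytics.events").append()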
 
Job ID: 8600850
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an ML Engineer / MLOps Tech Lead to promote machine learning engineering excellence. Someone who is passionate about building scalable, high-quality data products and processes, while ensuring production systems maintain strong real-time performance observability.
You will focus on designing and maintaining the core infrastructure that empowers the Machine Learning Engineers working within Data Science product teams. You'll collaborate closely with stakeholders across data science, product, and engineering, playing a pivotal role in driving the business by architecting and enabling the infrastructure for machine learning model development, serving, and lifecycle management - the foundation of our product.
Responsibilities:
Collaborate with product, data science, and engineering teams to solve complex problems, identify trends, and create opportunities through robust ML infrastructure.
End-to-end ML delivery - enabling model performance development, training, validation, testing, and version control.
Build and support monitoring and observability tools - dashboards, alerts, and performance tracking of models in production.
Lead architecture projects such as: Feature Store, Vector / Graph Databases.
Data wrangling - supporting and enabling data requirements for research, training, validation, and testing.
Drive engineering best practices including code and model versioning, CI/CD pipelines, rollout strategies, and disaster recovery procedures.
Requirements:
3+ years of experience as an ML Engineer / MLOps
5+ years of experience as a software engineer or data engineer
2+ years of experience in a technical leadership role (leading engineers or data scientists)
Strong programming skills in Python and SQL
Hands-on experience with MPP frameworks such as Spark, Flink, Ray, Dask or equivalent
Strong analytical and critical thinking skills
This position is open to all candidates.
 
Job ID: 8600846
05/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
At our company, Israel's first fully digital bank, we're building the technology that powers a new era of intuitive, transparent, people-first banking. As our platform grows and our services expand, we're looking for a team leader to lead a data engineering team.
The team is responsible for ingestion, processing, and serving bronze level data to other analytic teams in the organization.
In this role, you will define the data architecture of our unified Data Lakehouse, own all ETL and streaming operations, define governance processes and implement tools, and ensure our data is always fresh, accurate, and complete.
Your Day-to-Day
Mentor and lead a team of 4-6 data engineers.
Own sensitive data operations, including monitoring, on-call and production operations.
Build and manage all ETL and streaming operations related to the Data Lakehouse.
Develop, maintain, and optimize robust data pipelines and integrations across multiple systems.
Build a platform for other analytic teams to build data products on top of the Data Lakehouse.
Define and implement quality and governance processes to make sure data is fresh, accurate, and complete.
Collaborate with engineering, BI, and business teams to translate requirements into scalable data solutions
Work hands-on with data orchestration, transformation, and cloud infrastructure (OCI, Snowflake, AWS)
Support implementation of best practices in data management and observability.
Requirements:
4+ years of direct management of a team of 4-6 data engineers.
8+ years in data engineering, data architecture, or similar roles.
Experience in financial systems or fintech (big advantage)
Deep hands-on experience with PostgreSQL, Snowflake, Oracle, etc.
Strong experience with ETL/ELT, data integration, Kafka, and other streaming solutions (must).
Proven SQL and Python skills (must).
Experience with cloud environments
Strong ownership, problem-solving ability, and communication skills
Comfort working in a fast-paced, multi-system environment.
This position is open to all candidates.
 
Job ID: 8600794
05/04/2026
Location: Netanya
Job Type: Full Time
We are seeking a Data Engineer to join our growing data team to design, build, and maintain data pipelines and infrastructure. You will work on distributed systems, orchestration frameworks, cloud environments, and modern data technologies to ensure that data flows reliably and efficiently across the organization.
Key Responsibilities:
Optimize processes for scalability and efficiency in distributed environments.
Ensure data quality, integrity, and performance across workflows.
Work with cloud-native solutions and containerized environments (Docker/Kubernetes).
Implement and manage orchestration frameworks (Airflow/Dagster/etc.).
Collaborate with cross-functional teams to support business and research needs.
Develop and enforce CI/CD processes and data testing frameworks (pytest, Great Expectations, or similar).
Monitor, troubleshoot, and continuously improve pipeline reliability and performance.
Requirements:
You'll be a great fit if you have:
4+ years of professional experience with Python and SQL.
Proven experience building and maintaining ETL/ELT pipelines (batch/streaming) with orchestration frameworks (Airflow, Dagster, or similar).
Hands-on experience with distributed computing frameworks (Spark, Dask, Beam) and large-scale data processing.
Experience with major cloud platforms (AWS, GCP, or Azure); GCP/BigQuery is an advantage.
Proficiency with Docker/Kubernetes and CI/CD pipelines (GitLab CI, GitHub Actions, or similar).
Solid understanding of software engineering practices (data structures, algorithms, TDD, code quality).
Familiarity with data testing and monitoring frameworks (pytest, Great Expectations, observability tools).
Nice to Have:
Experience with dbt, ClickHouse, or Ray.
Familiarity with Python data libraries (pandas, NumPy, Apache Arrow, Jinja).
Background in functional programming.
This position is open to all candidates.
 
Job ID: 8600774
05/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Data Scientist to design and implement the decision-making logic of our Price Optimizer. You will focus on translating complex business rules into mathematical models and production-ready Python code, supported by a dedicated Data Engineering team that handles the underlying infrastructure.
Your primary mission is to build the "brain" of our system. You will work at the intersection of Data Science and Product, ensuring our simulation and optimization engines accurately reflect real-world pricing strategies and market dynamics. While you will write production code, you will rely on our Data Engineers for ETL pipeline orchestration and distributed compute scaling.
Responsibilities:
Implement Business Logic: Translate intricate pricing rules and commercial strategies into robust Python code and mathematical constraints.
Refine Optimization Models: Develop and tune the simulation and revenue management algorithms that drive our pricing recommendations.
Write Clean Code: Contribute high-quality, tested, and maintainable code to the core logic repositories.
Analyze & Improve: Use data to validate model behavior and identify edge cases where business rules clash with algorithmic outputs.
Collaborate: Partner with Solution Architects to define logic requirements and with Data Engineers to integrate your models into the Dagster pipelines.
Requirements:
You'll be a great fit if you have:
3+ years of experience in Data Science or Algorithmic Development with Python.
Proven ability to translate complex business requirements into code and logical rules.
Strong background in Mathematical Optimization, Simulation, or Logic Programming.
Fluency in the PyData stack (Pandas, NumPy, SciPy).
Experience writing production-quality code (not just notebooks) - you understand modular design and unit testing.
Nice to Have:
Experience in Revenue Management, Air Travel, or Logistics domains.
Familiarity with orchestration frameworks like Dagster or Airflow (from a user/logic perspective).
Understanding of Derivative-Free Optimization.
SQL, ClickHouse.
This position is open to all candidates.
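To illustrate the kind of optimization logic this posting describes, a toy sketch: pick the price that maximizes expected revenue under a simple linear demand curve and business-rule price bounds. The demand model, bounds, and numbers are invented for illustration.

from scipy.optimize import minimize_scalar

def expected_demand(price: float) -> float:
    # Invented linear demand curve: 500 units at price 0, losing 2 units per currency unit of price.
    return max(0.0, 500.0 - 2.0 * price)

def negative_revenue(price: float) -> float:
    # minimize_scalar minimizes, so return the negated revenue.
    return -(price * expected_demand(price))

# Assumed business rule: the price must stay within [50, 200].
result = minimize_scalar(negative_revenue, bounds=(50.0, 200.0), method="bounded")
print(f"optimal price: {result.x:.2f}, expected revenue: {-result.fun:.2f}")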
 
Job ID: 8600772
05/04/2026
Location: Petah Tikva
Job Type: Full Time
We are looking for a Senior Data Engineer to join our Data Platform team, focused on building and evolving a secure, enterprise-grade Data Lake that powers large-scale global search, indexing, analytics, and AI-driven capabilities.
In this role, you will design and deliver scalable, compliant, and high-performance data pipelines that ingest, transform, and structure massive volumes of sensitive data to support mission-critical discovery and search workloads.
This position is ideal for a senior engineer who combines deep hands-on data engineering expertise with strong architectural thinking, particularly in regulated and security-sensitive environments. You will work closely with Product, Search, Backend, Security, and Data Science teams to ensure data is searchable, governed, reliable, and compliant by design.
Key Responsibilities:
Enterprise Data Lake Architecture:
Design and evolve a secure, scalable Data Lake architecture on AWS.
Define storage layout, partitioning strategies, and data organization optimized for large-scale search and analytics workloads.
Implement ACID-compliant table formats (e.g., Iceberg) to ensure reliability, consistency, and schema evolution.
Design ingestion patterns (batch and streaming) for high-volume, heterogeneous datasets.
Implement lifecycle management, retention policies, and environment isolation.
Global Search & Indexing Enablement:
Design data pipelines that prepare and structure data for global search and indexing systems.
Optimize data models and transformations to support high-performance search queries and distributed indexing.
Collaborate with search and backend teams to ensure efficient data availability and low-latency access patterns.
Support incremental ingestion, change-data-capture (CDC), and near real-time processing where required.
Ensure traceability and reproducibility of indexed datasets.
Secure & Regulated Data Engineering:
Implement strict access controls (IAM), encryption (at rest and in transit), and auditing mechanisms.
Ensure compliance with enterprise security and regulatory requirements.
Design systems with data lineage, traceability, and audit-readiness in mind.
Partner with Security and Compliance teams to support internal and external audits.
Handle sensitive and regulated datasets with strong governance and segregation controls.
Pipeline Development & Platform Engineering:
Build and maintain high-scale ETL/ELT pipelines using Apache Spark (EMR/Glue) and AWS-native services.
Leverage S3, Athena, Kinesis, Lambda, Step Functions, and EKS to support both batch and streaming workloads.
Implement Infrastructure as Code (Terraform / CDK / SAM) for reproducible environments.
Establish observability, monitoring, and SLA management for mission-critical pipelines.
Continuously optimize performance, scalability, and cost efficiency.
Cross-Functional Collaboration:
Work closely with Product Managers to translate global search and discovery requirements into scalable data solutions.
Collaborate with ML and Data Science teams to enable feature extraction and enrichment pipelines.
Contribute to architecture discussions and promote best practices in enterprise data engineering.
Provide documentation and clear technical artifacts for regulated environments.
Requirements:
Technical Expertise:
Strong hands-on experience with Apache Spark (EMR, Glue, PySpark).
Deep experience with AWS data services: S3, EMR, Glue, Athena, Lambda, Step Functions, Kinesis.
Proven experience designing and operating Data Lakes / Lakehouse architectures (Iceberg preferred).
Experience building scalable batch and streaming pipelines for large datasets.
Strong understanding of distributed systems and data modeling for search/indexing use cases.
Experience implementing secure, compliant data architectures (IAM, encryption, auditing).
Infrastructure as Code experience (Terraform / CDK / SAM).
Strong Python skills (TypeScript is a plus).
Enterprise & Search-Oriented Mindset
This position is open to all candidates.
 
Job ID: 8600560
05/04/2026
Location: Petah Tikva
Job Type: Full Time
We are seeking a Senior Backend & Data Engineer to join our SaaS Data Platform team.
This role offers a unique opportunity to design and build large-scale, high-performance data platforms and backend services that power our cloud-based products.
You will own features end to end, from architecture and design through development and production deployment, while working closely with Data Science, Machine Learning, DevOps, and Product teams.
What You'll Do:
Design, develop, and maintain scalable, secure data platforms and backend services on AWS.
Build batch and streaming ETL/ELT pipelines using Spark, Glue, Athena, Iceberg, Lambda, and EKS.
Develop backend components and data-processing workflows in a cloud-native environment.
Optimize performance, reliability, and observability of data pipelines and backend services.
Collaborate with ML, backend, DevOps, and product teams to deliver data-powered solutions.
Drive best practices, code quality, and technical excellence within the team.
Ensure security, compliance, and auditability using AWS best practices (IAM, encryption, auditing).
Tech Stack:
AWS Services: S3, Lambda, Glue, Step Functions, Kinesis, Athena, EMR, Airflow, Iceberg, EKS, SNS/SQS, EventBridge
Languages: Python (Node.js/TypeScript a plus)
Data & Processing: batch & streaming pipelines, distributed computing, serverless architectures, big data workflows
Tooling: CI/CD, GitHub, IaC (Terraform/CDK/SAM), containerized environments, Kubernetes
Observability: CloudWatch, Splunk, Grafana, Datadog
Requirements:
8+ years of experience in Data Engineering and/or Backend Development in AWS-based, cloud-native environments
Strong hands-on experience writing Spark jobs (PySpark) and running workloads on EMR and/or Glue
Proven ability to design and implement scalable backend services and data pipelines
Deep understanding of data modeling, data quality, pipeline optimization, and distributed systems
Experience with Infrastructure as Code and automated deployment of data infrastructure
Strong debugging, testing, and performance-tuning skills in agile environments
High level of ownership, curiosity, and problem-solving mindset.
Nice to Have:
AWS certifications (Solutions Architect, Data Engineer)
Experience with ML pipelines or AI-driven analytics
Familiarity with data governance, self-service data platforms, or data mesh architectures
Experience with PostgreSQL, DynamoDB, MongoDB
Experience building or consuming high-scale APIs
Background in multi-threaded or distributed system development
Domain experience in cybersecurity, law enforcement, or other regulated industries.
This position is open to all candidates.
 
Job ID: 8600551
05/04/2026
Location: Petah Tikva
Job Type: Full Time
We are seeking a QA Engineer with a strong passion for data quality, performance, and scale to join our Data Platform team.
This role is ideal for a QA professional who enjoys working close to complex data systems, understands large-scale pipelines, and wants to play a key role in shaping the automation and quality strategy of a data engineering organization.
You will act as the primary quality owner for high-volume, mission-critical data platforms, working closely with data engineers, backend developers, and platform teams.
What You'll Do:
Data Quality & Validation:
Design and execute data validation strategies for large-scale batch and streaming pipelines
Ensure data correctness, completeness, freshness, and consistency across the data lake
Define and automate checks for schema changes, data drift, and data quality regressions
Performance & Scalability Testing:
Plan and execute performance and scalability tests for data pipelines and processing jobs
Identify bottlenecks across ingestion, transformation, and querying layers
Partner with engineers to validate performance improvements and prevent regressions
Automation & Infrastructure:
Develop and maintain the data team's QA automation infrastructure
Build reusable testing frameworks and tools tailored for large datasets and pipelines
Integrate automated tests into CI/CD pipelines and production monitoring workflows
Collaboration & Ownership:
Work closely with data engineers, backend developers, and platform engineers throughout the development lifecycle
Act as the sole QA owner within a cross-functional team, driving quality without becoming a bottleneck
Participate in design discussions to ensure testability and observability are built in from the start
Quality Mindset & Communication:
Champion a quality-first culture within the team
Clearly communicate risks, findings, and quality metrics to technical stakeholders
Balance thoroughness with pragmatism in fast-moving, high-scale environments.
Requirements:
Experience:
Proven experience as a QA Engineer, ideally within data-intensive or platform teams
Hands-on experience testing large-scale systems, pipelines, or distributed architectures
Experience working as the sole QA in a cross-functional engineering team.
Technical Skills:
Strong understanding of data pipelines and data lake concepts
Experience validating large datasets and implementing data quality checks
Familiarity with performance and load testing methodologies
Experience building test automation frameworks (Python preferred)
Understanding of CI/CD pipelines and automation best practices.
Mindset & Collaboration:
Passion for data, performance, and technology
Self-driven, independent, and comfortable owning QA end-to-end
Strong communication skills and ability to collaborate across disciplines
Curious, proactive, and eager to learn complex systems.
Nice to Have:
Experience testing big data or analytics platforms
Familiarity with cloud environments (AWS preferred)
Knowledge of Spark, SQL-based analytics, or data processing frameworks
Experience with data observability or data quality tools.
This position is open to all candidates.
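For illustration only: the kind of automated data-quality checks such a role might own, written as pytest tests over a pandas DataFrame. The loader, column names, and the 24-hour freshness threshold are assumptions.

import pandas as pd

def load_daily_snapshot() -> pd.DataFrame:
    # Placeholder loader; in practice this would read the latest snapshot from the data lake.
    return pd.read_parquet("/data/snapshots/events_latest.parquet")

def test_no_null_keys():
    df = load_daily_snapshot()
    assert df["event_id"].notna().all(), "event_id must never be null"

def test_no_duplicate_keys():
    df = load_daily_snapshot()
    assert not df["event_id"].duplicated().any(), "event_id must be unique"

def test_data_is_fresh():
    df = load_daily_snapshot()
    latest = pd.to_datetime(df["event_ts"], utc=True).max()
    age = pd.Timestamp.now(tz="UTC") - latest
    assert age <= pd.Timedelta(hours=24), f"data is stale: newest event is {age} old"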
 
Job ID: 8600532
05/04/2026
Location: Merkaz
Job Type: Full Time and Hybrid work
We are looking for a talented Senior Data Engineer to join our data group.
The team is responsible for processing and transforming data from multiple external sources while building and maintaining an internal serving platform. Our key challenges include operating at scale, integrating with diverse external interfaces, and ensuring the data is served in a consistent and reliable manner.
Responsibilities:
Designing and implementing the data platform.
Transforming, modeling, and serving all medical data.
Utilize data best practices to improve the product and enable data-driven decision-making.
Collaborate closely with cross-functional teams to understand business requirements and translate them into data-driven solutions.
Stay updated with the latest research and advancements in the data field.
Requirements:
6+ years of experience in software development, with at least 4-5 years as a data engineer.
Proven track record designing and implementing scalable ETL/ELT data pipelines.
Strong SQL skills.
Experience with relational and analytical data platforms (e.g., PostgreSQL, Snowflake, data lakes).
Strong coding skills (preferably in Python), with prior software engineering experience (e.g., API development) a plus.
Experience with cloud environments (AWS preferred).
Previous managerial or leadership experience is a strong advantage.
This position is open to all candidates.
 
Job ID: 8600451
05/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Analytics Engineer to help design and build the engineering foundation that powers analytics across the organization.
Our goal is to create a modern data environment where analytics development is fast, reliable, scalable, and increasingly automated. This includes building strong data warehouse foundations, scalable modeling layers, and introducing AI-powered tools and automation that accelerate how data products are built and used.
In this role, you will be part of an analytics squad, working closely with analysts and business stakeholders while building the infrastructure, automation frameworks, and intelligent tooling that enable analytics to scale across the organization.
This is a unique opportunity to help build the next generation of the data organization.
Key Responsibilities
Lead AI adoption in the analytics platform, building tools and workflows that automate analytics development, dashboards, and data exploration
Design and build scalable data warehouse models and transformation layers
Build and optimize ETL pipelines and core analytics infrastructure (Bronze / Silver)
Improve performance, reliability, and scalability of the analytics platform
Develop automation and internal tools that accelerate analytics workflows
Enable self-serve data access across the company through semantic layers and reusable datasets
Collaborate with analysts and business teams within an analytics squad.
Requirements:
6+ years of experience in Data Engineering and Analytics Engineering roles, building modern data warehouses and analytics platforms using technologies such as BigQuery, dbt, and Python
Experience with workflow orchestration (Dagster, Airflow, or equivalent) and building reliable, observable data pipelines
Hands-on experience using AI coding platforms and tools to automate data engineering and analytics workflows
Strong engineering practices including version control (Git), testing, code reviews, and CI/CD
Experience building automation systems and internal tools for data teams
Experience working closely with analysts, product teams, and business stakeholders in analytics-driven environments
Strong problem-solving skills with a builder mindset.
This position is open to all candidates.
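For illustration only: a minimal sketch of querying BigQuery from Python, part of the stack this posting lists. The project, dataset, and table names are invented assumptions.

from google.cloud import bigquery

# Hypothetical aggregation query; project/dataset/table names are assumptions.
client = bigquery.Client(project="example-project")

sql = """
    SELECT event_date, COUNT(*) AS events
    FROM `example-project.analytics.events`
    GROUP BY event_date
    ORDER BY event_date DESC
    LIMIT 7
"""

for row in client.query(sql).result():
    print(row.event_date, row.events)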
 
Job ID: 8600360