Data Engineer

Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Data Engineer to join our team and help shape a modern, scalable data platform. You'll work with cutting-edge AWS technologies, Spark, and Iceberg to build pipelines that keep our data reliable, discoverable, and ready for analytics.

What's the Job?
Design and maintain scalable data pipelines on AWS (EMR, S3, Glue, Iceberg).
Transform raw, semi-structured data into analytics-ready datasets using Spark (see the sketch after this list).
Automate schema management, validation, and quality checks.
Optimize performance and costs with smart partitioning, tuning, and monitoring.
Research and evaluate new technologies, proposing solutions that improve scalability and efficiency.
Plan and execute complex data projects with foresight and attention to long-term maintainability.
Collaborate with engineers, analysts, and stakeholders to deliver trusted data for reporting and dashboards.
Contribute to CI/CD practices, testing, and automation.
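
To make the pipeline bullets above concrete, here is a minimal PySpark sketch of the general pattern this listing describes: reading raw, semi-structured events from S3 and writing an analytics-ready Apache Iceberg table. It is an illustration only, not this team's actual code; the bucket, catalog, and table names are hypothetical, and it assumes an Iceberg catalog named "glue" is already configured on the cluster.

```python
# Illustrative sketch only; names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Assumes the cluster (e.g., EMR) is configured with an Iceberg catalog
# named "glue" backed by the AWS Glue Data Catalog.
spark = SparkSession.builder.appName("raw-to-iceberg").getOrCreate()

# Read raw, semi-structured events landed in S3 (hypothetical path).
raw = spark.read.json("s3://example-raw-bucket/events/")

# Light transformation: flatten nested fields and standardize types.
events = raw.select(
    F.col("event_id"),
    F.col("payload.user_id").alias("user_id"),
    F.to_timestamp("event_time").alias("event_ts"),
)

# Write an Iceberg table with hidden day-partitioning on the timestamp;
# Iceberg then handles schema evolution and partition pruning for readers.
(
    events.writeTo("glue.analytics.events")
    .partitionedBy(F.days("event_ts"))
    .createOrReplace()
)
```

Day-level hidden partitioning is one common way to get the "smart partitioning" the listing mentions without burdening query authors with partition columns.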
Requirements:
Strong coding skills in Python (PySpark, pandas, boto3).
Experience with big data frameworks (Spark) and schema evolution.
Knowledge of lakehouse technologies (especially Apache Iceberg).
Familiarity with AWS services: EMR, S3, Glue, Athena.
Experience with orchestration tools like Airflow.
Solid understanding of CI/CD and version control (GitHub Actions).
Ability to research, evaluate, and plan ahead for new solutions and complex projects.

Nice to have:
Experience with MongoDB or other NoSQL databases.
Experience with stream processing (e.g., Kafka, Kinesis, Spark Structured Streaming).
Ability to create data visualization dashboards and work with Looker (Enterprise).
Infrastructure-as-code (Terraform).
Strong debugging and troubleshooting skills for distributed systems.
This position is open to all candidates.
 
Job ID: 8471922

Location: Tel Aviv-Yafo
Job Type: Full Time
The Performance Marketing Analytics team is seeking a highly skilled Senior Data Platform Engineer to establish, operate, and maintain our dedicated Performance Marketing Data Mart within the Snowflake Cloud Data Platform. This is a critical, high-autonomy role responsible for the end-to-end data lifecycle, ensuring data quality, operational excellence, and governance within the new environment. This role will directly enable the Performance Marketing team's vision for data-driven marketing and increased ownership of our analytical infrastructure.
Responsibilities
Snowflake Environment Management
Administer the Snowflake account (roles, permissions, cost monitoring, performance tuning); a cost-monitoring sketch follows this list.
Implement best practices for security, PII handling, and data governance.
Act as the subject matter expert for Snowflake within the team.
DevOps & Model Engineering
Establish and manage the development and production environments.
Maintain the CI/CD pipeline using GitLab to automate the build, test, and deployment process.
Implement standard engineering practices such as code testing and commit reviews to prevent tech debt.
Data Operations & Reliability
Monitor pipeline executions to ensure timely, accurate, and reliable data.
Set up alerting, incident management, and SLAs for marketing data operations.
Troubleshoot and resolve platform incidents quickly to minimize business disruption.
Tooling & Integration
Support the integration of BI, monitoring, and orchestration tools.
Evaluate and implement observability and logging solutions for platform reliability.
Governance & Compliance
Ensure alignment with Entain data governance and compliance policies.
Document operational procedures, platform configurations, and security controls.
Act as the team liaison with procurement, infrastructure, and security teams for platform-related topics.
Collaboration & Enablement
Work closely with BI, analysts and data engineers, ensuring the platform supports their evolving needs.
Provide guidance on best practices for query optimization, cost efficiency, and secure data access.
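
Purely as an illustration of the cost-monitoring side of this role, here is a small Python sketch using the snowflake-connector-python package against Snowflake's standard ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY view; the account, user, and authentication details are hypothetical placeholders.

```python
# Illustrative sketch; connection details are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="PLATFORM_ENGINEER",
    authenticator="externalbrowser",  # assumes SSO; key-pair auth also works
)

# WAREHOUSE_METERING_HISTORY is a standard ACCOUNT_USAGE view: this query
# surfaces the most credit-hungry warehouses over the last 7 days.
QUERY = """
    SELECT warehouse_name, SUM(credits_used) AS credits
    FROM snowflake.account_usage.warehouse_metering_history
    WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
    GROUP BY warehouse_name
    ORDER BY credits DESC
"""

with conn.cursor() as cur:
    for warehouse, credits in cur.execute(QUERY):
        print(f"{warehouse}: {credits:.1f} credits in the last 7 days")
conn.close()
```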
Requirements:
At least 4 years of experience in data engineering, DevOps, or data platform operations roles.
Expert Proficiency in Snowflake: 2+ years of deep, hands-on experience with Snowflake setup, administration, security, warehouse management, performance tuning, and cost management.
Programming: Expertise in SQL and proficiency in Python for data transformation and operational scripting.
Experience implementing CI/CD pipelines (preferably GitLab) for data/analytics workloads.
Hands-on experience with modern data environments (cloud warehouses, dbt, orchestration tools).
Ability to work effectively in a fast-paced and dynamic environment.
Bachelor's degree in a relevant field.
This position is open to all candidates.
 
Job ID: 8471910

Posted: 24/12/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an exceptional Big Data Engineer to join our R&D team. This is a unique opportunity for a recent graduate or early-career engineer to enter the world of Big Data. You will work alongside the best engineers and scientists in the industry to develop systems that process and analyze data from around the digital world.
So what will you be doing all day?
Learn and Build: Assist in the design and implementation of high-scale systems using a variety of technologies, mentored by senior engineers.
Collaborate: Work in a data research team alongside data engineers, data scientists, and data analysts to tackle data challenges.
Optimize: Help improve the existing infrastructure of code and data pipelines and learn how to identify and eliminate bottlenecks.
Innovate: Experiment with various technologies in the domain of Machine Learning and big data processing.
Monitor: Assist in maintaining monitoring infrastructure to ensure smooth data ingestion and calculation.
Requirements:
Education: BSc degree in Computer Science, Engineering, or a related technical field (Recent graduates are welcome).
Programming: Proficiency in one or more of the following languages: Python, Java, or Scala.
Fundamentals: Strong grasp of Computer Science fundamentals, including Data Structures, Design Patterns, and Object-Oriented Programming.
Soft Skills: Excellent communication skills with the ability to engage in dialogue within data teams.
Mindset: Passionate about data, holds a strong sense of ownership, and comfortable in a fast-paced environment.
Advantages (not required, but great to have):
Previous internship or work experience in software or data engineering.
Basic understanding or familiarity with CI/CD practices and Git.
Familiarity with containerization technologies like Docker and Kubernetes.
Familiarity with Cloud providers (AWS / Azure / GCP) or Big Data frameworks (Spark, Airflow, Kafka).
This position is open to all candidates.
 
Job ID: 8471696

Posted: 24/12/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Big Data Engineer to develop and integrate systems that retrieve, process, and analyze data from around the digital world, generating customer-facing data. This role will report to our Team Manager, R&D.
Why is this role so important?
We are a data-focused company, and data is the heart of our business.
As a big data engineer, you will work at the very core of the company, designing and implementing complex, high-scale systems to retrieve and analyze data from millions of digital users.
Your role as a big data engineer will give you the opportunity to use the most cutting-edge technologies and best practices to solve complex technical problems while demonstrating technical leadership.
So, what will you be doing all day?
Your role as part of the R&D team means your daily responsibilities may include:
Design and implement complex, high-scale systems using a large variety of technologies.
You will work in a data research team alongside other data engineers, data scientists and data analysts. Together you will tackle complex data challenges and bring new solutions and algorithms to production.
Contribute and improve the existing infrastructure of code and data pipelines, constantly exploring new technologies and eliminating bottlenecks.
You will experiment with various technologies in the domain of Machine Learning and big data processing.
You will work on a monitoring infrastructure for our data pipelines to ensure smooth and reliable data ingestion and calculation.
Requirements:
Passionate about data.
Holds a BSc degree in Computer Science/Engineering or a related technical field of study.
Has at least 4 years of software or data engineering development experience in one or more of the following programming languages: Python, Java, or Scala.
Has strong programming skills and knowledge of Data Structures, Design Patterns, and Object-Oriented Programming.
Has good understanding and experience of CI/CD practices and Git.
Excellent communication skills, with the ability to maintain constant dialogue between and within data teams.
Can easily prioritize tasks and work independently and with others.
Conveys a strong sense of ownership over the products of the team.
Is comfortable working in a fast-paced dynamic environment.
Advantage:
Has experience with containerization technologies like Docker and Kubernetes.
Experience in designing and productization of complex big data pipelines.
Familiar with a cloud provider (AWS / Azure / GCP).
This position is open to all candidates.
 
Job ID: 8471314

Posted: 24/12/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an Analytics Engineer to join our team and play a key role in shaping how data drives our product and business decisions.
This role is perfect for someone who enjoys working at the intersection of data, product, and strategy. While Product Analysts focus on turning data into insights, you'll focus on building the strong data foundations that make those insights possible. You won't just run queries; you will design the data architecture and own the "source of truth" tables that power our strategic decision-making.
You'll work closely with our Growth and Solutions teams, helping them move faster and smarter by making sure the data behind Generative AI, Data-as-a-Service (DaaS), and advanced product models is clear, reliable, and easy to use. Your work will have a direct impact on how we build, scale, and innovate our products.
What You'll Do
Define the Source of Truth: Take raw, complex data and transform it into clean, well-structured tables that Product Analysts and Business Leads can use for high-stakes decision-making.
Translate Strategy into Logic: Work with Product, Growth, and Solutions leads to turn abstract concepts (like "Activation," "Retention," or "Feature Adoption") into precise SQL definitions and automated datasets (see the sketch after this list).
Enable High-Tech Initiatives: Partner with our AI and DaaS specialists to ensure they have the structured data foundations they need to build models and external data products.
Optimize for Usability: Ensure our data is not just "there," but easy to use. You will design the data logic that powers our most important product dashboards and growth funnels.
Maintain Data Integrity: Act as the guardian of our metrics. You will ensure that the numbers used across our product and business reports are consistent, reliable, and logical.
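
To make the "Translate Strategy into Logic" bullet concrete, here is a toy, self-contained sketch using Python's stdlib sqlite3; the table, events, and 14-day window are invented for illustration, and in practice the same definition would live in the warehouse as a governed model.

```python
# Toy illustration: one precise, reusable SQL definition of "Activation".
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id TEXT, event TEXT, ts TEXT);
    INSERT INTO events VALUES
        ('u1', 'signup',         '2025-01-01'),
        ('u1', 'created_report', '2025-01-03'),
        ('u2', 'signup',         '2025-01-02');
""")

# The agreed source of truth: a user is "Activated" if they performed the
# key action within 14 days of signup. Encoding this once keeps every
# dashboard from quietly re-deriving (and re-defining) the metric.
ACTIVATED_USERS_SQL = """
    SELECT s.user_id
    FROM events AS s
    JOIN events AS a
      ON a.user_id = s.user_id
     AND a.event = 'created_report'
     AND julianday(a.ts) - julianday(s.ts) <= 14
    WHERE s.event = 'signup'
    GROUP BY s.user_id
"""

print([row[0] for row in conn.execute(ACTIVATED_USERS_SQL)])  # ['u1']
```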
Requirements:
Expert SQL Mastery: You are a SQL power-user. You enjoy solving complex logic puzzles using code and care deeply about query efficiency and data accuracy.
The "Bridge" Mindset: You can sit in a meeting with a Product Manager to understand a business need, and then translate that into a technical data structure that serves that need.
Logical Architecture: You have a natural talent for organizing information. You know how to build a table that is intuitive and easy for other analysts to query.
Product & Business Acumen: You understand SaaS metrics (ARR, funnels, activation, etc.) and how data logic impacts product growth and strategy.
Experience with Analytics Tools: Proficiency in BI tools (Looker, Tableau, etc.) and a strong understanding of how data flows from technical logs to the end-user interface.
Degree: B.Sc. in Industrial Engineering, Information Systems, Economics, Computer Science or a related quantitative field.
Experience: 1+ years of prior experience in a relevant analytics/technical role.
This position is open to all candidates.
 
Job ID: 8471303

Posted: 23/12/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Platform Engineer to design, build, and scale our next-generation data platform, the backbone powering our AI-driven insights.
This role sits at the intersection of data engineering, infrastructure, and MLOps, owning the architecture and reliability of our data ecosystem end-to-end.
You'll work closely with data scientists, R&D teams, and analysts to create a robust platform that supports varying use cases, complex ingestion, and AI-powered analytics.
Responsibilities:
Architect and evolve a scalable, cloud-native data platform that supports batch, streaming, analytics, and AI/LLM workloads across R&D.
Help define and implement standards for how data is modeled, stored, governed, and accessed.
Design and build data lakes and data warehouses.
Develop and maintain complex, reliable, and observable data pipelines.
Implement data quality, validation, and monitoring frameworks.
Collaborate with ML and data science teams to connect AI/LLM workloads to production data pipelines, enabling RAG, embeddings, and feature engineering flows (see the sketch after this list).
Manage and optimize relational and non-relational datastores (Postgres, Elasticsearch, vector DBs, graph DBs).
Build internal tools and self-service capabilities that enable teams to easily ingest, transform, and consume data.
Contribute to data observability, governance, documentation, and platform visibility.
Drive strong engineering practices.
Evaluate and integrate emerging technologies that enhance scalability, reliability, and AI integration in the platform.
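
As a loose sketch of the AI/LLM bullet above (not the company's actual design), here is a self-contained Python outline of a batch step that chunks documents, computes embeddings, and idempotently upserts them into a vector store. The embed() function and VectorStore class are hypothetical stand-ins for a real model endpoint and a real vector database.

```python
# Illustrative sketch; embed() and VectorStore are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class Chunk:
    doc_id: str
    text: str

def split(doc_id: str, text: str, size: int = 500) -> list[Chunk]:
    """Split a document into fixed-size character chunks for retrieval."""
    return [Chunk(doc_id, text[i : i + size]) for i in range(0, len(text), size)]

def embed(texts: list[str]) -> list[list[float]]:
    """Stand-in for a real embedding model call (returns toy vectors)."""
    return [[float(len(t))] for t in texts]

class VectorStore:
    """Stand-in for a real vector database client (e.g., pgvector)."""
    def __init__(self) -> None:
        self.rows: dict[str, list[float]] = {}

    def upsert(self, key: str, vector: list[float]) -> None:
        self.rows[key] = vector

def ingest(store: VectorStore, docs: dict[str, str]) -> None:
    # The pipeline step proper: chunk -> embed -> upsert, keyed so that a
    # rerun after a failure overwrites rather than duplicates vectors.
    for doc_id, text in docs.items():
        chunks = split(doc_id, text)
        for i, vec in enumerate(embed([c.text for c in chunks])):
            store.upsert(f"{doc_id}:{i}", vec)

store = VectorStore()
ingest(store, {"doc-1": "some production document text ..."})
print(len(store.rows))  # 1
```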
Requirements:
7+ years of experience building and operating data platforms.
Strong Python programming skills.
Proven experience with cloud data lakes and warehouses (Databricks, Snowflake, or equivalent).
Data orchestration experience (Airflow).
Solid understanding of AWS services.
Proficiency with relational databases and search/analytics stores.
Experience designing complex data pipelines, managing data quality, lineage, and observability in production.
Familiarity with CI/CD, GitOps, and IaC.
Excellent understanding of distributed systems, data partitioning, and schema evolution.
Strong communication skills, with the ability to document and present technical designs clearly.
Advantages:
Experience with vector databases and graph databases.
Experience integrating AI/LLM workloads into data pipelines (feature stores, retrieval pipelines, embeddings).
Familiarity with event streaming and CDC patterns.
Experience with data catalog, lineage, or governance tools.
Knowledge of monitoring and alerting stacks.
Hands-on experience with multi-source data product architectures.
This position is open to all candidates.
 
Job ID: 8470086

Posted: 23/12/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an experienced Data Engineering Team Leader.
In this role, you will lead and strengthen our Data Team, drive innovation, and ensure the robustness of our data and analytics platforms.
A day in the life and how you'll make an impact:
Drive the technical strategy and roadmap for the data engineering function, ensuring alignment with overall business objectives.
Own the design, development, and evolution of scalable, high-performance data pipelines to enable diverse and growing business needs.
Establish and enforce a strong data governance framework, including comprehensive data quality standards, monitoring, and security protocols, taking full accountability for data integrity and reliability.
Lead the continuous enhancement and optimization of the data analytics platform and infrastructure, focusing on performance, scalability, and cost efficiency.
Champion the complete data lifecycle, from robust infrastructure and data ingestion to detailed analysis and automated reporting, to maximize the strategic value of data and drive business growth.
Requirements:
5+ years of Data Engineering experience (preferably in a startup), with a focus on designing and implementing scalable, analytics-ready data models and cloud data warehouses (e.g., BigQuery, Snowflake).
Minimum 3 years in a leadership role, with a proven history of guiding teams to success.
Expertise in modern data orchestration and transformation frameworks (e.g., Airflow, DBT).
Deep knowledge of databases (schema design, query optimization) and familiarity with NoSQL use cases.
Solid understanding of cloud data services (e.g., AWS, GCP) and streaming platforms (e.g., Kafka, Pub/Sub).
Fluent in Python and SQL, with a backend development focus (services, APIs, CI/CD).
Excellent communication skills, capable of simplifying complex technical concepts.
Experience with, or strong interest in, leveraging AI and automation for efficiency gains.
Passionate about technology, proactively identifying and implementing tools to enhance development velocity and maintain high standards.
Adaptable and resilient in dynamic, fast-paced environments, consistently delivering results with a strong can-do attitude.
B.Sc. in Computer Science / Engineering or equivalent.
This position is open to all candidates.
 
Job ID: 8469343

Posted: 21/12/2025
Location: More than one
Job Type: Full Time
We are looking for an expert Data Engineer to build and evolve the data backbone for our R&D telemetry and performance analytics ecosystem. Responsibilities include processing large quantities of raw data from live systems at the cluster level: hardware, communication units, software, and efficiency indicators. You'll be part of a fast-paced R&D organization, where system behavior, schemas, and requirements evolve constantly. Your mission is to develop flexible, reliable, and scalable data-handling pipelines that can adapt to rapid change and deliver clean, trusted data for engineers and researchers.
What you'll be doing:
Build flexible data ingestion and transformation frameworks that can easily handle evolving schemas and changing data contracts.
Develop and maintain ETL/ELT workflows for refining, enriching, and classifying raw data into analytics-ready form.
Collaborate with R&D, hardware, DevOps, ML engineers, data scientists, and performance analysts to ensure accurate data collection from embedded systems, firmware, and performance tools.
Automate schema detection, versioning, and validation to ensure smooth evolution of data structures over time (see the sketch after this list).
Maintain data quality and reliability standards, including tagging, metadata management, and lineage tracking.
Enable self-service analytics by providing curated datasets, APIs, and Databricks notebooks.
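
A small, hedged sketch of the schema-automation bullet above, assuming the Databricks/Delta Lake stack the posting names: the explicit type check turns unexpected drift on critical columns into a loud failure, while Delta's mergeSchema option lets brand-new telemetry fields land without breaking the pipeline. Paths and column names are hypothetical.

```python
# Illustrative sketch; paths and columns are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

batch = spark.read.json("s3://example-telemetry/raw/2025-12-21/")

# Validation gate: fail fast if a known-critical column arrives with an
# unexpected type (silent coercion is worse than a failed run).
EXPECTED = {"device_id": "string", "latency_ms": "double"}
actual = dict(batch.dtypes)
for col, dtype in EXPECTED.items():
    if col in actual and actual[col] != dtype:
        raise ValueError(f"schema drift on {col}: {actual[col]} != {dtype}")

# Append with additive schema evolution: columns not seen before are
# merged into the Delta table's schema automatically.
(
    batch.write.format("delta")
    .option("mergeSchema", "true")
    .mode("append")
    .save("s3://example-telemetry/bronze/events")
)
```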
Requirements:
B.Sc. or M.Sc. in Computer Science, Computer Engineering, or a related field.
5+ years of experience in data engineering, ideally in telemetry, streaming, or performance analytics domains.
Proven experience with Databricks and Apache Spark (PySpark or Scala).
Understanding of streaming processes and their applications (e.g., Apache Kafka for ingestion, schema registry, event processing).
Proficiency in Python and SQL for data transformation and automation.
Demonstrated knowledge of schema evolution, data versioning, and data validation frameworks (e.g., Delta Lake, Great Expectations, Iceberg, or similar).
Experience working with cloud platforms (AWS, GCP, or Azure); AWS preferred.
Familiarity with data orchestration tools (Airflow, Prefect, or Dagster).
Experience handling time-series, telemetry, or real-time data from distributed systems.
Ways to stand out from the crowd:
Exposure to hardware, firmware, or embedded telemetry environments.
Knowledge of real-time analytics frameworks (Spark Structured Streaming, Flink, Kafka Streams).
Understanding of system performance metrics (latency, throughput, resource utilization).
Experience with data cataloging or governance tools (DataHub, Collibra, Alation).
Familiarity with CI/CD for data pipelines and infrastructure-as-code practices.
This position is open to all candidates.
 
Job ID: 8465345

Posted: 17/12/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
Design, implement, and maintain robust data pipelines and ETL/ELT processes on GCP (BigQuery, Dataflow, Pub/Sub, etc.).
Build, orchestrate, and monitor workflows using Apache Airflow / Cloud Composer (see the DAG sketch after this list).
Develop scalable data models to support analytics, reporting, and operational workloads.
Apply software engineering best practices to data engineering: modular design, code reuse, testing, and version control.
Manage GCP resources (BigQuery reservations, Cloud Composer/Airflow DAGs, Cloud Storage, Dataplex, IAM).
Optimize data storage, query performance, and cost through partitioning, clustering, caching, and monitoring.
Collaborate with DevOps/DataOps to ensure data infrastructure is secure, reliable, and compliant.
Partner with analysts and data scientists to understand requirements and translate them into efficient data solutions.
Mentor junior engineers, provide code reviews, and promote engineering best practices.
Act as a subject matter expert for GCP data engineering tools and services.
Define and enforce standards for metadata, cataloging, and data documentation.
Implement monitoring and alerting for pipeline health, data freshness, and data quality.
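
For the orchestration bullet referenced above, a minimal Cloud Composer / Airflow DAG sketch using BigQueryInsertJobOperator from the Google provider package; the DAG id, schedule, and dataset/table names are invented for illustration, and the `schedule` argument assumes Airflow 2.4+.

```python
# Illustrative sketch; dataset/table names are placeholders.
import pendulum
from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)

with DAG(
    dag_id="daily_events_rollup",
    schedule="0 3 * * *",  # daily at 03:00 UTC, after upstream loads
    start_date=pendulum.datetime(2025, 1, 1, tz="UTC"),
    catchup=False,
) as dag:
    build_daily_rollup = BigQueryInsertJobOperator(
        task_id="build_daily_rollup",
        configuration={
            "query": {
                # Partitioning the destination keeps scans (and cost) bounded.
                "query": """
                    CREATE OR REPLACE TABLE analytics.daily_events
                    PARTITION BY event_date AS
                    SELECT DATE(event_ts) AS event_date, COUNT(*) AS events
                    FROM raw.events
                    GROUP BY event_date
                """,
                "useLegacySql": False,
            }
        },
    )
```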
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
6+ years of professional experience in data engineering or similar roles, with 3+ years of hands-on work in a cloud environment, preferably on GCP.
Strong proficiency with BigQuery, Dataflow (Apache Beam), Pub/Sub, and Cloud Composer (Airflow).
Expert-level Python development skills, including object-oriented programming (OOP), testing, and code optimization.
Strong data modeling skills (dimensional modeling, star/snowflake schemas, normalized/denormalized designs).
Solid SQL expertise and experience with data warehousing concepts.
Familiarity with CI/CD, Terraform/Infrastructure as Code, and modern data observability tools.
Exposure to AI tools and methodologies (e.g., Vertex AI).
Strong problem-solving and analytical skills.
Ability to communicate complex technical concepts to non-technical stakeholders.
Experience working in agile, cross-functional teams.
This position is open to all candidates.
 
Job ID: 8462182

Location: Tel Aviv-Yafo
Job Type: Full Time
This role has been designed as Hybrid with an expectation that you will work on average 2 days per week from an HPE office.
Job Description:
We are looking for a highly skilled Senior Data Engineer with strong architectural expertise to design and evolve our next-generation data platform. You will define the technical vision, build scalable and reliable data systems, and guide the long-term architecture that powers analytics, operational decision-making, and data-driven products across the organization.
This role is both strategic and hands-on. You will evaluate modern data technologies, define engineering best practices, and lead the implementation of robust, high-performance data solutions, including the design, build, and lifecycle management of data pipelines that support batch, streaming, and near-real-time workloads.
What You'll Do
Architecture & Strategy
Own the architecture of our data platform, ensuring scalability, performance, reliability, and security.
Define standards and best practices for data modeling, transformation, orchestration, governance, and lifecycle management.
Evaluate and integrate modern data technologies and frameworks that align with our long-term platform strategy.
Collaborate with engineering and product leadership to shape the technical roadmap.
Engineering & Delivery
Design, build, and manage scalable, resilient data pipelines for batch, streaming, and event-driven workloads.
Develop clean, high-quality data models and schemas to support analytics, BI, operational systems, and ML workflows.
Implement data quality, lineage, observability, and automated testing frameworks (see the sketch after this list).
Build ingestion patterns for APIs, event streams, files, and third-party data sources.
Optimize compute, storage, and transformation layers for performance and cost efficiency.
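
As a rough, self-contained illustration of the data-quality and automated-testing bullet above (not HPE's actual framework), here is a compact Python sketch of generic checks written as plain assertions; the rows and thresholds are invented, and in practice the assertions would run against Snowflake query results under a test runner.

```python
# Illustrative sketch; rows stand in for warehouse query results.
from datetime import datetime, timedelta, timezone

rows = [
    {"order_id": 1, "amount": 42.0, "loaded_at": datetime.now(timezone.utc)},
    {"order_id": 2, "amount": 17.5, "loaded_at": datetime.now(timezone.utc)},
]

def check_not_null(rows, column):
    # Null-rate check: a NULL key column usually means a broken upstream join.
    assert all(r[column] is not None for r in rows), f"{column} has NULLs"

def check_freshness(rows, column, max_age=timedelta(hours=2)):
    # Freshness check: stale data is often worse than missing data,
    # because dashboards keep rendering it without complaint.
    newest = max(r[column] for r in rows)
    assert datetime.now(timezone.utc) - newest <= max_age, "data is stale"

check_not_null(rows, "order_id")
check_freshness(rows, "loaded_at")
print("quality checks passed")
```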
Leadership & Collaboration
Serve as a senior technical leader and mentor within the data engineering team.
Lead architecture reviews, design discussions, and cross-team engineering initiatives.
Work closely with analysts, data scientists, software engineers, and product owners to define and deliver data solutions.
Communicate architectural decisions and trade-offs to technical and non-technical stakeholders.
Requirements:
6-10+ years of experience in Data Engineering, with demonstrated architectural ownership.
Expert-level experience with Snowflake (mandatory), including performance optimization, data modeling, security, and ecosystem components.
Expert proficiency in SQL and strong Python skills for pipeline development and automation.
Experience with modern orchestration tools (Airflow, Dagster, Prefect, or equivalent).
Strong understanding of ELT/ETL patterns, distributed processing, and data lifecycle management.
Familiarity with streaming/event technologies (Kafka, Kinesis, Pub/Sub, etc.).
Experience implementing data quality, observability, and lineage solutions.
Solid understanding of cloud infrastructure (AWS, GCP, or Azure).
Strong background in DataOps practices: CI/CD, testing, version control, automation.
Proven leadership in driving architectural direction and mentoring engineering teams.
Nice to Have
Experience with data governance or metadata management tools.
Hands-on experience with DBT, including modeling, testing, documentation, and advanced features.
Exposure to machine learning pipelines, feature stores, or MLOps.
Experience with Terraform, CloudFormation, or other IaC tools.
Background designing systems for high scale, security, or regulated environments.
This position is open to all candidates.
 
Job ID: 8461496

Posted: 16/12/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a highly motivated and experienced Data Engineering Team Lead to guide a global team responsible for building and maintaining the data infrastructure that powers our products. This team collaborates closely with software engineers, data scientists, product managers, analysts, and external partners to deliver scalable, high-quality data solutions that drive business impact.

Key Responsibilities

Lead a global team of software engineers, centered in Israel, operating under the Agile Scrum methodology.
Own the team's execution, quality, and alignment to data architecture and company-wide engineering standards.
Work together with product managers to define priorities for each sprint and align engineering efforts to business goals.
Guide the team's involvement in cross-functional projects alongside Software, ML, Analytics, and Partner teams.
Ensure secure, reliable, and efficient access to internal and third-party data sources across all products.
Actively participate in roadmap planning, resource allocation, and technical design reviews.
Provide mentorship, technical guidance, and growth opportunities to team members.
Requirements:
5+ years of hands-on experience in data engineering, with at least 2 years in a team or technical leadership role.
Strong technical background in building and maintaining production-grade data pipelines using tools like Airflow, Spark, Kafka, or similar.
Excellent communication skills, with the ability to clearly convey technical ideas to both technical and non-technical stakeholders.
Strong cross-functional collaboration skills, including working directly with product managers on sprint planning and aligning engineering with business priorities.
Proven experience working with both structured and unstructured data, including RDBMS and NoSQL solutions.
Deep understanding of cloud-based data platforms (preferably AWS) and infrastructure as code.

Nice to have:
Familiarity with ML pipelines, feature stores, and supporting data scientists in production environments.
Familiarity with Generative AI tools such as Cursor, GitHub Copilot, Claude Code or similar technologies.
Strong grasp of data security, reliability, and compliance practices.
Exposure to real-time or near-real-time data systems in mobility, IoT, or similar domains.
This position is open to all candidates.
 
Job ID: 8460082

Location: Tel Aviv-Yafo
Job Type: Full Time
The Senior Data Engineer position is a central role in the Tech Org. The Data Engineering (DE) team is a Change Agent Team that plays a significant role in the ongoing migration to cloud technologies, now at advanced stages. The ideal candidate is a senior data engineer with a strong technical background in data infrastructure, data architecture design, and building robust data pipelines. The candidate must also have the collaborative abilities to interact effectively with product managers, data scientists, onboarding engineers, and support staff.
Responsibilities:
Deploy and maintain critical data pipelines in production.
Drive strategic technological initiatives and long-term plans from initial exploration and POC to going live in a hectic production environment.
Design infrastructural data services, coordinating with the Architecture team, R&D teams, Data Scientists, and product managers to build scalable data solutions.
Work in an Agile process with Product Managers and other tech teams.
End-to-end responsibility for the development of data crunching and manipulation processes within the product.
Design and implement data pipelines and data marts.
Create data tools for various teams (e.g., onboarding teams) that assist them in building, testing, and optimizing the delivery of the product.
Explore and implement new data technologies to support data infrastructure.
Work closely with the core data science team to implement and maintain ML features and tools.
Requirements:
B.Sc. in Computer Science or equivalent.
7+ years of extensive SQL experience (preferably in a production environment).
Experience with programming languages (preferably Python) is a must.
Experience with "Big Data" environments, tools, and data modeling (preferably in a production environment).
Strong capability in schema design and data modeling.
Understanding of micro-services architecture.
Familiarity with Airflow, ETL tools, Snowflake, and MSSQL.
Quick self-learning and good problem-solving capabilities.
Good communication skills and a collaborative approach.
Process- and detail-oriented.
Passion for solving complex data problems.
This position is open to all candidates.
 
Job ID: 8459882