Say hello to your next job

For the first time in Israel: AI-based recommendations that improve your chances of finding a job

Data Engineer

Location: Tel Aviv-Yafo
Job Type: Full Time
As a Data Engineer at Meta, you will shape the future of the people-facing and business-facing products we build across our entire family of applications. You will apply your technical skills and analytical mindset to design and build some of the world's most extensive data sets, helping to craft experiences for billions of people and hundreds of millions of businesses worldwide.
In this role, you will collaborate with software engineering, data science, and product management teams to design and build scalable data solutions across Meta to optimize growth, strategy, and user experience for our 3 billion plus users, as well as our internal employee community.
You will be at the forefront of identifying and solving some of the most interesting data challenges at a scale few companies can match. By joining Meta, you will become part of a world-class data engineering community dedicated to skill development and career growth in data engineering and beyond.
Data Engineering: You will guide teams by building optimal data artifacts (including datasets and visualizations) to address key questions. You will refine our systems, design logging solutions, and create scalable data models. Ensuring data security and quality, and with a focus on efficiency, you will suggest architecture and development approaches and data management standards to address complex analytical problems.
Product leadership: You will use data to shape product development, identify new opportunities, and tackle upcoming challenges. You'll ensure our products add value for users and businesses, by prioritizing projects, and driving innovative solutions to respond to challenges or opportunities.
Communication and influence: You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations. You will build credibility through structure and clarity, and be a trusted strategic partner.
Data Engineer, Product Analytics Responsibilities
Collaborate with engineers, product managers, and data scientists to understand data needs, representing key data insights in a meaningful way
Design, build, and launch collections of sophisticated data models and visualizations that support multiple use cases across different products or domains
Define and manage Service Level Agreements for all data sets in allocated areas of ownership
Solve challenging data integration problems, utilizing optimal Extract, Transform, Load (ETL) patterns, frameworks, query techniques, sourcing from structured and unstructured data sources
Improve logging
Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts
Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts
Influence product and cross-functional teams to identify data opportunities to drive impact.
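The SLA bullet above implies concrete freshness checks on each owned dataset. As a minimal sketch (the function name, thresholds, and timestamps are hypothetical illustrations, not Meta's actual tooling), such a check might look like:

```python
from datetime import datetime, timedelta

# Hypothetical SLA rule: a dataset meets its SLA if its most recent
# landing is no older than the agreed maximum lag.
def within_sla(last_landed: datetime, now: datetime, max_lag: timedelta) -> bool:
    """Return True if the dataset's latest landing is within the agreed lag."""
    return now - last_landed <= max_lag

now = datetime(2025, 12, 29, 12, 0)
daily_sla = timedelta(hours=26)  # illustrative: daily pipeline plus a 2-hour grace window

print(within_sla(datetime(2025, 12, 28, 23, 0), now, daily_sla))  # True (13h lag)
print(within_sla(datetime(2025, 12, 27, 6, 0), now, daily_sla))   # False (54h lag)
```

In practice a check like this would run per partition and feed an alerting system rather than print to stdout.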
Requirements:
Minimum Qualifications
Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent
2+ years of experience where the primary responsibility involves working with data. This could include roles such as data analyst, data scientist, data engineer, or similar positions
2+ years of experience with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala)
Preferred Qualifications
Master's or Ph.D. degree in a STEM field.
This position is open to all candidates.
 
Job ID: 8478332

Location: Tel Aviv-Yafo
Job Type: Full Time
As a Data Engineer at Meta, you will shape the future of the people-facing and business-facing products we build across our entire family of applications. You will apply your technical skills and analytical mindset to design and build some of the world's most extensive data sets, helping to craft experiences for billions of people and hundreds of millions of businesses worldwide.
In this role, you will collaborate with software engineering, data science, and product management teams to design and build scalable data solutions across Meta to optimize growth, strategy, and user experience for our 3 billion plus users, as well as our internal employee community.
You will be at the forefront of identifying and solving some of the most interesting data challenges at a scale few companies can match. By joining Meta, you will become part of a world-class data engineering community dedicated to skill development and career growth in data engineering and beyond.
Data Engineering: You will guide teams by building optimal data artifacts (including datasets and visualizations) to address key questions. You will refine our systems, design logging solutions, and create scalable data models. Ensuring data security and quality, and with a focus on efficiency, you will suggest architecture and development approaches and data management standards to address complex analytical problems.
Product leadership: You will use data to shape product development, identify new opportunities, and tackle upcoming challenges. You'll ensure our products add value for users and businesses, by prioritizing projects, and driving innovative solutions to respond to challenges or opportunities.
Communication and influence: You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations. You will build credibility through structure and clarity, and be a trusted strategic partner.
Data Engineer, Product Analytics Responsibilities
Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems
Create and contribute to frameworks that improve the efficacy of logging data, while working with data infrastructure to triage issues and resolve
Collaborate with engineers, product managers, and data scientists to understand data needs, representing key data insights in a meaningful way
Define and manage Service Level Agreements for all data sets in allocated areas of ownership
Determine and implement the security model based on privacy requirements, confirm safeguards are followed, address data quality issues, and evolve governance processes within allocated areas of ownership
Design, build, and launch collections of sophisticated data models and visualizations that support multiple use cases across different products or domains
Solve our most challenging data integration problems, utilizing optimal Extract, Transform, Load (ETL) patterns, frameworks, query techniques, sourcing from structured and unstructured data sources
Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts
Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts.
Requirements:
Minimum Qualifications
Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent
4+ years of experience where the primary responsibility involves working with data. This could include roles such as data analyst, data scientist, data engineer, or similar positions
4+ years of experience (or 2+ years with a Ph.D.) with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala)
Preferred Qualifications
Master's or Ph.D. degree in a STEM field.
This position is open to all candidates.
 
Job ID: 8478330

Location: Tel Aviv-Yafo
Job Type: Full Time
As a Data Engineer at Meta, you will shape the future of the people-facing and business-facing products we build across our entire family of applications (Facebook, Instagram, Messenger, WhatsApp, Reality Labs, Threads). You will apply your technical skills and analytical mindset to design and build some of the world's most extensive data sets, helping to craft experiences for billions of people and hundreds of millions of businesses worldwide.
In this role, you will collaborate with software engineering, data science, and product management teams to design and build scalable data solutions across Meta to optimize growth, strategy, and user experience for our 3 billion plus users, as well as our internal employee community.
You will be at the forefront of identifying and solving some of the most interesting data challenges at a scale few companies can match. By joining Meta, you will become part of a world-class data engineering community dedicated to skill development and career growth in data engineering and beyond.
Data Engineering: You will guide teams by building optimal data artifacts (including datasets and visualizations) to address key questions. You will refine our systems, design logging solutions, and create scalable data models. Ensuring data security and quality, and with a focus on efficiency, you will suggest architecture and development approaches and data management standards to address complex analytical problems.
Product leadership: You will use data to shape product development, identify new opportunities, and tackle upcoming challenges. You'll ensure our products add value for users and businesses, by prioritizing projects, and driving innovative solutions to respond to challenges or opportunities.
Communication and influence: You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations. You will build credibility through structure and clarity, and be a trusted strategic partner.
Data Engineer, Product Analytics Responsibilities
Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems
Create and contribute to frameworks that improve the efficacy of logging data, while working with data infrastructure to triage issues and resolve
Collaborate with engineers, product managers, and data scientists to understand data needs, representing key data insights visually in a meaningful way
Define and manage Service Level Agreements for all data sets in allocated areas of ownership
Determine and implement the security model based on privacy requirements, confirm safeguards are followed, address data quality issues, and evolve governance processes within allocated areas of ownership
Design, build, and launch collections of sophisticated data models and visualizations that support multiple use cases across different products or domains
Solve our most challenging data integration problems, utilizing optimal Extract, Transform, Load (ETL) patterns, frameworks, query techniques, sourcing from structured and unstructured data sources
Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts
Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts.
Requirements:
Minimum Qualifications
Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent
7+ years of experience where the primary responsibility involves working with data. This could include roles such as data analyst, data scientist, data engineer, or similar positions
7+ years of experience with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala)
Preferred Qualifications
Master's or Ph.D. degree in a STEM field.
This position is open to all candidates.
 
Job ID: 8478326

Location: Tel Aviv-Yafo
Job Type: Full Time
we are seeking a Senior Data Infra Engineer. You will be responsible for designing and building all data, ML pipelines, data tools, and cloud infrastructure required to transform massive, fragmented data into a format that supports processes and standards. Your work directly empowers business stakeholders to gain comprehensive visibility, automate key processes, and drive strategic impact across the company.
Responsibilities
Design and Build Data Infrastructure: Design, plan, and build all aspects of the platform's data, ML pipelines, and supporting infrastructure.
Optimize Cloud Data Lake: Build and optimize an AWS-based Data Lake using cloud architecture best practices for partitioning, metadata management, and security to support enterprise-scale operations.
Lead Project Delivery: Lead end-to-end data projects from initial infrastructure design through to production monitoring and optimization.
Solve Integration Challenges: Implement optimal ETL/ELT patterns and query techniques to solve challenging data integration problems sourced from structured and unstructured data.
Requirements:
Experience: 5+ years of hands-on experience designing and maintaining big data pipelines in on-premises or hybrid cloud SaaS environments.
Programming & Databases: Proficiency in one or more programming languages (Python, Scala, Java, or Go) and expertise in both SQL and NoSQL databases.
Engineering Practice: Proven experience with software engineering best practices, including testing, code reviews, design documentation, and CI/CD.
AWS Experience: Experience developing data pipelines and maintaining data lakes, specifically on AWS.
Streaming & Orchestration: Familiarity with Kafka and workflow orchestration tools like Airflow.
Preferred Qualifications
Containerization & DevOps: Familiarity with Docker, Kubernetes (K8S), and Terraform.
Modern Data Stack: Familiarity with the following tools is an advantage: Kafka, Databricks, Airflow, Snowflake, MongoDB, Open Table Format (Iceberg/ Delta)
ML/AI Infrastructure: Experience building and designing ML/AI-driven production infrastructures and pipelines.
This position is open to all candidates.
 
Job ID: 8478237

29/12/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a skilled and motivated Data Engineer with expertise in Elasticsearch, cloud technologies, and Kafka. As a data engineer, you will be responsible for designing, building and maintaining scalable and efficient data pipelines that will support our organization's data processing needs.
The role will entail:
Design and develop data platforms based on Elasticsearch, Databricks, and Kafka
Build and maintain data pipelines that are efficient, reliable and scalable
Collaborate with cross-functional teams to identify data requirements and design solutions that meet those requirements
Write efficient and optimized code that can handle large volumes of data
Implement data quality checks to ensure accuracy and completeness of the data
Troubleshoot and resolve data pipeline issues in a timely manner.
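The data-quality bullet above can be made concrete with a small example. This is an illustrative sketch only; the required fields and record shape are assumptions, not part of the posting:

```python
# Minimal completeness check of the kind described above: partition a
# batch into valid and invalid records before indexing downstream.
REQUIRED_FIELDS = ("event_id", "timestamp", "source")  # hypothetical schema

def split_by_completeness(records):
    """Split records into (valid, invalid) by required-field presence."""
    valid, invalid = [], []
    for rec in records:
        if all(rec.get(f) not in (None, "") for f in REQUIRED_FIELDS):
            valid.append(rec)
        else:
            invalid.append(rec)
    return valid, invalid

batch = [
    {"event_id": "e1", "timestamp": "2025-12-29T10:00:00Z", "source": "kafka"},
    {"event_id": "e2", "timestamp": "", "source": "kafka"},  # incomplete
]
valid, invalid = split_by_completeness(batch)
print(len(valid), len(invalid))  # 1 1
```

In a real pipeline the invalid bucket would typically go to a dead-letter topic or quarantine index for inspection rather than being dropped.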
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field
3+ years of experience in data engineering
Expertise in Elasticsearch, cloud technologies (such as AWS, Azure, or GCP), Kafka and Databricks
Proficiency in programming languages such as Python, Java, or Scala
Experience with distributed systems, data warehousing and ETL processes
Experience with container environments such as AKS, EKS, or OpenShift is a plus
High security clearance is a plus.
This position is open to all candidates.
 
Job ID: 8477781

29/12/2025
Job Type: Full Time
Welcome to Chargeflow
Chargeflow is at the forefront of fintech and AI innovation, backed by leading venture capital firms. Our mission is to build a fraud-free global commerce ecosystem by leveraging the newest technology, freeing online businesses to focus on their core ideas and growth. We are building the future, and we need you to help shape it.
Who We're Looking For
We are seeking an experienced Senior Data Platform Engineer to design and scale the robust, cost-efficient infrastructure powering our fraud prevention solution. In this role, you will architect distributed systems and cloud-native technologies to safeguard our clients' revenue while driving technical initiatives that align with business objectives and operational efficiency. Our goal is to equip clients with resilient safeguards against chargebacks, empowering them to protect their revenue and optimize their profitability.
Your Arena
Infrastructure & FinOps: Design scalable, robust backend services while owning cloud cost management to ensure maximum resource efficiency.
High-Performance Engineering: Architect distributed systems and real-time pipelines capable of processing millions of daily transactions.
Operational Excellence: Champion Infrastructure-as-Code (IaC), security, and observability best practices across the R&D organization.
Leadership: Lead technical initiatives, mentor engineers, and drive cross-functional collaboration to solve complex infrastructure challenges.
Requirements:
What It Takes - Must-Haves
Experience: 5+ years in data platform engineering, backend engineering, or infrastructure engineering.
Language Proficiency: Strong proficiency in Python and software engineering principles.
Cloud Native: Extensive experience with AWS, GCP, or Azure and cloud-native architectures.
Databases: Deep knowledge of both relational (e.g., PostgreSQL) and NoSQL databases, including performance optimization, cost tuning, and scaling strategies.
Architecture: Strong experience designing and implementing RESTful APIs, microservices architectures, and event-driven systems.
Containerization & IaC: Experience with containerization technologies (Docker, Kubernetes) and Infrastructure-as-Code (e.g., Terraform, CloudFormation).
System Design: Strong understanding of distributed systems principles, concurrency, and scalability patterns.
Nice-to-Haves
Strong Advantage: Apache Iceberg (Lakehouse/S3/Glue), Apache Spark (optimization), message queues (Kafka/Kinesis), graph databases (schema design, cluster setup, and ongoing management of engines like Amazon Neptune or Neo4j).
Tech Stack: Orchestration (Temporal/Dagster/Airflow), modern data stack (dbt/DuckDB), streaming (Flink/Kafka Streams), observability (Datadog/Grafana).
Skills: FinOps (Cost Explorer/Spot Instances), CI/CD and DevOps, data governance (GDPR), Pydantic, and mentorship/leadership experience.
Our Story
Chargeflow is a leading force in fintech innovation, tackling the pervasive issue of chargeback fraud that undermines online businesses. Born from a deep passion for technology and a commitment to excel in eCommerce and fintech, we've developed an AI-driven solution aimed at combating the frustrations of credit card disputes. Our diverse expertise in fintech, eCommerce, and technology positions us as a beacon for merchants facing unjust chargebacks, supported by a unique success-based approach.
Backed by $49M led by Viola Growth, OpenView, Sequoia Capital and other top tier global investors, Chargeflow has embarked on a product-led growth journey. Today, we represent a tight-knit community of passionate individuals and entrepreneurs, united in our mission to revolutionize eCommerce and fight against chargeback fraud, marking us as pioneers in protecting online business revenues.
This position is open to all candidates.
 
Job ID: 8476565

28/12/2025
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We're seeking a Data Engineer to architect and develop sophisticated data solutions using advanced Spark, PySpark, Databricks and EMR implementations in our mission to transform the cyber-security breach readiness and response market.

Join us in crafting cutting-edge solutions for the cyber world using Spark/PySpark ETLs and data flow processes. Dive into the realm of multi-Cloud environments while collaborating closely with investigators to fine-tune PySpark performance. Harness the power of top-notch technologies like Databricks to elevate our technical projects, scaling them for efficiency. Embrace innovation as you research and implement new techniques. Evolve with us as a key member of the R&D team.

Technical Impact:
Design and implement complex data processing architectures for cloud security analysis.
Optimize and scale critical PySpark workflows across multi-cloud environments.
Develop innovative solutions for processing and analyzing massive security datasets.
Drive technical excellence through sophisticated ETL implementations.
Contribute to architectural decisions and technical direction.

Core Responsibilities:
Build robust, scalable data pipelines for security event processing.
Optimize performance of large-scale PySpark operations.
Implement advanced data solutions using Databricks and cloud-native technologies.
Research and prototype new data processing methodologies.
Provide technical guidance and best practices for data engineering initiatives.
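Optimizing large-scale PySpark operations, as described above, often starts with partition sizing. A back-of-the-envelope heuristic (the target size and parallelism figures are illustrative assumptions, not the team's actual settings) might look like:

```python
import math

# Rough partition-count heuristic used when tuning Spark jobs: aim for a
# target partition size, but never drop below the cluster's default
# parallelism so all cores stay busy. All numbers are illustrative.
def suggest_partitions(total_bytes: int,
                       target_partition_bytes: int = 128 * 1024 * 1024,
                       default_parallelism: int = 200) -> int:
    by_size = math.ceil(total_bytes / target_partition_bytes)
    return max(by_size, default_parallelism)

print(suggest_partitions(10 * 1024**3))   # 200 (80 partitions by size; parallelism floor wins)
print(suggest_partitions(100 * 1024**3))  # 800
```

The result would typically feed `repartition()` or the shuffle-partition setting; real tuning also accounts for skew and downstream file sizes.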

Requirements:
Preferred Qualifications:
Experience with security-focused data solutions.
Deep expertise with Splunk and AWS services (S3, SQS, SNS, Stream).
Advanced understanding of distributed systems.
Strong Linux systems knowledge.
Experience with real-time data processing architectures.

Who You Are:
4+ years of hands-on data engineering experience in cloud-based SaaS environments.
Deep expertise in PySpark, Python, and SQL optimization.
Advanced knowledge of AWS, Azure, and GCP cloud architectures.
Proven track record implementing production-scale data systems.
Extensive experience with distributed computing and big data processing.
Strong collaboration skills and technical communication abilities.
This position is open to all candidates.
 
Job ID: 8476340

28/12/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Data Engineer to join our BI team within the R&D department. This role is all about building and maintaining robust data pipelines and enabling data-driven decision-making across all departments.

Responsibilities:
Develop and maintain ETL processes and data pipelines to ensure smooth, accurate data flow.
Design, build, and manage Airflow DAGs to orchestrate complex data workflows and ensure reliable execution of ETL and analytics tasks.
Design and optimize relational databases, including conceptual and physical data modeling (ERD), schema design, and performance tuning to support scalable analytics and business applications.
Collaborate with cross-functional teams to translate business needs into technical solutions.
Build and maintain algorithm-driven automation processes that streamline workflows and reduce manual effort.
Build and maintain custom data scripts for integrations with brand and publisher partners.
Support data visualization efforts through BI tools.
Gain in-depth understanding of discrepancy processes across departments and improve data accuracy.
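The Airflow responsibility above is, at its core, about resolving task dependencies into a valid run order. A minimal stdlib sketch of that idea (the task names are invented for illustration, not from the posting):

```python
from graphlib import TopologicalSorter

# A DAG expressed as task -> set of upstream tasks, the same dependency
# model an Airflow DAG encodes. graphlib resolves it into a run order.
deps = {
    "load_raw": set(),
    "clean": {"load_raw"},
    "aggregate": {"clean"},
    "publish_dashboard": {"aggregate"},
}
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['load_raw', 'clean', 'aggregate', 'publish_dashboard']
```

Airflow adds scheduling, retries, and backfills on top, but the dependency-resolution core is exactly this.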
Requirements:
3+ years of hands-on experience in data engineering or BI development.
Strong SQL skills (complex queries, stored procedures).
Practical experience with ETL processes and data pipelines.
Experience with Airflow, Dagster, or similar orchestration tools.
1+ years of hands-on experience with Python.
Knowledge of database architecture (ERD, OLTP/OLAP models).
Strong analytical mindset with the ability to translate business requirements into data solutions.
B.Sc. degree in Information Systems Engineering, Industrial Engineering, or a related field.
Experience with dbt, BigQuery, or AWS is an advantage.
This position is open to all candidates.
 
Job ID: 8476306

25/12/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
We are building next-generation GenAI security intelligence and SaaS Security Posture Management (SSPM) solutions that protect enterprises worldwide. If you enjoy turning complex security data into actionable insights and delivering end-to-end systems, this role is for you.

About the role
You will own, build, and maintain our Pythonic data pipeline and enrichment system on top of PostgreSQL and BigQuery. This system powers security analytics, detections, and intelligence. A core part of your job will be to design and implement new components, improve reliability and performance, and ensure data quality and observability.



Key Responsibilities
Own, build, and maintain production data pipelines and enrichment services using Python, PostgreSQL, and BigQuery.
Architect data systems end to end, including design, deployment, monitoring, and iterative improvement.
Analyze complex security datasets and SaaS telemetry to uncover risks, patterns, and opportunities.
Research emerging threat vectors and contribute to automated intelligence feeds and published reports.
Work across security domains such as SSPM, Shadow Integrations, DLP, and GenAI Protection.
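The enrichment system described above can be pictured as a join between raw telemetry and reference data before loading. A toy sketch, with all field names and the risk table invented for illustration:

```python
# Toy enrichment step: annotate raw SaaS telemetry rows from a
# reference table. The APP_RISK table and field names are hypothetical.
APP_RISK = {"slack": "low", "unknown-llm-plugin": "high"}

def enrich(events):
    """Yield each event with a 'risk' field looked up from APP_RISK."""
    for ev in events:
        yield {**ev, "risk": APP_RISK.get(ev["app"], "unreviewed")}

events = [{"app": "slack", "user": "u1"}, {"app": "new-tool", "user": "u2"}]
enriched = list(enrich(events))
print([e["risk"] for e in enriched])  # ['low', 'unreviewed']
```

At warehouse scale this becomes a SQL join against a reference table in PostgreSQL or BigQuery, but the shape of the operation is the same.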
Requirements:
4+ years in data-focused roles (engineering, analytics, science)
Strong SQL and Python skills
Experience with cloud platforms (GCP, AWS, Azure) and modern data warehouses (BigQuery, Databricks)
Proven ability to build data infrastructure from scratch
Ability to turn complex data into actionable insights
Fast learner with systematic problem-solving skills
Comfortable with technical research in unfamiliar domains
Independent and determined, with strong collaboration skills
BSc in Computer Science, Mathematics, Statistics, or related field
Excellent communication skills for technical and non-technical audiences
This position is open to all candidates.
 
Job ID: 8473453

Location: Merkaz
Job Type: Full Time
We are looking for a Data Engineer.
What you'll do:
Design, build, and optimize large-scale data pipelines and workflows for both batch and real-time processing.
Architect and maintain Airflow-based orchestration frameworks to manage complex data dependencies and data movement.
Develop high-quality, maintainable data transformation and integration processes across diverse data sources and domains.
Lead the design and implementation of scalable, cloud-based data infrastructure ensuring reliability, performance, and cost efficiency.
Drive data modeling and data architecture practices to ensure consistency, reusability, and quality across systems.
Collaborate closely with Product, R&D, BizDev, and Data Science teams to define data requirements, integrations, and delivery models.
Own the technical roadmap for key data initiatives, from design to production deployment.
Requirements:
6+ years of experience as a Data Engineer working on large-scale, production-grade systems.
Proven experience architecting and implementing data pipelines and workflows in Airflow - must be hands-on and design-level proficient.
Strong experience with real-time or streaming data processing (Kafka, Event Hubs, Kinesis, or similar).
Advanced proficiency in Python for data processing and automation.
Strong SQL skills and deep understanding of data modeling, ETL/ELT frameworks, and DWH methodologies.
Experience with cloud-based data ecosystems (Azure, AWS, or GCP) and related services (e.g., Snowflake, BigQuery, Redshift).
Experience with Docker, Kubernetes, and modern CI/CD practices.
Excellent communication and collaboration skills with experience working across multiple stakeholders and business units.
A proactive, ownership-driven approach with the ability to lead complex projects end-to-end.
This position is open to all candidates.
 
Job ID: 8473165

Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Data Engineer to join our team and help shape a modern, scalable data platform. You'll work with cutting-edge AWS technologies, Spark, and Iceberg to build pipelines that keep our data reliable, discoverable, and ready for analytics.

What's the Job?
Design and maintain scalable data pipelines on AWS (EMR, S3, Glue, Iceberg).
Transform raw, semi-structured data into analytics-ready datasets using Spark.
Automate schema management, validation, and quality checks.
Optimize performance and costs with smart partitioning, tuning, and monitoring.
Research and evaluate new technologies, proposing solutions that improve scalability and efficiency.
Plan and execute complex data projects with foresight and attention to long-term maintainability.
Collaborate with engineers, analysts, and stakeholders to deliver trusted data for reporting and dashboards.
Contribute to CI/CD practices, testing, and automation.
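The "smart partitioning" mentioned above usually begins with a consistent physical layout. A sketch of a Hive-style partition path builder of the sort S3 data lakes use (the bucket and dataset names are made up for the example):

```python
from datetime import date

# Hive-style date partitioning: engines like Athena and Spark can prune
# partitions by matching key=value path segments against query filters.
def partition_path(bucket: str, dataset: str, day: date) -> str:
    """Build an S3 prefix for one daily partition of a dataset."""
    return (f"s3://{bucket}/{dataset}/"
            f"year={day.year}/month={day.month:02d}/day={day.day:02d}/")

print(partition_path("acme-lake", "events", date(2025, 12, 29)))
# s3://acme-lake/events/year=2025/month=12/day=29/
```

Iceberg hides this layout behind table metadata, but the pruning benefit comes from the same idea: aligning physical layout with common query filters.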
Requirements:
Strong coding skills in Python (PySpark, pandas, boto3).
Experience with big data frameworks (Spark) and schema evolution.
Knowledge of lakehouse technologies (especially Apache Iceberg).
Familiarity with AWS services: EMR, S3, Glue, Athena.
Experience with orchestration tools like Airflow.
Solid understanding of CI/CD and version control (GitHub Actions).
Ability to research, evaluate, and plan ahead for new solutions and complex projects.

Nice to have:
Experience with MongoDB or other NoSQL databases.
Experience with stream processing (e.g., Kafka, Kinesis, Spark Structured Streaming).
Ability to create visualized dashboards and work with Looker (Enterprise).
Infrastructure-as-code (Terraform).
Strong debugging and troubleshooting skills for distributed systems.
This position is open to all candidates.
 
8471922
Location: Tel Aviv-Yafo
Job Type: Full Time
The Performance Marketing Analytics team is seeking a highly skilled Senior Data Platform Engineer to establish, operate, and maintain our dedicated Performance Marketing Data Mart within the Snowflake Cloud Data Platform. This is a critical, high-autonomy role responsible for the end-to-end data lifecycle, ensuring data quality, operational excellence, and governance within the new environment. This role will directly enable the Performance Marketing team's vision for data-driven marketing and increased ownership of our analytical infrastructure.
Responsibilities
Snowflake Environment Management
Administer the Snowflake account (roles, permissions, cost monitoring, performance tuning).
Implement best practices for security, PII handling, and data governance.
Act as the subject matter expert for Snowflake within the team.
DevOps & Model Engineering
Establish and manage the development and production environments.
Maintain CI/CD pipeline using GitLab to automate the build, test, and deployment process.
Implement standard engineering practices such as code testing and commit reviews to prevent tech debt.
Data Operations & Reliability
Monitor pipeline executions to ensure timely, accurate, and reliable data.
Set up alerting, incident management, and SLAs for marketing data operations.
Troubleshoot and resolve platform incidents quickly to minimize business disruption.
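As a rough illustration of the alerting and SLA responsibilities above, a data-freshness check might look like the following. The pipeline names and SLA windows are invented for the example, not actual configuration.

```python
# Hedged sketch: flag pipelines whose last successful run breaches its SLA.
# Pipeline names and SLA windows below are hypothetical examples.
from datetime import datetime, timedelta, timezone

SLAS = {
    "marketing_spend_daily": timedelta(hours=26),   # daily load + 2h grace
    "campaign_events_hourly": timedelta(hours=2),
}

def breached_slas(last_success: dict, now: datetime) -> list[str]:
    """Return pipelines with no success record, or one older than its SLA."""
    breached = []
    for name, sla in SLAS.items():
        last = last_success.get(name)
        if last is None or now - last > sla:
            breached.append(name)
    return breached

now = datetime(2025, 1, 2, 12, 0, tzinfo=timezone.utc)
runs = {
    "marketing_spend_daily": datetime(2025, 1, 1, 6, 0, tzinfo=timezone.utc),
    "campaign_events_hourly": datetime(2025, 1, 2, 11, 0, tzinfo=timezone.utc),
}
print(breached_slas(runs, now))  # ['marketing_spend_daily']
```

In practice the `last_success` timestamps would come from the orchestrator's metadata, and a breach would page on-call rather than just print.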
Tooling & Integration
Support the integration of BI, monitoring, and orchestration tools.
Evaluate and implement observability and logging solutions for platform reliability.
Governance & Compliance
Ensure alignment with Entain data governance and compliance policies.
Document operational procedures, platform configurations, and security controls.
Act as the team liaison with procurement, infrastructure, and security teams for platform-related topics.
Collaboration & Enablement
Work closely with BI, analysts and data engineers, ensuring the platform supports their evolving needs.
Provide guidance on best practices for query optimization, cost efficiency, and secure data access.
Requirements:
At least 4 years of experience in data engineering, DevOps, or data platform operations roles.
Expert proficiency in Snowflake: 2+ years of deep, hands-on experience with Snowflake setup, administration, security, warehouse management, performance tuning, and cost management.
Programming: Expertise in SQL and proficiency in Python for data transformation and operational scripting.
Experience implementing CI/CD pipelines (preferably GitLab) for data/analytics workloads.
Hands-on experience with modern data environments (cloud warehouses, dbt, orchestration tools).
Ability to work effectively in a fast-paced and dynamic environment.
Bachelor's degree in a relevant field.
This position is open to all candidates.
 
8471910
24/12/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an exceptional Big Data Engineer to join our R&D team. This is a unique opportunity for a recent graduate or early-career engineer to enter the world of Big Data. You will work alongside the best engineers and scientists in the industry to develop systems that process and analyze data from around the digital world.
So what will you be doing all day?
Learn and Build: Assist in the design and implementation of high-scale systems using a variety of technologies, mentored by senior engineers.
Collaborate: Work in a data research team alongside data engineers, data scientists, and data analysts to tackle data challenges.
Optimize: Help improve the existing infrastructure of code and data pipelines and learn how to identify and eliminate bottlenecks.
Innovate: Experiment with various technologies in the domain of Machine Learning and big data processing.
Monitor: Assist in maintaining monitoring infrastructure to ensure smooth data ingestion and calculation.
Requirements:
Education: BSc degree in Computer Science, Engineering, or a related technical field (Recent graduates are welcome).
Programming: Proficiency in one or more of the following languages: Python, Java, or Scala.
Fundamentals: Strong grasp of Computer Science fundamentals, including Data Structures, Design Patterns, and Object-Oriented Programming.
Soft Skills: Excellent communication skills with the ability to engage in dialogue within data teams.
Mindset: Passionate about data, holds a strong sense of ownership, and is comfortable in a fast-paced environment.
Advantages (Not required, but great to have)
Previous internship or work experience in software or data engineering.
Basic understanding or familiarity with CI/CD practices and Git.
Familiarity with containerization technologies like Docker and Kubernetes.
Familiarity with Cloud providers (AWS / Azure / GCP) or Big Data frameworks (Spark, Airflow, Kafka).
This position is open to all candidates.
 
8471696
24/12/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Big Data Engineer to develop and integrate systems that retrieve, process, and analyze data from around the digital world, generating customer-facing data. This role will report to our Team Manager, R&D.
Why is this role so important?
We are a data-focused company, and data is the heart of our business.
As a big data engineer, you will work at the very core of the company, designing and implementing complex, high-scale systems to retrieve and analyze data from millions of digital users.
Your role as a big data engineer will give you the opportunity to use the most cutting-edge technologies and best practices to solve complex technical problems while demonstrating technical leadership.
So, what will you be doing all day?
Your role as part of the R&D team means your daily responsibilities may include:
Design and implement complex, high-scale systems using a large variety of technologies.
You will work in a data research team alongside other data engineers, data scientists and data analysts. Together you will tackle complex data challenges and bring new solutions and algorithms to production.
Contribute and improve the existing infrastructure of code and data pipelines, constantly exploring new technologies and eliminating bottlenecks.
You will experiment with various technologies in the domain of Machine Learning and big data processing.
You will work on a monitoring infrastructure for our data pipelines to ensure smooth and reliable data ingestion and calculation.
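The monitoring work described above can be illustrated with a toy ingestion-volume check: alert when today's record count deviates sharply from the trailing average. The tolerance and the counts are invented for the example; real monitoring would use the team's actual metrics store.

```python
# Toy monitoring check for data ingestion: alert when today's record count
# deviates sharply from the trailing mean. Threshold is illustrative only.
def volume_alert(history: list[int], today: int,
                 tolerance: float = 0.5) -> bool:
    """True if today's count is outside +/- tolerance of the trailing mean."""
    if not history:
        return False  # no baseline yet, stay quiet
    mean = sum(history) / len(history)
    return abs(today - mean) > tolerance * mean

print(volume_alert([1000, 1100, 950], 400))   # True  (large drop)
print(volume_alert([1000, 1100, 950], 1020))  # False (within range)
```

A simple relative-deviation rule like this catches silent pipeline failures (a source that stops sending) without hand-tuning absolute thresholds per dataset.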
Requirements:
Passionate about data.
Holds a BSc degree in Computer Science/Engineering or a related technical field of study.
Has at least 4 years of software or data engineering development experience in one or more of the following programming languages: Python, Java, or Scala.
Has strong programming skills and knowledge of Data Structures, Design Patterns, and Object-Oriented Programming.
Has good understanding and experience of CI/CD practices and Git.
Excellent communication skills with the ability to maintain an ongoing dialogue between and within data teams.
Can easily prioritize tasks and work independently and with others.
Conveys a strong sense of ownership over the products of the team.
Is comfortable working in a fast-paced dynamic environment.
Advantage:
Has experience with containerization technologies like Docker and Kubernetes.
Experience in designing and productizing complex big data pipelines.
Familiar with a cloud provider (AWS / Azure / GCP).
This position is open to all candidates.
 
8471314
24/12/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an Analytics Engineer to join our team and play a key role in shaping how data drives our product and business decisions.
This role is perfect for someone who enjoys working at the intersection of data, product, and strategy. While Product Analysts focus on turning data into insights, you'll focus on building the strong data foundations that make those insights possible. You won't just run queries; you will design the data architecture and own the "source of truth" tables that power our strategic decision-making.
You'll work closely with our Growth and Solutions teams, helping them move faster and smarter by making sure the data behind Generative AI, Data-as-a-Service (DaaS), and advanced product models is clear, reliable, and easy to use. Your work will have a direct impact on how we build, scale, and innovate our products.
What You'll Do
Define the Source of Truth: Take raw, complex data and transform it into clean, well-structured tables that Product Analysts and Business Leads can use for high-stakes decision-making.
Translate Strategy into Logic: Work with Product, Growth, and Solutions leads to turn abstract concepts (like "Activation," "Retention," or "Feature Adoption") into precise SQL definitions and automated datasets.
Enable High-Tech Initiatives: Partner with our AI and DaaS specialists to ensure they have the structured data foundations they need to build models and external data products.
Optimize for Usability: Ensure our data is not just "there," but easy to use. You will design the data logic that powers our most important product dashboards and growth funnels.
Maintain Data Integrity: Act as the guardian of our metrics. You will ensure that the numbers used across our product and business reports are consistent, reliable, and logical.
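To make the "translate strategy into logic" point above concrete, here is a minimal pure-Python sketch of a precise week-1 retention definition. The event shapes and the 7-13 day window are assumptions for illustration; in this role the equivalent logic would live in governed SQL models, not application code.

```python
# Minimal sketch: turning an abstract concept ("week-1 retention") into a
# precise, testable definition. Field names and the window are hypothetical.
from datetime import date, timedelta

def week1_retention(signups: dict, activity: list) -> float:
    """Share of signed-up users active 7-13 days after their signup date.

    signups:  {user_id: signup_date}
    activity: [(user_id, activity_date), ...]
    """
    retained = {
        user for user, day in activity
        if user in signups
        and timedelta(days=7) <= day - signups[user] <= timedelta(days=13)
    }
    return len(retained) / len(signups) if signups else 0.0

signups = {"u1": date(2025, 1, 1), "u2": date(2025, 1, 1)}
activity = [("u1", date(2025, 1, 9)),   # day 8: counts as retained
            ("u2", date(2025, 1, 2))]   # day 1: too early to count
print(week1_retention(signups, activity))  # 0.5
```

Pinning the window down in code (or SQL) is exactly what keeps "retention" consistent across dashboards, instead of each analyst re-deriving it slightly differently.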
Requirements:
Expert SQL Mastery: You are a SQL power-user. You enjoy solving complex logic puzzles using code and care deeply about query efficiency and data accuracy.
The "Bridge" Mindset: You can sit in a meeting with a Product Manager to understand a business need, and then translate that into a technical data structure that serves that need.
Logical Architecture: You have a natural talent for organizing information. You know how to build a table that is intuitive and easy for other analysts to query.
Product & Business Acumen: You understand SaaS metrics (ARR, funnels, activation, etc.) and how data logic impacts product growth and strategy.
Experience with Analytics Tools: Proficiency in BI tools (Looker, Tableau, etc.) and a strong understanding of how data flows from technical logs to the end-user interface.
Degree: B.Sc. in Industrial Engineering, Information Systems, Economics, Computer Science or a related quantitative field.
Experience: 1+ years of prior experience in a relevant analytics/technical role.
This position is open to all candidates.
 
8471303