Location: Tel Aviv-Yafo
Job Type: Full Time
Senior Data Engineer
About the Role
Our Senior Data Engineer will play an essential role in building the underlying infrastructure for collecting, storing, processing, and analyzing large sets of data, collaborating with researchers, architects, and engineers to design and build high-quality data processing for our flows.
In this role, you will be responsible for end-to-end development of the data pipeline and data models, working with major data flows that include structured and unstructured data. You will also be responsible for operating parts of our production system. Your focus will be on developing and integrating systems that retrieve and analyze data that influence people's lives. This role is based in our Tel Aviv office and is hybrid, with at least two days per week in the office.
Our Technologies stack: Python, Spark, Airflow, DBT, Kafka, AWS, Snowflake, Docker, Kubernetes, MongoDB, Redis, Postgres, Elasticsearch, and more.
The ideal candidate will be:
A technology enthusiast who loves data and gets a shiver of excitement from tech innovations.
Eager to know how things work, with an even greater desire to improve them.
Intellectually curious, finding unusual ways to solve problems.
Comfortable taking on challenges and learning new technologies.
Comfortable working in a fast-paced dynamic environment.
Requirements:
6+ years of experience in designing and implementing server-side Data solutions.
Highly experienced with CI/CD pipelines and using Terraform in data platforms.
Highly experienced with Spark and Python.
Experience with AWS ecosystem.
Experience with DWH solutions (e.g. Snowflake, Redshift, Databricks).
Experience with Kubernetes in Production.
Experience implementing GenAI into data flows - Advantage.
Experience with Apache Airflow - Advantage.
This position is open to all candidates.
Job ID: 8534209
Location: Tel Aviv-Yafo
Job Type: Full Time
We are building the next generation of digital heart-health products and our data platform is the foundation.
We're looking for a Data Platform Team Lead to own and evolve an AI-first, cloud-native data platform that already:
Serves 100+ data users across the company
Powers ML models impacting thousands of users every day
Supports production systems in a company that literally saves lives
You will lead a growing team of 4 data engineers and 1 BI developer, and work in close, day-to-day partnership with Product, Analytics, Data Science & Engineering.
Our future is serving millions of users across multiple products in the heart-health ecosystem. This role owns the platform that will scale us there. We are intentionally building an AI-first / agentic data platform.
That means:
Automating table creation, validation, and testing
Agent-driven monitoring for data quality, freshness, and failures
Using agents to generate and maintain documentation
Reducing manual operational overhead so humans focus on architecture, leverage, and product impact
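The automation bullets above can be made concrete with a small sketch; the schema format, table name, and type mapping below are hypothetical illustrations, not part of the posting:

```python
# Hypothetical sketch of "automating table creation, validation, and testing":
# render DDL from a declared schema and validate rows against it.
# The schema dict format and column names are assumptions for illustration.

SCHEMA = {
    "patient_id": "VARCHAR",
    "heart_rate": "INTEGER",
    "recorded_at": "TIMESTAMP",
}

TYPE_CHECKS = {
    "VARCHAR": lambda v: isinstance(v, str),
    "INTEGER": lambda v: isinstance(v, int),
    "TIMESTAMP": lambda v: isinstance(v, str),  # ISO-8601 string in this sketch
}

def create_table_ddl(table: str, schema: dict) -> str:
    """Render a CREATE TABLE statement from the schema dict."""
    cols = ",\n  ".join(f"{name} {dtype}" for name, dtype in schema.items())
    return f"CREATE TABLE IF NOT EXISTS {table} (\n  {cols}\n);"

def validate_row(row: dict, schema: dict) -> list:
    """Return a list of human-readable validation errors (empty if valid)."""
    errors = [f"missing column: {c}" for c in schema if c not in row]
    for col, value in row.items():
        if col not in schema:
            errors.append(f"unexpected column: {col}")
        elif not TYPE_CHECKS[schema[col]](value):
            errors.append(f"bad type for {col}: {value!r}")
    return errors
```

An agent-driven platform would generate and run checks like these automatically instead of a human writing them per table.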
You will have full organizational support to rethink how a modern data platform should work in an AI-native environment, not incremental improvements, but fundamental design decisions.
Our Technologies stack: Python, Spark, Airflow, DBT, Kafka, AWS (Glue, EMR, S3, Athena and more), Snowflake, Docker, Kubernetes, MongoDB, Redis, Postgres, Elasticsearch, and evolving
Responsibilities:
Platform Leadership & Team Management: Lead, mentor, and grow a team of senior data engineers and BI developers. Set a high bar for technical quality, ownership, and delivery.
Core Data Platform Architecture: Own the design, evolution, and reliability of our cloud-native data platform, balancing scalability, cost, security, and developer velocity.
Deep Collaboration with R&D: Work closely with product, engineering, and ML teams to ensure the data platform enables fast experimentation, production ML, and new product development.
Production-Grade Data Systems: Oversee end-to-end data pipelines, streaming and batch processing, semantic layers, and analytics foundations that serve the entire organization.
Operational Excellence: Ensure data quality, freshness, observability, and incident response meet the standards of a mission-critical system.
Requirements:
7+ years of experience designing and operating production-scale data systems
4+ years of experience leading data or platform engineering teams
Proven experience building and operating cloud-native data platforms on AWS and Snowflake
Deep understanding of trade-offs across reliability, cost, performance, and security
Experience owning shared platforms supporting multiple teams and products
Treats automation and AI as practical engineering leverage rather than buzzwords
Strong experience with the AWS ecosystem and Snowflake
Hands-on experience with Python and modern data tooling
Experience with distributed systems, including Spark, streaming, and large-scale batch processing
Production experience with Kubernetes
Experience collaborating closely with analytics, engineering, data science, and product teams
Comfort leading architectural discussions and making complex technical trade-offs
Experience working with US-based teams is considered an advantage.
This position is open to all candidates.
Job ID: 8534205
05/02/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We act as the central nervous system for engineering, enabling platform teams to unify their stack and expose it as a governed layer through golden paths for developers and AI agents.
By combining rich engineering context, workflows, and actions, we help organizations transition from manual processes to autonomous, AI-assisted engineering workflows while maintaining control and accountability.
As a product-led company, we believe in building world-class platforms that fundamentally shape how modern engineering organizations operate.
What you'll do:
Lead the design and development of scalable and efficient data lake solutions that account for high-volume data coming from a large number of sources, both pre-determined and custom.
Utilize advanced data modeling techniques to create robust data structures supporting reporting and analytics needs.
Implement ETL/ELT processes to assist in the extraction, transformation, and loading of data from various sources into a data lake that will serve our company's users.
Identify and address performance bottlenecks within our data warehouse, optimize queries and processes, and enhance data retrieval efficiency.
Collaborate with cross-functional teams (product, analytics, and R&D) to enhance our company's data solutions.
Who you'll work with:
You'll be joining a collaborative and dynamic team of talented and experienced developers, where creativity and innovation thrive.
You'll closely collaborate with our dedicated Product Managers and Designers, working hand in hand to bring our developer portal product to life.
Additionally, you will have the opportunity to work closely with our customers and engage with our product community. Your insights and interactions with them will play an important role in ensuring we deliver the best product possible.
Together, we'll continue to empower platform engineers and developers worldwide, providing them with the tools they need to create seamless and robust developer portals. Join us in our mission to revolutionize the developer experience!
Requirements:
5+ years of experience in a Data Engineering role
Expertise in building scalable pipelines and ETL/ELT processes, with proven experience with data modeling
Expert-level proficiency in SQL and experience with large-scale datasets
Strong experience with Snowflake
Strong experience with cloud data platforms and storage solutions such as AWS S3, or Redshift
Hands-on experience with ETL/ELT tools and orchestration frameworks such as Apache Airflow and dbt
Experience with Python and software development
Strong analytical and storytelling capabilities, with a proven ability to translate data into actionable insights for business users
Collaborative mindset with experience working cross-functionally with data engineers and product managers
Excellent communication and documentation skills, including the ability to write clear data definitions, dashboard guides, and metric logic
Advantages:
Experience in NodeJs + Typescript
Experience with streaming data technologies such as Kafka or Kinesis
Familiarity with containerization tools such as Docker and Kubernetes
Knowledge of data governance and data security practices.
This position is open to all candidates.
Job ID: 8533929
05/02/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a Senior Data Engineer to join our Data team. The Data team develops and maintains the infrastructure for internal data and product analytics. In this role, you will design and manage complex data pipelines and work closely with data analysts, software engineers, and other stakeholders to continuously improve data processes and solutions.
What you will do:
Architect, develop, and maintain scalable, end-to-end data pipelines from diverse data sources
Monitor and maintain data systems, ensuring uptime, reliability, and stability
Provide technical expertise and insights to shape overall data strategy and best practices
Be a strong team player with excellent communication skills.
Requirements:
5+ years of professional experience in data engineering, with a proven track record in building and managing large-scale data pipelines
3+ years of experience with Python
Demonstrated expertise in designing and implementing data lake/warehouse solutions
Strong background in ETL processes, data integration, and big data technologies
Proficiency in data modeling, business logic processes, and data warehouse design
Preferred Qualifications
Background in backend development
Experience with Elasticsearch
Familiarity with modern data processing frameworks and tools such as Spark, Kubernetes, Docker
Bachelor's degree in Computer Science, Industrial Engineering, or a related analytical discipline (or equivalent experience).
This position is open to all candidates.
Job ID: 8533855
04/02/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an Analytics Engineer to join our team and play a key role in shaping how data drives our product and business decisions.
This role is perfect for someone who enjoys working at the intersection of data, product, and strategy. While Product Analysts focus on turning data into insights, you'll focus on building the strong data foundations that make those insights possible. You won't just run queries; you will design the data architecture and own the "source of truth" tables that power our strategic decision-making.
You'll work closely with our Growth and Solutions teams, helping them move faster and smarter by making sure the data behind Generative AI, Data-as-a-Service (DaaS), and advanced product models is clear, reliable, and easy to use. Your work will have a direct impact on how we build, scale, and innovate our products.
What You'll Do:
Define the Source of Truth: Take raw, complex data and transform it into clean, well-structured tables that Product Analysts and Business Leads can use for high-stakes decision-making.
Translate Strategy into Logic: Work with Product, Growth, and Solutions leads to turn abstract concepts (like "Activation," "Retention," or "Feature Adoption") into precise SQL definitions and automated datasets.
Enable High-Tech Initiatives: Partner with our AI and DaaS specialists to ensure they have the structured data foundations they need to build models and external data products.
Optimize for Usability: Ensure our data is not just "there," but easy to use. You will design the data logic that powers our most important product dashboards and growth funnels.
Maintain Data Integrity: Act as the guardian of our metrics. You will ensure that the numbers used across our product and business reports are consistent, reliable, and logical.
Requirements:
Expert SQL Mastery: You are a SQL power-user. You enjoy solving complex logic puzzles using code and care deeply about query efficiency and data accuracy.
The "Bridge" Mindset: You can sit in a meeting with a Product Manager to understand a business need, and then translate that into a technical data structure that serves that need.
Logical Architecture: You have a natural talent for organizing information. You know how to build a table that is intuitive and easy for other analysts to query.
Product & Business Acumen: You understand SaaS metrics (ARR, funnels, activation, etc.) and how data logic impacts product growth and strategy.
Experience with Analytics Tools: Proficiency in BI tools (Looker, Tableau, etc.) and a strong understanding of how data flows from technical logs to the end-user interface.
Degree: B.Sc. in Industrial Engineering, Information Systems, Economics, Computer Science or a related quantitative field.
Experience: 1+ years of prior experience in a relevant analytics/technical role
This position is open to all candidates.
Job ID: 8531875
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Data Engineer to join our team and help shape a modern, scalable data platform. You'll work with cutting-edge AWS technologies, Spark, and Iceberg to build pipelines that keep our data reliable, discoverable, and ready for analytics.

What's the Job?
Design and maintain scalable data pipelines on AWS (EMR, S3, Glue, Iceberg).
Transform raw, semi-structured data into analytics-ready datasets using Spark.
Automate schema management, validation, and quality checks.
Optimize performance and costs with smart partitioning, tuning, and monitoring.
Research and evaluate new technologies, proposing solutions that improve scalability and efficiency.
Plan and execute complex data projects with foresight and attention to long-term maintainability.
Collaborate with engineers, analysts, and stakeholders to deliver trusted data for reporting and dashboards.
Contribute to CI/CD practices, testing, and automation.
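As a rough illustration of the "automate schema management, validation, and quality checks" step above, a batch quality gate might look like the following; the column names and null-ratio threshold are assumptions, and in practice this logic would typically run in Spark over Iceberg tables rather than plain Python:

```python
# Hypothetical data-quality gate: summarize a batch and reject it before
# loading if required columns have too many nulls or keys are duplicated.
# Thresholds and field names are illustrative assumptions.

def quality_report(records: list, key: str, required: list) -> dict:
    """Summarize null counts and duplicate keys for a batch of dict records."""
    nulls = {c: sum(1 for r in records if r.get(c) is None) for c in required}
    seen, dupes = set(), 0
    for r in records:
        k = r.get(key)
        if k in seen:
            dupes += 1
        seen.add(k)
    return {"rows": len(records), "nulls": nulls, "duplicate_keys": dupes}

def passes_gate(report: dict, max_null_ratio: float = 0.01) -> bool:
    """Fail the batch on any duplicate key or a null ratio above the threshold."""
    if report["duplicate_keys"] > 0:
        return False
    rows = max(report["rows"], 1)
    return all(n / rows <= max_null_ratio for n in report["nulls"].values())
```

A CI/CD pipeline could run the same gate against a sample of each dataset on every deploy, which is one way the validation and testing bullets tie together.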
Requirements:
Strong coding skills in Python (PySpark, pandas, boto3).
Experience with big data frameworks (Spark) and schema evolution.
Knowledge of lakehouse technologies (especially Apache Iceberg).
Familiarity with AWS services: EMR, S3, Glue, Athena.
Experience with orchestration tools like Airflow.
Solid understanding of CI/CD and version control (GitHub Actions).
Ability to research, evaluate, and plan ahead for new solutions and complex projects.

Nice to have:
Experience with MongoDB or other NoSQL databases.
Experience with stream processing (e.g., Kafka, Kinesis, Spark Structured Streaming).
Ability to create visualized dashboards and work with Looker (Enterprise).
Infrastructure-as-code (Terraform).
Strong debugging and troubleshooting skills for distributed systems.
This position is open to all candidates.
Job ID: 8531804
04/02/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Senior Data Engineer on the Data Collection Ingest team to contribute to the design and coding of data ingestion services and pipelines. This role involves working as part of a team that handles millions of requests per minute across multiple servers and is responsible for a wide range of data pipelines, processing billions of events each day.
Why is this role so important?
As the most trusted platform for measuring online behavior, millions of people rely on our insights daily as the ground truth for their knowledge of the digital world. Producing these insights requires large-scale raw data to be ingested reliably at high scale to provide stable signals for analysis. As a Data Engineer, you will have the opportunity to do hands-on work and own the raw data ingestion pipeline end-to-end. Your work will have a direct impact on the quality and reliability of our data and of the insights our products deliver to our customers.
Requirements:
5+ years of experience in developing code for big data infrastructure. Proficiency in technologies such as Databricks, Spark, Airflow, Firehose, SQS, or other similar tools.
Proven experience working with high scale on AWS or any other cloud provider. Experience in architecture and design of large-scale and high performance production systems.
Comfortable taking on challenges and learning new technologies.
Excellent communication skills, with the ability to maintain a constant dialog between teams.
Ability to take business requirements and translate them to technical alternatives by performing risk management and evaluating tradeoffs.
This position is open to all candidates.
Job ID: 8531695
04/02/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Big Data Engineer to develop and integrate systems that retrieve, process, and analyze data from around the digital world, generating customer-facing data. This role will report to our Team Manager, R&D.
Why is this role so important?
We are a data-focused company, and data is the heart of our business.
As a big data engineer, you will work at the very core of the company, designing and implementing complex, high-scale systems to retrieve and analyze data from millions of digital users.
Your role as a big data engineer will give you the opportunity to use the most cutting-edge technologies and best practices to solve complex technical problems while demonstrating technical leadership.
Requirements:
Passionate about data.
Holds a B.Sc. degree in Computer Science/Engineering or a related technical field of study.
Has at least 5 years of software or data engineering development experience in one or more of the following programming languages: Python, Java, or Scala.
Has strong programming skills and knowledge of Data Structures, Design Patterns and Object Oriented Programming.
Has good understanding and experience of CI/CD practices and Git.
Excellent communication skills, with the ability to maintain a constant dialog between and within data teams.
Can easily prioritize tasks and work independently and with others.
Conveys a strong sense of ownership over the products of the team.
Is comfortable working in a fast-paced dynamic environment.
Advantage:
Has experience with containerization technologies like Docker and Kubernetes.
Experience in designing and productizing complex big data pipelines.
Familiar with a cloud provider (AWS / Azure / GCP).
This position is open to all candidates.
Job ID: 8531681
04/02/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Senior Data Scientist to join our R&D department!
Why is this role so important?
We are a data-focused company, and our unique AI and machine learning capabilities are the center of our business.
As part of this role, you will create data models that help analyze the petabytes of data we receive from various sources, and research and develop new features and capabilities for our product solutions.
As a data scientist, you will work on the very core of the company. Part of your role will be to turn raw data into usable metrics and develop new models and statistical algorithms to support out-of-the box requests from customers who want custom digital insights.
Requirements:
Holds an M.Sc. in Computer Science/Mathematics/Physics or another relevant field - Required
Has 5+ years of experience with statistical and machine learning tools - Python, R, etc.
Strong communicative and verbal abilities to lead and guide customers through the logic of custom-built models
Has previous experience developing machine learning/ image processing/NLP or similar algorithms
Demonstrated experience utilizing probability theory and statistics
Experience with Big Data tools and cloud infrastructure; PySpark, AWS (big advantage)
Strong analytical skills; experience analyzing and processing tabular data, and extracting insights (big advantage)
This position is open to all candidates.
Job ID: 8531648
04/02/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
Responsibilities
Design, implement, and maintain robust data pipelines and ETL/ELT processes on GCP (BigQuery, Dataflow, Pub/Sub, etc.).
Build, orchestrate, and monitor workflows using Apache Airflow / Cloud Composer.
Develop scalable data models to support analytics, reporting, and operational workloads.
Apply software engineering best practices to data engineering: modular design, code reuse, testing, and version control.
Manage GCP resources (BigQuery reservations, Cloud Composer/Airflow DAGs, Cloud Storage, Dataplex, IAM).
Optimize data storage, query performance, and cost through partitioning, clustering, caching, and monitoring.
Collaborate with DevOps/DataOps to ensure data infrastructure is secure, reliable, and compliant.
Partner with analysts and data scientists to understand requirements and translate them into efficient data solutions.
Mentor junior engineers, provide code reviews, and promote engineering best practices.
Act as a subject matter expert for GCP data engineering tools and services.
Define and enforce standards for metadata, cataloging, and data documentation.
Implement monitoring and alerting for pipeline health, data freshness, and data quality.
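The monitoring-and-alerting bullet above could be sketched as a simple freshness check; the table names and SLA windows here are illustrative assumptions, and in practice the load timestamps would come from BigQuery metadata or an Airflow sensor:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical freshness monitor: flag tables whose most recent load
# breaches an SLA window. Table names and SLAs are assumptions.

FRESHNESS_SLA = {
    "events": timedelta(hours=2),
    "billing": timedelta(hours=24),
}

def stale_tables(last_loaded_at: dict, now: datetime) -> list:
    """Return the tables whose latest load is older than their SLA allows."""
    return [
        table
        for table, sla in FRESHNESS_SLA.items()
        if now - last_loaded_at[table] > sla
    ]
```

An alerting job would run a check like this on a schedule (e.g., a Cloud Composer DAG) and page the on-call engineer for any table it returns.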
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
6+ years of professional experience in data engineering or similar roles, with 3+ years of hands-on work in a cloud environment, preferably on GCP.
Strong proficiency with BigQuery, Dataflow (Apache Beam), Pub/Sub, and Cloud Composer (Airflow).
Expert-level Python development skills, including object-oriented programming (OOP), testing, and code optimization.
Strong data modeling skills (dimensional modeling, star/snowflake schemas, normalized/denormalized designs).
Solid SQL expertise and experience with data warehousing concepts.
Familiarity with CI/CD, Terraform/Infrastructure as Code, and modern data observability tools.
Exposure to AI tools and methodologies (e.g., Vertex AI).
Strong problem-solving and analytical skills.
Ability to communicate complex technical concepts to non-technical stakeholders.
Experience working in agile, cross-functional teams.

Preferred Skills (Nice to Have):
Experience with Google Cloud Platform (GCP).
Experience with Dataplex for data cataloging and governance.
Knowledge of streaming technologies (Kafka, Confluent).
Experience with Looker.
Cloud certifications (Google Professional Data Engineer, Google Cloud Architect).
This position is open to all candidates.
Job ID: 8531425
04/02/2026
Location: Jerusalem
Job Type: Full Time
We're in search of an experienced and skilled Senior Data Engineer to join our growing data team. As part of our data team, you'll be at the forefront of crafting a groundbreaking solution that leverages cutting-edge technology to combat fraud. The ideal candidate will have a strong background in designing and implementing large-scale data solutions, with the potential to grow into a leadership role. This position requires a deep understanding of modern data architectures, cloud technologies, and the ability to drive technical initiatives that align with business objectives.
Our ultimate goal is to equip our clients with resilient safeguards against chargebacks, empowering them to safeguard their revenue and optimize their profitability. Join us on this thrilling mission to redefine the battle against fraud.
Your Arena
Design, develop, and maintain scalable, robust data pipelines and ETL processes
Architect and implement complex data models across various storage solutions
Collaborate with R&D teams, data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality solutions
Ensure data quality, consistency, security, and compliance across all data systems
Play a key role in defining and implementing data strategies that drive business value
Contribute to the continuous improvement of our data architecture and processes
Champion and implement data engineering best practices across the R&D organization, serving as a technical expert and go-to resource for data-related questions and challenges
Participate in and sometimes lead code reviews to maintain high coding standards
Troubleshoot and resolve complex data-related issues in production environments
Evaluate and recommend new technologies and methodologies to improve our data infrastructure.
Requirements:
5+ years of experience in data engineering, with strong proficiency in Python and software engineering principles - Must
Extensive experience with GraphDB - MUST
Extensive experience with AWS, GCP, or Azure, and with cloud-native architectures - Must
Deep knowledge of both relational (e.g., PostgreSQL) and NoSQL databases - Must
Designing and implementing data warehouses and data lakes - Must
Strong understanding of data modeling techniques - Must
Expertise in data manipulation libraries (e.g., Pandas) and big data processing frameworks - Must
Experience with data validation tools such as Pydantic & Great Expectations - Must
Proficiency in writing and maintaining unit tests (e.g., Pytest) and integration tests - Must
Nice-to-Haves
Apache Iceberg - Experience building, managing and maintaining Iceberg lakehouse architecture with S3 storage and AWS Glue catalog - Strong Advantage
Apache Spark - Proficiency in optimizing Spark jobs, understanding partitioning strategies, and leveraging core framework capabilities for large-scale data processing - Strong Advantage
Modern data stack tools - DBT, DuckDB, Dagster or any other Data orchestration tool (e.g., Apache Airflow, Prefect) - Advantage
Designing and developing backend systems, including RESTful API design and implementation, microservices architecture, event-driven systems, RabbitMQ, and Apache Kafka - Advantage
Containerization technologies: Docker, Kubernetes, and IaC (e.g., Terraform) - Advantage
Stream processing technologies (e.g., Apache Kafka, Apache Flink) - Advantage
Understanding of compliance requirements (e.g., GDPR, CCPA) - Advantage
Experience mentoring junior engineers or leading small project teams
Excellent communication skills with the ability to explain complex technical concepts to various audiences
Demonstrated ability to work independently and lead technical initiatives
Relevant certifications in cloud platforms or data technologies.
This position is open to all candidates.
Job ID: 8531324
04/02/2026
Location: Jerusalem
Job Type: Full Time
We are seeking an experienced Senior Data Platform Engineer to design and scale the robust, cost-efficient infrastructure powering our groundbreaking fraud prevention solution. In this role, you will architect distributed systems and cloud-native technologies to safeguard our clients' revenue while driving technical initiatives that align with business objectives and operational efficiency.
Our ultimate goal is to equip our clients with resilient safeguards against chargebacks, empowering them to safeguard their revenue and optimize their profitability. Join us on this thrilling mission to redefine the battle against fraud.
Your Arena
Infrastructure & FinOps: Design scalable, robust backend services while owning cloud cost management to ensure maximum resource efficiency.
High-Performance Engineering: Architect distributed systems and real-time pipelines capable of processing millions of daily transactions.
Operational Excellence: Champion Infrastructure-as-Code (IaC), security, and observability best practices across the R&D organization.
Leadership: Lead technical initiatives, mentor engineers, and drive cross-functional collaboration to solve complex infrastructure challenges.
Requirements:
Experience: 5+ years of experience in data platform engineering, backend engineering, or infrastructure engineering.
Language Proficiency: Strong proficiency in Python and software engineering principles.
Cloud Native: Extensive experience with AWS, GCP, or Azure and cloud-native architectures.
Databases: Deep knowledge of both relational (e.g., PostgreSQL) and NoSQL databases, including performance optimization, cost tuning, and scaling strategies.
Architecture: Strong experience designing and implementing RESTful APIs, microservices architecture, and event-driven systems.
Containerization & IaC: Experience with containerization technologies (Docker, Kubernetes) and Infrastructure-as-Code (e.g., Terraform, CloudFormation).
System Design: Strong understanding of distributed systems principles, concurrency, and scalability patterns.
Nice-to-Haves
Strong Advantage: Apache Iceberg (Lakehouse/S3/Glue), Apache Spark (Optimization), Message Queues (Kafka/Kinesis), Graph Databases (Experience with schema design, cluster setup, and ongoing management of engines like Amazon Neptune or Neo4j).
Tech Stack: Orchestration (Temporal/Dagster/Airflow), Modern Data Stack (dbt/DuckDB), Streaming (Flink/Kafka Streams), Observability (Datadog/Grafana).
Skills: FinOps (Cost Explorer/Spot instances), CI/CD & DevOps, Data Governance (GDPR), Pydantic, and Mentorship/Leadership experience.
This position is open to all candidates.
 
Job ID: 8531320
Posted: 04/02/2026
Location: Jerusalem
Job Type: Full Time
We're in search of an exceptional hands-on Data Scientist who can bridge data infrastructure, fraud analytics, and machine learning to enhance our fraud detection capabilities. This role requires someone who can drive projects from ideation to implementation, leading these efforts while thinking strategically about how we detect and prevent fraud at scale. Our ultimate goal is to equip our clients with resilient safeguards against chargebacks, empowering them to protect their revenue and optimize their profitability. Join us on this thrilling mission to redefine the battle against fraud.
Your Arena
Develop and enhance fraud detection models from concept to production, focusing on data normalization, network analysis, and risk scoring.
Take ownership to identify, analyze, and implement solutions for fraud patterns and behavioral anomalies.
Design, build and maintain data science pipelines and fraud intelligence systems to improve detection accuracy.
Collaborate with product, engineering, and risk teams to implement fraud prevention strategies.
Own projects end-to-end - balancing short-term wins with long-term strategy.
Requirements:
4+ years of hands-on experience in fraud analytics, data science, or risk modeling - Must
Proven experience in the Fintech/Fraud Prevention Domain - Must
Proven ability to take projects from A to Z in building data infrastructure, anomaly detection, and graph-based fraud detection solutions - Must
Strong practical proficiency in Python, SQL, MLOps and fraud-related data tools - Must
Strong background in e-commerce or payments - Advantage
A self-starter mentality - someone who identifies opportunities, takes initiative, and drives improvements with minimal direction.
This position is open to all candidates.
 
Job ID: 8531316
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a Senior Data Analyst to join the BI Team, with a passion for technology, to solve unique problems within our measurements. Our team solves many challenges in turning raw data into accurate signals and extracting insights to improve the products. Appropriate candidates have an understanding of multiple components across the big data stack: large-scale data analysis, data quality, statistics, and visualization. If you are a results-focused researcher and a quick learner, this role is for you. This position is located in our Tel Aviv office.
Responsibilities:
Function as the data authority in the Monetization domain - performing ad-hoc research and analysis on the company's rich data to support unique and strategic business questions.
Define, analyze, and monitor KPIs and OKR metrics to continually improve company products.
Analyze large, complex data sets to address strategic and operational business questions.
Build dashboards and visualizations to answer key questions for the company's various in-house teams, specifically for the Monetization Team.
Perform qualitative, quantitative, and statistical analysis.
Bring a true passion for understanding data and uncovering insights.
Provide actionable recommendations and conclusions from data to shape features and future product road maps.
Requirements:
At least 3 years' experience in data analysis, preferably in a complex business environment.
Experience with User Behavior analysis methods and models: Segmentation, Retention, Ad optimization, User activity funnels, etc.
Strong SQL skills, with experience querying large, complex data sets in cloud data warehouses (Redshift, BigQuery, or Snowflake).
Experience working with BI tools for data visualization (Preferably Looker).
A strong foundation in statistical methods and data modeling.
BA / BSc in Industrial Engineering and Management / Information Systems Engineering / Economics / Statistics / similar background.
Quick learner, proactive/can-do attitude with high attention to detail.
A strong sense of ownership, leading data initiatives from end to end independently.
Ability to work in a fast-paced environment.
Creative and critical thinking skills.
A team player with strong analytical and problem-solving skills as well as a business sense.
Excellent communication skills with fluent English (written and verbal); a strong storyteller who can communicate insights and recommendations effectively.
Advantages:
Previous experience as a Monetization or Product Analyst / Close collaboration with similar teams - a big advantage.
Knowledge of Python for data prep, analysis, and statistical modeling - a big advantage.
Hands-on practical knowledge in A/B testing to facilitate data-driven decision-making processes.
Knowledge of statistics and experience using statistical packages for analyzing datasets.
Experience in a B2C tech company.
Passion for Sport.
This position is open to all candidates.
 
Job ID: 8530465
Location: Petah Tikva
Job Type: Full Time
We're looking for a seasoned Senior Data Scientist to be a hands-on technical powerhouse and strategic driver on our team. You won't just build models; you will personally architect and implement the next generation of our decision engines, owning our most complex modeling challenges from whiteboard to production.

As a senior, hands-on member of the team, you will be a go-to expert. While you'll guide and mentor other data scientists, your primary focus will be on leading by example: executing on high-impact projects that define our competitive edge and directly influence our revenue. You'll contribute to the roadmap, not just follow it.


What You'll Do

Architect & Build: Lead the architectural design and hands-on implementation of our most critical ML models, tackling complex challenges in auction dynamics, user valuation, and more at a massive scale.

Influence Strategy: Contribute to and influence the data science roadmap in collaboration with Product and Engineering, ensuring our efforts are focused on maximum business impact.

Outsmart the Competition: Lead the charge in our high-stakes auction environment, where your model's edge creates outsized financial returns. You will design and own the experimentation framework to prove your impact.

Elevate Our Technical Bar: Set best practices for modeling and coding, and mentor other data scientists through complex technical challenges, code reviews, and model design sessions.
Requirements:
You have a 4+ year track record of personally driving end-to-end data science projects that delivered significant, measurable business value.

An MSc or PhD in a quantitative field (e.g., Computer Science, Data Science, Mathematics) is a firm requirement for this role.

You have demonstrated experience mentoring data scientists and guiding projects from a technical standpoint, leading by example.

You are a strong communicator and leader, capable of influencing the team's technical direction and clearly articulating complex results to business stakeholders.

You have a deep, intuitive understanding of the Ad-Tech ecosystem, with a strong perspective on auction dynamics and how to exploit market inefficiencies.

You possess expert-level knowledge of machine learning, statistics, and experimental design, paired with expert coding skills in Python and SQL.

You have a "nose for value" and the self-motivation to consistently connect your technical work to commercial outcomes.

Expertise with Spark or other Big Data technologies is essential for success at our scale.
This position is open to all candidates.
 
Job ID: 8530075