Jobs » Data » Data Engineer

Posted 4 days ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We are one of the world's most impactful shopping recommendation companies, and we are searching for a talented Data Engineer to join our innovative team. For the past decade, we've served tens of millions of users globally, and we're looking for a passionate individual to help us continue making an impact.
What You'll Own:
Data Infrastructure
Build and operate data pipelines on GCP using Cloud Workflows, Cloud Functions, Cloud Scheduler, and BigQuery.
Create, transform, and manage datasets (Python, dbt, etc.)
Design and implement API integrations to pull data from affiliate networks, ad platforms, and internal services.
Ensure pipeline reliability, observability (monitoring, alerting, SLA management), and scalability.
Data Quality & Trust
Implement tests, freshness checks, and anomaly detection across all critical models.
Lead incident triage and root-cause analysis when data issues arise; build systems that catch problems before stakeholders do.
Maintain clear documentation of data contracts, schema definitions, and transformation logic.
ML & Data Science Collaboration
Work hand-in-hand with Data Scientists building ML models - ensuring feature tables, training datasets, and inference pipelines are clean and reproducible.
Build the data layer that feeds ML model training and evaluation, including feature stores and labeled datasets.
Requirements:
3-5+ years of Data Engineering or Backend Engineering experience with clear delivery in production systems.
Bachelor's (or higher) in Computer Science, Engineering, Mathematics, or a quantitative field (or equivalent background).
Strong Python skills: writing production-grade ETL/pipeline code, not just scripts.
SQL proficiency: complex window functions, CTEs, query optimization, and BigQuery-flavored SQL.
dbt: modeling patterns, incremental strategies, testing, and documentation.
GCP: hands-on experience with BigQuery, Cloud Functions, Cloud Workflows, or Cloud Scheduler (or equivalent cloud stack with willingness to ramp up quickly).
API integrations: building robust data ingestion from REST APIs with retries, pagination, and error handling.
Data quality mindset: you think about freshness, schema drift, nulls, and SLA before you're asked.
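The ingestion requirement above (REST APIs with retries, pagination, and error handling) can be sketched in a few lines. This is a minimal illustration only; the `fetch_page` callable and its `(items, next_cursor)` contract are assumptions for the example, not this company's actual API:

```python
import time

def fetch_all_pages(fetch_page, max_retries=3, backoff_s=0.01):
    """Pull every page from a paginated API, retrying transient failures.

    `fetch_page(cursor)` is a caller-supplied callable (hypothetical here)
    returning (items, next_cursor); next_cursor is None on the last page.
    """
    items, cursor = [], None
    while True:
        for attempt in range(max_retries):
            try:
                page, cursor = fetch_page(cursor)
                break
            except ConnectionError:
                if attempt == max_retries - 1:
                    raise  # exhausted retries: surface the error
                time.sleep(backoff_s * 2 ** attempt)  # exponential backoff
        items.extend(page)
        if cursor is None:
            return items
```

In production this would also cap total runtime and distinguish retryable status codes from permanent failures.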
Nice to Have:
Experience with affiliate marketing data (Amazon Associates, CJ, Impact, etc.) or ad platform APIs (Meta, Google Ads).
Familiarity with ML pipelines: feature engineering, training data prep, or serving infrastructure.
Event-driven architectures: Pub/Sub, Cloud Run, or equivalent.
Node.js: light scripting or serverless functions (our codebase is primarily Python, with occasional Node).
Version control and CI/CD: Git workflows, automated testing, and deployment pipelines.
Let's evolve together! If you're ready for a role where you can truly make a difference, apply now.
This position is open to all candidates.
 
30/03/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Required ML Data Engineer
Israel: Tel Aviv/ Hybrid (Israel)
R&D | Full Time | Job Id: 24792
Your Impact & Responsibilities:
As a Data Engineer - AI Technologies, you will be responsible for building and operating the data foundation that enables our LLM and ML research: from ingestion and augmentation, through labeling and quality control, to efficient data delivery for training and evaluation.
You will:
Own data pipelines for LLM training and evaluation
Design, build and maintain scalable pipelines to ingest, transform and serve large-scale text, log, code and semi-structured data from multiple products and internal systems.
Drive data augmentation and synthetic data generation
Implement and operate pipelines for data augmentation (e.g., prompt-based generation, paraphrasing, negative sampling, multi-positive pairs) in close collaboration with ML Research Engineers.
Build tagging, labeling and annotation workflows
Support human-in-the-loop labeling, active learning loops and semi-automated tagging. Work with domain experts to implement tools, schemas and processes for consistent, high-quality annotations.
Ensure data quality, observability and governance
Define and monitor data quality checks (coverage, drift, anomalies, duplicates, PII), manage dataset versions, and maintain clear documentation and lineage for training and evaluation datasets.
Optimize training data flows for efficiency and cost
Design storage layouts and access patterns that reduce training time and cost (e.g., sharding, caching, streaming). Work with ML engineers to make sure the right data arrives at the right place, in the right format.
Build and maintain data infrastructure for LLM workloads
Work with cloud and platform teams to develop robust, production-grade infrastructure: data lakes / warehouses, feature stores, vector stores, and high-throughput data services used by training jobs and offline evaluation.
Collaborate closely with ML Research Engineers and security experts
Translate modeling and security requirements into concrete data tasks: dataset design, splits, sampling strategies, and evaluation data construction for specific security use cases.
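The data quality responsibilities above (duplicates, coverage, PII) usually start life as a simple dataset audit. A minimal sketch, with the `id`/`text` field names assumed purely for illustration:

```python
from collections import Counter

def quality_report(records, key="id", required=("id", "text")):
    """Minimal dataset quality check: duplicate keys and incomplete rows.

    `records` is a list of dicts; `key` and `required` name illustrative
    fields, not the team's real schema.
    """
    keys = [r.get(key) for r in records]
    dup_keys = [k for k, c in Counter(keys).items() if c > 1]
    missing = [i for i, r in enumerate(records)
               if any(r.get(f) in (None, "") for f in required)]
    return {"rows": len(records),
            "duplicate_keys": dup_keys,
            "incomplete_rows": missing}
```

Real pipelines would extend this with drift statistics and PII pattern scans, and publish the report to monitoring rather than returning a dict.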
Requirements:
3+ years of hands-on experience as a Data Engineer or ML/Data Engineer, ideally in a product or platform team.
Strong programming skills in Python and experience with at least one additional language commonly used for data / backend (e.g., SQL, Scala, or Java).
Solid experience building ETL / ELT pipelines and batch/stream processing using tools such as Spark, Beam, Flink, Kafka, Airflow, Argo, or similar.
Experience working with cloud data platforms (e.g., AWS, GCP, Azure) and modern data storage technologies (object stores, data warehouses, data lakes).
Good understanding of data modeling, schema design, partitioning strategies and performance optimization for large datasets.
Familiarity with ML / LLM workflows: train/validation/test splits, dataset versioning, and the basics of model training and evaluation (you don't need to be the primary model researcher, but you understand what the models need from the data).
Strong software engineering practices: version control, code review, testing, CI/CD, and documentation.

Ability to work independently and in collaboration with ML engineers, researchers and security experts, and to translate high-level requirements into concrete data engineering tasks. 
Nice to Have 
Experience supporting LLM or NLP workloads, including dataset construction for pre-training / fine-tuning, or retrieval-augmented generation (RAG) pipelines. 
Familiarity with ML tooling such as experiment tracking (e.g., Weights & Biases, MLflow) and ML-focused data tooling (feature stores, vector databases). 
Background in security / cyber domains (logs, alerts, incidents, SOC workflows) or other high-volume, high-variance data environments. 
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
This role has been designed as Hybrid with an expectation that you will work on average 2 days per week from an office.

We are looking for a talented Data Engineer to help build and enhance the data platform that supports analytics, operations, and data-driven decision-making across the organization. You will work hands-on to develop scalable data pipelines, improve data models, ensure data quality, and contribute to the continuous evolution of our modern data ecosystem.

You'll collaborate closely with Senior Engineers, Analysts, Data Scientists, and stakeholders across the business to deliver reliable, well-structured, and well-governed data solutions.


What You'll Do:

Engineering & Delivery

Build, maintain, and optimize data pipelines for batch and streaming workloads.

Develop reliable data models and transformations to support analytics, reporting, and operational use cases.

Integrate new data sources, APIs, and event streams into the platform.

Implement data quality checks, testing, documentation, and monitoring.

Write clean, performant SQL and Python code.

Contribute to improving performance, scalability, and cost-efficiency across the data platform.
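A core pattern behind pipeline bullets like these is the idempotent, watermark-based incremental load. The sketch below is illustrative only, assuming numeric `updated_at` values and an `id` key rather than any real schema:

```python
def incremental_load(source_rows, target, watermark_col="updated_at"):
    """Watermark-based incremental batch load, a common ELT pattern.

    Only source rows newer than the target's current high-water mark are
    merged (upserted by `id`). Column names are illustrative assumptions.
    """
    high = max((r[watermark_col] for r in target.values()), default=0)
    for row in source_rows:
        if row[watermark_col] > high:
            target[row["id"]] = row  # upsert: insert or overwrite by key
    return target
```

The same idea underlies dbt incremental models and MERGE-based warehouse loads; computing the watermark once before the loop keeps reruns idempotent.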

Collaboration & Teamwork

Work closely with senior engineers to implement architectural patterns and best practices.

Collaborate with analysts and data scientists to translate requirements into technical solutions.

Participate in code reviews, design discussions, and continuous improvement initiatives.

Help maintain clear documentation of data flows, models, and processes.

Platform & Process

Support the adoption and roll-out of new data tools, standards, and workflows.

Contribute to DataOps processes such as CI/CD, testing, and automation.

Assist in monitoring pipeline health and resolving data-related issues.
Requirements:
What We're Looking For

2-5+ years of experience as a Data Engineer or similar role.

Hands-on experience with Snowflake (mandatory), including SQL, modeling, and basic optimization.

Experience with dbt (or similar): model development, tests, documentation, and version control workflows.

Strong SQL skills for data modeling and analysis.

Proficiency with Python for pipeline development and automation.

Experience working with orchestration tools (Airflow, Dagster, Prefect, or equivalent).

Understanding of ETL/ELT design patterns, data lifecycle, and data modeling best practices.

Familiarity with cloud environments (AWS, GCP, or Azure).

Knowledge of data quality, observability, or monitoring concepts.

Good communication skills and the ability to collaborate with cross-functional teams.


Nice to Have:

Exposure to streaming/event technologies (Kafka, Kinesis, Pub/Sub).

Experience with data governance or cataloging tools.

Basic understanding of ML workflows or MLOps concepts.

Experience with infrastructure-as-code tools (Terraform, CloudFormation).

Familiarity with testing frameworks or data validation tools.

Additional Skills:

Cloud Architectures, Cross Domain Knowledge, Design Thinking, Development Fundamentals, DevOps, Distributed Computing, Microservices Fluency, Full Stack Development, Security-First Mindset, User Experience (UX).
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
This role has been designed as Hybrid with an expectation that you will work on average 2 days per week from our office.

We are looking for a highly skilled Senior Data Engineer with strong architectural expertise to design and evolve our next-generation data platform. You will define the technical vision, build scalable and reliable data systems, and guide the long-term architecture that powers analytics, operational decision-making, and data-driven products across the organization.

This role is both strategic and hands-on. You will evaluate modern data technologies, define engineering best practices, and lead the implementation of robust, high-performance data solutions, including the design, build, and lifecycle management of data pipelines that support batch, streaming, and near-real-time workloads.

What You'll Do

Architecture & Strategy

Own the architecture of our data platform, ensuring scalability, performance, reliability, and security.
Define standards and best practices for data modeling, transformation, orchestration, governance, and lifecycle management.
Evaluate and integrate modern data technologies and frameworks that align with our long-term platform strategy.
Collaborate with engineering and product leadership to shape the technical roadmap.

Engineering & Delivery

Design, build, and manage scalable, resilient data pipelines for batch, streaming, and event-driven workloads.
Develop clean, high-quality data models and schemas to support analytics, BI, operational systems, and ML workflows.
Implement data quality, lineage, observability, and automated testing frameworks.
Build ingestion patterns for APIs, event streams, files, and third-party data sources.
Optimize compute, storage, and transformation layers for performance and cost efficiency.

Leadership & Collaboration

Serve as a senior technical leader and mentor within the data engineering team.
Lead architecture reviews, design discussions, and cross-team engineering initiatives.
Work closely with analysts, data scientists, software engineers, and product owners to define and deliver data solutions.
Communicate architectural decisions and trade-offs to technical and non-technical stakeholders.
Requirements:
What We're Looking For:
6-10+ years of experience in Data Engineering, with demonstrated architectural ownership.
Expert-level experience with Snowflake (mandatory), including performance optimization, data modeling, security, and ecosystem components.
Expert proficiency in SQL and strong Python skills for pipeline development and automation.
Experience with modern orchestration tools (Airflow, Dagster, Prefect, or equivalent).
Strong understanding of ELT/ETL patterns, distributed processing, and data lifecycle management.
Familiarity with streaming/event technologies (Kafka, Kinesis, Pub/Sub, etc.).
Experience implementing data quality, observability, and lineage solutions.
Solid understanding of cloud infrastructure (AWS, GCP, or Azure).
Strong background in DataOps practices: CI/CD, testing, version control, automation.
Proven leadership in driving architectural direction and mentoring engineering teams.

Nice to Have:
Experience with data governance or metadata management tools.
Hands-on experience with DBT, including modeling, testing, documentation, and advanced features.
Exposure to machine learning pipelines, feature stores, or MLOps.
Experience with Terraform, CloudFormation, or other IaC tools.
Background designing systems for high scale, security, or regulated environments.
This position is open to all candidates.
 
10/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data & Machine Learning Engineer to operate at the intersection of data platform engineering and machine learning enablement. This role is responsible for building scalable, efficient, and reliable data systems while enabling Data Science and Analytics teams to develop and deploy ML-driven features.

You will take ownership of the data and ML infrastructure layer, ensuring that pipelines, storage models, and compute usage are optimized, while also shaping how data workflows and ML solutions are designed across the organization.


Responsibilities
Data Platform & Infrastructure

Design, build, and maintain scalable data pipelines and storage systems supporting analytics and ML use cases
Ensure compute and cost efficiency across pipelines, storage models, and processing workflows
Own and improve data orchestration, transformation, and serving layers (e.g., Spark, DBT, streaming/batch systems)
Build and maintain shared infrastructure components, including:
IO managers and data access abstractions
Integrations with DBT, Spark, and other data frameworks
Internal tooling to improve developer productivity and reliability
ML Enablement & Collaboration

Partner closely with Data Science to design and productionize ML solutions for new features and research initiatives
Translate experimental models into robust, scalable production systems
Support feature engineering, training pipelines, and inference workflows
Help define best practices for ML lifecycle management (training, validation, deployment, monitoring)
Data Quality, Governance & Best Practices

Enforce best practices for building and maintaining data processes across Data Analyst and Data Science teams
Define standards for:
Data modeling and transformations
Pipeline reliability and observability
Testing, versioning, and documentation
Improve data quality, consistency, and discoverability across the organization
Performance & Reliability

Optimize systems for performance, scalability, and cost efficiency
Monitor and troubleshoot data pipelines and ML systems in production
Implement observability (logging, metrics, alerting) across data workflows
Requirements:
Strong programming skills in Python (or similar language)
Proven experience building and maintaining production-grade data pipelines
Hands-on experience with data processing frameworks (e.g., Spark or similar)
Familiarity with DBT or modern data transformation workflows
Experience working with cloud environments (AWS, GCP, or Azure)
Solid understanding of data modeling, distributed systems, and ETL/ELT patterns
This position is open to all candidates.
 
14/04/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an experienced Data Engineering Team Leader.
In this role, you will lead and strengthen our Data Team, drive innovation, and ensure the robustness of our data and analytics platforms.
A day in the life and how you'll make an impact:
Drive the technical strategy and roadmap for the data engineering function, ensuring alignment with overall business objectives.
Own the design, development, and evolution of scalable, high-performance data pipelines to enable diverse and growing business needs.
Establish and enforce a strong data governance framework, including comprehensive data quality standards, monitoring, and security protocols, taking full accountability for data integrity and reliability.
Lead the continuous enhancement and optimization of the data analytics platform and infrastructure, focusing on performance, scalability, and cost efficiency.
Champion the complete data lifecycle, from robust infrastructure and data ingestion to detailed analysis and automated reporting, to maximize the strategic value of data and drive business growth.
Requirements:
5+ years of Data Engineering experience (preferably in a startup), with a focus on designing and implementing scalable, analytics-ready data models and cloud data warehouses (e.g., BigQuery, Snowflake).
Minimum 3 years in a leadership role, with a proven history of guiding teams to success.
Expertise in modern data orchestration and transformation frameworks (e.g., Airflow, DBT).
Deep knowledge of databases (schema design, query optimization) and familiarity with NoSQL use cases.
Solid understanding of cloud data services (e.g., AWS, GCP) and streaming platforms (e.g., Kafka, Pub/Sub).
Fluent in Python and SQL, with a backend development focus (services, APIs, CI/CD).
Excellent communication skills, capable of simplifying complex technical concepts.
Experience with, or strong interest in, leveraging AI and automation for efficiency gains.
Passionate about technology, proactively identifying and implementing tools to enhance development velocity and maintain high standards.
Adaptable and resilient in dynamic, fast-paced environments, consistently delivering results with a strong can-do attitude.
B.Sc. in Computer Science / Engineering or equivalent.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking an experienced Solutions Data Engineer who possesses both technical depth and strong interpersonal skills, to partner with internal and external teams in developing scalable, flexible, and cutting-edge solutions. Solutions Engineers collaborate with operations and business development to craft solutions to customer business problems.
A Solutions Engineer balances various aspects of a project, from safety to design, researches advanced technology and field best practices, and seeks out cost-effective solutions.
Job Description:
We're looking for a Solutions Engineer with deep experience in Big Data technologies, real-time data pipelines, and scalable infrastructure: someone who's been delivering critical systems under pressure and knows what it takes to bring complex data architectures to life. This isn't just about checking boxes on tech stacks; it's about solving real-world data problems, collaborating with smart people, and building robust, future-proof solutions.
In this role, you'll partner closely with engineering, product, and customers to design and deliver high-impact systems that move, transform, and serve data at scale. You'll help customers architect pipelines that are not only performant and cost-efficient but also easy to operate and evolve.
We want someone who's comfortable switching hats between low-level debugging, high-level architecture, and communicating clearly with stakeholders of all technical levels.
Key Responsibilities:
Build distributed data pipelines using technologies like Kafka, Spark (batch & streaming), Python, Trino, Airflow, and S3-compatible data lakes, designed for scale, modularity, and seamless integration across real-time and batch workloads.
Design, deploy, and troubleshoot hybrid cloud/on-prem environments using Terraform, Docker, Kubernetes, and CI/CD automation tools.
Implement event-driven and serverless workflows with precise control over latency, throughput, and fault tolerance trade-offs.
Create technical guides, architecture docs, and demo pipelines to support onboarding, evangelize best practices, and accelerate adoption across engineering, product, and customer-facing teams.
Integrate data validation, observability tools, and governance directly into the pipeline lifecycle.
Own end-to-end platform lifecycle: ingestion → transformation → storage (Parquet/ORC on S3) → compute layer (Trino/Spark).
Benchmark and tune storage backends (S3/NFS/SMB) and compute layers for throughput, latency, and scalability using production datasets.
Work cross-functionally with R&D to push performance limits across interactive, streaming, and ML-ready analytics workloads.
Operate and debug object store-backed data lake infrastructure, enabling schema-on-read access, high-throughput ingestion, advanced searching strategies, and performance tuning for large-scale workloads.
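Benchmarking storage and compute layers, as the responsibilities above describe, often starts with a crude best-of-N throughput harness. This sketch is illustrative: `write_fn` stands in for any backend call (an S3 put, an NFS write) and is an assumption, not a named tool:

```python
import time

def benchmark_throughput(write_fn, payloads, repeats=3):
    """Crude throughput benchmark: best-of-N wall-clock MB/s for a write path.

    Takes the fastest of `repeats` runs to reduce noise from warm-up and
    background load; a production profiler would report distributions too.
    """
    total_bytes = sum(len(p) for p in payloads)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        for p in payloads:
            write_fn(p)
        best = min(best, time.perf_counter() - start)
    return total_bytes / best / 1e6  # MB/s, using the fastest run
```

Against real backends you would also vary payload size and concurrency, since object stores behave very differently at 1 KB versus 100 MB per object.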
Requirements:
2-4 years in software / solution or infrastructure engineering, with 2-4 years focused on building / maintaining large-scale data pipelines / storage & database solutions.
Proficiency in Trino, Spark (Structured Streaming & batch) and solid working knowledge of Apache Kafka.
Coding background in Python (must-have); familiarity with Bash and scripting tools is a plus.
Deep understanding of data storage architectures including SQL, NoSQL, and HDFS.
Solid grasp of DevOps practices, including containerization (Docker), orchestration (Kubernetes), and infrastructure provisioning (Terraform).
Experience with distributed systems, stream processing, and event-driven architecture.
Hands-on familiarity with benchmarking and performance profiling for storage systems, databases, and analytics engines.
Excellent communication skills: you'll be expected to explain your thinking clearly, guide customer conversations, and collaborate across engineering and product teams.
This position is open to all candidates.
 
29/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an experienced and hands-on Data Engineer to lead the migration of enterprise data platforms to Google Cloud Platform (GCP).
In this role, you will design, build and maintain scalable ETL/ELT pipelines, develop advanced data models in BigQuery and contribute to the creation of a high-performance, reliable and cost-efficient data architecture.
You will work closely with analysts, data scientists and engineers and have real impact on how data is consumed across the organization.
What You Will Do:
Lead the migration of data from on-premise core systems to Google Cloud Platform (GCP).
Design and develop processed data layers (Silver and Gold) and data marts in BigQuery, including complex business logic.
Build, orchestrate and maintain data pipelines using Cloud Composer / Apache Airflow.
Develop robust data transformations, including cleansing, enrichment and data quality improvements.
Write efficient and optimized SQL queries in BigQuery with strong focus on performance and cost.
Create and maintain clear and up-to-date technical documentation for data architecture and processes.
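Orchestrators like Cloud Composer / Apache Airflow, mentioned above, ultimately run tasks in dependency order. This stdlib sketch (using `graphlib`) shows that core idea; the task names are invented for illustration, and real DAGs add scheduling, retries, and backfills:

```python
from graphlib import TopologicalSorter

def run_pipeline(tasks, deps):
    """Run tasks in dependency order, the heart of an Airflow-style DAG.

    `tasks` maps name -> callable; `deps` maps name -> set of upstream
    task names that must complete first.
    """
    order = list(TopologicalSorter(deps).static_order())
    for name in order:
        tasks[name]()  # each task runs only after all its upstreams
    return order
```

`TopologicalSorter` also raises on cycles, which is exactly the validation an orchestrator performs when a DAG is registered.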
Requirements:
3+ years of hands-on experience as a Data Engineer.
Strong experience working with Google Cloud Platform (GCP) - mandatory.
Proven experience with BigQuery, including data modeling, complex SQL and performance optimization - mandatory.
Strong Python skills for ETL/ELT and data transformations.
Experience with orchestration and workflow management tools such as Cloud Composer, Apache Airflow or similar.
Experience working with Cloud Storage (GCS) and additional GCP data services such as Cloud SQL, Data Lakes and storage solutions.
Nice to Have:
Experience with GCP streaming technologies such as Cloud Pub/Sub and Dataflow.
Familiarity with Git and CI/CD processes.
Previous experience migrating data from legacy systems such as Mainframe or Oracle to the cloud.
Personal Skills:
Ability to work independently and lead projects end-to-end.
Proactive mindset with strong technical curiosity and continuous learning attitude.
Strong collaboration skills and ability to work with cross-functional teams.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Data Warehouse Tech Lead to drive the technical vision and execution of the data infrastructure that powers decision-making across the organization.
You'll lead both the technology and the business coordination for our data warehouse - architecting scalable solutions while working closely with stakeholders and data providers to ensure our platform serves the entire organization's needs. This role combines deep technical leadership with strategic business partnership as we build our next-generation data stack.
We believe three things matter for every role: drive to push through challenges, efficiency that keeps standards high while moving fast, and adaptability that lets you pivot with data and AI insights. These aren't buzzwords, they're how we actually work.
Our AI-first approach isn't just a tagline either. We're building the future of insurance with AI at the center, and we need people who are genuinely excited to learn and grow alongside these tools.
In this role you'll:
Lead technical architecture - design and develop scalable data warehouse solutions that support multiple products and serve the entire organization's analytics needs
Manage the technical roadmap - set strategy and guide execution for the Data Warehouse team, ensuring our platform evolves with business requirements
Drive business process coordination - translate business needs into technical requirements while establishing clear data contracts with R&D, Analytics, and external data providers
Establish and implement best practices - set technical standards for data warehouse architecture, performance tuning, and development methodologies that guide the entire team's approach to building scalable data solutions
Create and maintain sustainable data pipelines - build resilient systems capable of handling unstructured data and managing an evolving schema registry across diverse data sources
Implement advanced data modeling - create robust data structures using methodologies like dimensional modeling, and optimize ETL/ELT processes for our semantic layer
Establish data quality standards - build processes for schema evaluation, anomaly detection, and monitoring data completeness and freshness across all sources
Lead cross-team collaboration - work directly with Data Engineers, ML Platform Engineers, Data Scientists, Analysts, and Product Managers to align technical solutions with business goals
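Freshness monitoring, as described in the quality-standards bullet above, reduces to comparing each source's last arrival time against an SLA. A minimal sketch, with timestamps in epoch minutes as a simplifying assumption:

```python
def freshness_alerts(sources, now, sla_minutes=60):
    """Flag sources whose latest data arrival breaches a freshness SLA.

    `sources` maps source name -> last-arrival timestamp (epoch minutes
    here for simplicity); names and thresholds are illustrative.
    """
    return sorted(name for name, last_seen in sources.items()
                  if now - last_seen > sla_minutes)
```

In practice each source gets its own SLA and the alert list feeds a pager or dashboard rather than being returned to the caller.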
Requirements:
7+ years as a BI Engineer or Data Engineer, with 2+ in a technical leadership or architect role
Proven experience managing complex data warehouses that serve multiple products and entire organizations
Strong expertise in data modeling, ELT development, and data warehouse methodologies
Advanced SQL skills and hands-on experience with Snowflake or similar cloud-native data warehouse platforms
Extensive experience with dbt for data transformation and modeling
Python and software development experience (a strong plus)
Excellent communication skills - you can mentor technical team members and explain complex data concepts to business stakeholders
Ready to work in an office environment most days of the week
Enthusiasm about learning and adapting to the exciting world of AI - a commitment to exploring this field is a fundamental part of our culture.
This position is open to all candidates.
 
29/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We are looking for an experienced Full-Stack Engineer with strong backend expertise to join our growing Sales R&D team in Tel Aviv. This is a backend-leaning role focused on building and maintaining high-performance systems while contributing across full-stack services. As part of our team, you'll drive the evolution of our Sales platform, take end-to-end ownership of impactful features, and contribute to an AI-first engineering culture that embraces automation, intelligent tooling, and continuous improvement. You will play a key role in improving product quality, engineering productivity, and system reliability as we support rapid business growth.
Responsibilities:
Design and deliver end-to-end features across backend services and frontend applications, with strong emphasis on backend architecture, data integrity, and system reliability.
Develop and maintain SPA web applications for admin and product platforms using React + TypeScript and Next.js, including contributing to the modernization of legacy Ruby components.
Develop and maintain backend services across Python (FastAPI) and Node.js (NestJS) microservices, ensuring strong performance, robust data management, and ongoing modernization efforts.
Design, implement, and maintain reliable REST APIs with clear contracts, validation, and consistent error handling.
Work with complex relational data models, continuously improving query performance and data reliability.
Lead investigations and root-cause analysis for performance, optimization, and production-related issues, driving long-term improvements.
Collaborate closely with frontend, backend, QA, and product teams in a multi-repo environment, including active code reviews and cross-site collaboration with our Boston engineering team.
Take ownership of code quality and production stability by writing automated tests (e.g., pytest, Jest) and driving issues from investigation through resolution and prevention.
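As a hedged illustration of the "clear contracts, validation, and consistent error handling" responsibility above: a minimal stdlib-only Python sketch (the `Lead` model, field names, and status codes are hypothetical, not taken from the posting; a real implementation would use the FastAPI/NestJS stack mentioned). Every handler returns the same response envelope whether it succeeds or fails.

```python
# Hypothetical sketch: request validation and a consistent error
# envelope for a REST-style create handler. Stdlib only.
from dataclasses import dataclass


@dataclass
class Lead:
    email: str
    score: int


def create_lead(store: dict, lead_id: int, payload: dict) -> dict:
    # Validate the request body against the contract before touching state.
    errors = []
    if "@" not in str(payload.get("email", "")):
        errors.append("email: must contain '@'")
    score = payload.get("score")
    if not isinstance(score, int) or not 0 <= score <= 100:
        errors.append("score: must be an integer in [0, 100]")
    if errors:
        return {"status": 422, "error": "validation_failed", "details": errors}
    if lead_id in store:
        # Explicit conflict response instead of an unhandled exception.
        return {"status": 409, "error": "conflict", "details": ["lead already exists"]}
    store[lead_id] = Lead(payload["email"], score)
    return {"status": 201, "error": None, "details": [], "id": lead_id}
```

The point of the uniform envelope is that clients and tests can assert on `status` and `details` without special-casing each failure mode.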
Requirements:
B.Sc. in Computer Science or Software Engineering from a top-tier academic institution (Technion, TAU, BGU, HUJI, or the Open University)
3+ years of professional software engineering experience building and maintaining production web application systems.
Strong backend experience with Python or TypeScript, working within service-oriented architectures.
Experience developing frontend web applications using TypeScript and modern frameworks such as React.
Solid production experience with relational databases, including schema design, query optimization, and performance tuning.
Experience working with AWS and cloud-based infrastructure, and improving developer workflows in multi-repo environments.
Experience designing and maintaining REST APIs used by web applications.
Strong debugging skills and experience troubleshooting performance and reliability issues in production systems.
Proven experience leading the full dev lifecycle: from driving technical design and scaling product requirements, to hands-on execution with high standards for code reviews, documentation, and incremental delivery.
Strong communication skills and an ownership mindset.
Advantages:
Hands-on experience with FastAPI and/or NestJS.
Strong experience with PostgreSQL, including advanced schema design, indexing strategies, and performance optimization.
Experience writing automated tests (unit + integration) and end-to-end testing using tools such as Playwright.
Experience working with observability and monitoring tools such as Coralogix and Sentry.
Experience with performance testing tools (e.g., JMeter, k6) and monitoring stacks (e.g., Prometheus, InfluxDB, Grafana).
Experience working in a monorepo environment with shared code and coordinated cross-project changes.
Experience with Redis, RabbitMQ, or Kafka.
This position is open to all candidates.
 
Job ID: 8595807
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Data Engineer.
As a Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects: ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.
You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.
Key Job Responsibilities and Duties:
Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
Dealing with massive textual sources to train GenAI foundation models.
Solving issues with data and data pipelines, prioritizing based on customer impact.
End-to-end ownership of data quality in our core datasets and data pipelines.
Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
Providing tools that improve Data Quality company-wide, specifically for ML scientists.
Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
Acting as an intermediary on data problems for both technical and non-technical audiences.
Promoting and driving impactful and innovative engineering solutions.
Advancing technical, behavioral, and interpersonal competence via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation.
Collaborating with multidisciplinary teams: working with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions, and providing technical guidance and mentorship to junior team members.
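The responsibilities above include end-to-end ownership of data quality in core datasets. As a minimal, hypothetical sketch (function and field names are illustrative, not from the posting), a lightweight quality gate might check freshness and null rates on a batch of rows before publishing it downstream:

```python
# Hypothetical sketch of a data-quality gate: flag stale batches and
# null values in required fields before the data is published.
from datetime import datetime, timedelta, timezone


def quality_report(rows, required_fields, max_age_hours=24):
    """Return a list of human-readable issues; empty list means the batch passes."""
    if not rows:
        return ["dataset is empty"]
    issues = []
    now = datetime.now(timezone.utc)
    newest = max(r["updated_at"] for r in rows)
    if now - newest > timedelta(hours=max_age_hours):
        issues.append(f"stale: newest row is older than {max_age_hours}h")
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) is None)
        if nulls:
            issues.append(f"{field}: {nulls}/{len(rows)} null values")
    return issues
```

In production, checks like these would typically run inside the pipeline orchestrator and alert before stakeholders notice the problem.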
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 3 years of experience as a Data Engineer or a similar role, with a consistent record of successfully delivering ML/Data solutions
You have built production data pipelines in the cloud, setting up data-lake and serverless solutions; you have hands-on experience with schema design and data modeling, and with working alongside ML scientists and ML engineers to deliver production-level ML solutions.
You have experience designing systems end to end and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.).
Strong programming skills in languages such as Python and Java.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.
Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems.
Experience with Data Warehousing and ETL/ELT pipelines
Experience in data processing for large-scale language models like GPT, BERT, or similar architectures - an advantage.
Proficiency in data manipulation, analysis, and visualization using tools like NumPy, pandas, and matplotlib - an advantage.
Experience with experimental design, A/B testing, and evaluation metrics for ML models - an advantage.
Experience of working on products that impact a large customer base - an advantage.
Excellent communication in English; written and spoken.
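One of the advantage items above mentions experimental design, A/B testing, and evaluation metrics for ML models. As a hedged, stdlib-only sketch of what that entails (the function name and inputs are illustrative), a two-proportion z-score for comparing conversion rates between variants:

```python
# Hypothetical sketch: two-proportion z-score for an A/B conversion test.
# conv_* are conversion counts, n_* are sample sizes per variant.
import math


def ab_z_score(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis of equal rates.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se
```

A |z| above roughly 1.96 corresponds to p < 0.05 in a two-sided test; in practice one would also account for peeking, multiple comparisons, and minimum detectable effect.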
This position is open to all candidates.
 
Show more...
הגשת מועמדותהגש מועמדות
עדכון קורות החיים לפני שליחה
עדכון קורות החיים לפני שליחה
8627494
סגור
שירות זה פתוח ללקוחות VIP בלבד