Jobs » Data » Data Engineer

13/04/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Data Engineer
Tel Aviv, Israel
About us:
We are international Multi-Cloud experts, harnessing the power of the cloud for smart digital transformation. With 5 sites across 4 continents, 450+ experts, 1,000+ customers, and 30+ years of proven experience, our mission is to deliver the best Multi-Cloud service to our customers, accelerate their business, and help them grow. As tech-savvies, our teams are constantly developing new strategies and tools that help our customers stay on top of their game and improve cloud performance, spending, visibility, control, and automation. Our cloud experts make any digital transformation a quick, smart, and easy process.
What You'll Do:
Design, build, and maintain data pipelines and infrastructure
Develop and implement data quality checks and monitoring processes
Work with engineers to integrate data into our systems and applications
Collaborate with scientists and analysts to understand their data needs.
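The quality-check responsibility above can be sketched as a minimal batch validator. This is only an illustration: the field names (`id`, `event_time`) are hypothetical, and a real pipeline would emit these metrics to a monitoring system rather than print them.

```python
# Minimal sketch of row-level data quality checks (hypothetical schema).
def check_quality(rows, required=("id", "event_time")):
    """Return simple quality metrics for one batch of records."""
    seen_ids = set()
    nulls = 0
    dupes = 0
    for row in rows:
        if any(row.get(col) is None for col in required):
            nulls += 1                      # a required field is missing
        rid = row.get("id")
        if rid in seen_ids:
            dupes += 1                      # primary key seen before
        seen_ids.add(rid)
    return {"rows": len(rows), "null_violations": nulls, "duplicate_ids": dupes}

batch = [
    {"id": 1, "event_time": "2024-01-01T00:00:00"},
    {"id": 1, "event_time": "2024-01-01T00:05:00"},  # duplicate id
    {"id": 2, "event_time": None},                   # null violation
]
report = check_quality(batch)
print(report)  # {'rows': 3, 'null_violations': 1, 'duplicate_ids': 1}
```

In practice such checks run after each pipeline stage so bad batches are caught before they reach downstream consumers.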
Requirements:
3 years of experience as a Data Engineer or a related role
Experience with big data technologies such as Hadoop, Spark, or Elasticsearch.
Proven experience in designing, building, and maintaining data pipelines and infrastructure
Service in Unit 8200 or another technology unit - an advantage.
This position is open to all candidates.
 
21/03/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Staff Software Data Engineer to join our Engineering team and lead the evolution of our next-generation data platform. In this high-impact role, you will operate as a player-coach: you will be the technical visionary responsible for designing the ecosystem, while remaining deeply hands-on to implement scalable, secure, and intelligent solutions that power everything from operational reporting to advanced GenAI applications.

You will bridge the gap between complex business requirements and technical execution, advocating for a data-first culture. This role offers a clear growth path: while it currently starts as an individual contributor position, it has the potential to evolve into a leadership role.

Why join us?

We are the AI-powered platform for finance automation, elevating how finance teams operate in the global economy. We empower our customers to scale faster and smarter by removing the complexities of doing global business and accelerating their finance operations' efficiency. Our platform provides a comprehensive suite of finance automation solutions designed for mid-market businesses across accounts payable, global payouts, procurement, employee expenses, corporate cards, supplier management, tax compliance, and treasury. We partner with leading financial institutions such as Citi, Wells Fargo, J.P. Morgan, and Visa, enabling over 5,000 global companies to efficiently and securely pay millions of suppliers and payees across 200+ countries and territories, in 120 currencies.

At our company, we pride ourselves on our collaborative culture, the quality of our product, and the capabilities of our people. Our people are passionate about the work they do and keen to get the job done. We offer competitive benefits, a flexible workplace, career coaching, and an environment where diverse individuals can thrive and make an impact. Our culture ensures everyone checks their egos at the door and stands ready to reach for success together.

Founded in Israel in 2010, our company is a global business headquartered in the San Francisco Bay Area (Foster City) with offices in Tel Aviv, Plano, Toronto, Vancouver, London, Amsterdam, Tbilisi and Medellin.
About the Role

Architecture & Hands-on Execution: Design and actively build a comprehensive data platform. You will not just oversee infrastructure; you will write the core code and build tools that support diverse workloads, from operational reporting to complex analytical queries.
Strategic & Technical Delivery: Partner with product managers to translate business objectives into technical strategies, then lead the engineering effort to deliver them.
Technology Evaluation: Continuously evaluate, prototype, and select best-in-class technologies to future-proof our data stack.
Technical Leadership & Mentorship: Act as a primary advocate for platform adoption. You will foster a community of practice around data engineering, mentoring senior and mid-level engineers to elevate the team's technical bar.
Governance & Quality: Implement and automate robust frameworks for Data Discovery, Quality, and Governance, ensuring solutions are trustworthy and compliant with financial regulations.
Requirements:
We are looking for a highly motivated Staff Engineer with a strong sense of ownership, eager to tackle technical challenges in a high-throughput data processing environment.

Experience: 8+ years of hands-on experience in Data Engineering and Architecture, with a track record of building and shipping platforms at scale.
Experience with modern big data platforms such as Snowflake, Databricks, or similar technologies.
Hands-on experience with data infrastructure (orchestration, scalability, reliability, and cloud architecture).
Data Movement & Integration: Deep understanding of data movement strategies, including high-frequency batching, CDC, and real-time event streaming.
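The CDC pattern named above can be illustrated with a toy apply step. The event shape here is an assumption for the sketch; real CDC streams (e.g., Debezium's) carry similar insert/update/delete operation codes inside their own envelope.

```python
# Toy change-data-capture (CDC) apply step: replay an ordered event stream
# into a target "table" (a dict keyed by primary key).
def apply_cdc(table, events):
    for ev in events:
        op, key = ev["op"], ev["key"]
        if op in ("insert", "update"):
            table[key] = ev["row"]          # upsert keeps replay idempotent
        elif op == "delete":
            table.pop(key, None)            # tolerate already-deleted keys
    return table

state = {}
apply_cdc(state, [
    {"op": "insert", "key": 1, "row": {"name": "a"}},
    {"op": "update", "key": 1, "row": {"name": "b"}},
    {"op": "insert", "key": 2, "row": {"name": "c"}},
    {"op": "delete", "key": 2},
])
print(state)  # {1: {'name': 'b'}}
```

Treating inserts and updates identically (upsert) is a common design choice: it makes replaying the same event stream twice safe, which matters when streams deliver at-least-once.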
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
This role has been designed as Hybrid with an expectation that you will work on average 2 days per week from an office.

We are looking for a talented Data Engineer to help build and enhance the data platform that supports analytics, operations, and data-driven decision-making across the organization. You will work hands-on to develop scalable data pipelines, improve data models, ensure data quality, and contribute to the continuous evolution of our modern data ecosystem.

You'll collaborate closely with Senior Engineers, Analysts, Data Scientists, and stakeholders across the business to deliver reliable, well-structured, and well-governed data solutions.


What You'll Do:

Engineering & Delivery

Build, maintain, and optimize data pipelines for batch and streaming workloads.

Develop reliable data models and transformations to support analytics, reporting, and operational use cases.

Integrate new data sources, APIs, and event streams into the platform.

Implement data quality checks, testing, documentation, and monitoring.

Write clean, performant SQL and Python code.

Contribute to improving performance, scalability, and cost-efficiency across the data platform.

Collaboration & Teamwork

Work closely with senior engineers to implement architectural patterns and best practices.

Collaborate with analysts and data scientists to translate requirements into technical solutions.

Participate in code reviews, design discussions, and continuous improvement initiatives.

Help maintain clear documentation of data flows, models, and processes.

Platform & Process

Support the adoption and roll-out of new data tools, standards, and workflows.

Contribute to DataOps processes such as CI/CD, testing, and automation.

Assist in monitoring pipeline health and resolving data-related issues.
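As a rough illustration of the ELT pattern these responsibilities describe, here is a minimal sketch using the stdlib `sqlite3` module as a stand-in for a warehouse like Snowflake. Table and column names are invented; in a dbt workflow, the second `CREATE TABLE AS SELECT` would be a model.

```python
# Minimal ELT sketch: load raw records first, then transform with SQL
# inside the "warehouse" (the "T" happens after the "L").
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL, status TEXT)")
con.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 10.0, "paid"), (2, 5.0, "cancelled"), (3, 7.5, "paid")],
)
# Transform step: a cleaned model derived from raw data, entirely in SQL.
con.execute("""
    CREATE TABLE stg_paid_orders AS
    SELECT id, amount FROM raw_orders WHERE status = 'paid'
""")
total = con.execute("SELECT SUM(amount) FROM stg_paid_orders").fetchone()[0]
print(total)  # 17.5
```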
Requirements:
What We're Looking For

2-5+ years of experience as a Data Engineer or similar role.

Hands-on experience with Snowflake (mandatory), including SQL, modeling, and basic optimization.

Experience with dbt (or similar): model development, tests, documentation, and version control workflows.

Strong SQL skills for data modeling and analysis.

Proficiency with Python for pipeline development and automation.

Experience working with orchestration tools (Airflow, Dagster, Prefect, or equivalent).

Understanding of ETL/ELT design patterns, data lifecycle, and data modeling best practices.

Familiarity with cloud environments (AWS, GCP, or Azure).

Knowledge of data quality, observability, or monitoring concepts.

Good communication skills and the ability to collaborate with cross-functional teams.


Nice to Have:

Exposure to streaming/event technologies (Kafka, Kinesis, Pub/Sub).

Experience with data governance or cataloging tools.

Basic understanding of ML workflows or MLOps concepts.

Experience with infrastructure-as-code tools (Terraform, CloudFormation).

Familiarity with testing frameworks or data validation tools.

Additional Skills:

Cloud Architectures, Cross Domain Knowledge, Design Thinking, Development Fundamentals, DevOps, Distributed Computing, Microservices Fluency, Full Stack Development, Security-First Mindset, User Experience (UX).
This position is open to all candidates.
 
30/03/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We are looking for a talented Data Engineer to join our analytics team in the Big Data Platform group.
Job Id: 25380
You will support our product and business data initiatives, expand our data warehouse, and optimize our data pipeline architecture with an AI-first attitude.
The ideal candidate is experienced in leveraging AI tools as part of modern data pipeline development, enabling scalable solutions, accelerating delivery, and continuously exploring new approaches and technologies.
The right candidate is excited by the prospect of building the data architecture for the next generation of products and data initiatives.
This is a unique opportunity to join a team full of outstanding people making a big impact.
We work on multiple products in many domains to deliver truly innovative solutions in the Cyber Security and Big Data realm.
This role requires the ability to collaborate closely with both R&D teams and business stakeholders, to understand their needs and translate them into robust and scalable data solutions.
Key Responsibilities
Maintain and develop enterprise-grade Data Warehouse and Data Lake environments
Create data infrastructure for various R&D groups across the organization to support product development and optimization
Work with data experts to assist with technical data-related issues and support infrastructure needs
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for scalability
Build and maintain robust ETL/ELT pipelines for data ingestion, transformation, and delivery across various systems
Incorporate AI-assisted tools into data pipeline design, development, and optimization to improve efficiency, scalability, and innovation
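One common form of the "optimizing data delivery" responsibility above is incremental loading: only rows newer than a high-water mark are moved on each run. The field names below are illustrative, not from any specific system.

```python
# Sketch of a watermark-based incremental load: re-runs are cheap and
# idempotent because already-delivered rows fall below the watermark.
def incremental_load(source_rows, target, watermark):
    new_wm = watermark
    for row in sorted(source_rows, key=lambda r: r["updated_at"]):
        if row["updated_at"] > watermark:
            target[row["id"]] = row                 # upsert by primary key
            new_wm = max(new_wm, row["updated_at"])
    return new_wm                                   # persist for next run

target = {}
rows = [
    {"id": 1, "updated_at": 10},
    {"id": 2, "updated_at": 20},
]
wm = incremental_load(rows, target, watermark=0)
wm = incremental_load(rows, target, watermark=wm)   # re-run: nothing new
print(wm, len(target))  # 20 2
```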
Requirements:
B.Sc. in Engineering or a related field
3+ years of experience as a Data Engineer working on production systems
Advanced SQL knowledge and experience with relational databases
Proven experience using Python
Hands-on experience building, optimizing, and automating data pipelines, architectures, and data sets
Experience in creating and maintaining ETL/ELT processes
Strong project management and organizational skills
Strong collaboration skills with both technical (R&D) and non-technical (business) teams
Experience using AI tools as part of the data engineering workflow, with a mindset of experimentation, working at scale, and exploring new technologies
Advantage: Azure data services, Databricks, EventHub, and Spark.
This position is open to all candidates.
 
23/03/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a Senior Backend Engineer - Data Platform to join our expanding team and play a crucial role in designing, building, and maintaining robust and scalable data pipelines and infrastructure. In this role, you will directly enable data-driven decision-making and support the development and deployment of AI/ML products that power Health.

You'll collaborate closely with engineering, product, and data science teams to ensure our data systems are high-quality, resilient, and scalable as we grow. As a Senior Backend Engineer on our Data Platform team, you will drive efforts to deliver reliable, efficient, and consistent data services across the organization. You will also help enable the rapid development and deployment of advanced features, insights, and AI-driven capabilities that improve outcomes for clinicians and clients.

Who are you?
You are a seasoned backend or data engineer with experience working on production-grade ML/AI-powered products. You thrive in fast-paced, high-ownership environments and are passionate about building scalable and reliable systems. You understand the unique requirements of delivering AI/ML features in production, and you are comfortable working with modern technologies in the LLM/RAG ecosystem.
You pride yourself on delivering high-quality solutions quickly, without sacrificing design or reliability. You're known for your responsiveness, collaborative spirit, and service-oriented mindset, especially when you're on-call and the stakes are high.
How will you contribute?
Design, implement, and maintain scalable and reliable data pipelines and backend systems supporting both operational and analytical needs, with a focus on ML/AI product enablement.
Ensure data processing is optimized for speed, efficiency, and fault tolerance, enabling seamless integration with AI/ML workflows and reliable performance across all our Health products.
Monitor and improve uptime, reliability, and observability of our data infrastructure and pipelines.
Build and maintain systems to ensure data quality, consistency, and usability across the organization, enabling advanced analytics and AI solutions.
Work closely with product and engineering teams to deliver new features rapidly and with a high standard of technical excellence.
Drive innovation in how we build, measure, and optimize data features, backend services, and AI product integrations.
Participate in on-call rotations with a service-oriented approach and fast responsiveness.
Lead scalability efforts to support increasing data volumes, expanding AI/ML initiatives, and new product launches.
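As a toy illustration of the retrieval step in the RAG-style workflows this role supports: rank stored chunks by cosine similarity to a query vector. The vectors here are hand-made stand-ins; a real system would use an embedding model and a vector database.

```python
# Toy RAG retrieval step: nearest chunk by cosine similarity.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical chunk "embeddings" (3-dim for readability).
chunks = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.2],
}
query = [0.8, 0.2, 0.0]
best = max(chunks, key=lambda name: cosine(query, chunks[name]))
print(best)  # refund policy
```

The retrieved chunk would then be injected into the LLM prompt as grounding context; that assembly step is omitted here.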
Requirements:
What qualifications and skills will help you to be successful?
At least 5 years of experience with Python in backend or data engineering roles, designing and operating large-scale data pipelines, backend services, and data infrastructure in production environments.
Hands-on experience working on ML/AI-powered products in production, with strong understanding of requirements for integrating data platforms with AI features.
Familiarity with modern LLM (Large Language Model) and RAG (Retrieval-Augmented Generation) technologies, and experience supporting their deployment or integration.
Familiarity or hands-on experience with these technologies (or alternatives):
Data Processing & Streaming: Apache Spark, DBT, Airflow, Airbyte, Kafka
API Development: FastAPI, micro-service architecture, SFTP
Data Storage: Data Lakehouse architectures, Apache Iceberg, Vector Databases, RDS
ML/AI: ML/LLM libraries and frameworks (such as Gemini, Hugging Face, etc.)
Cloud Infrastructure: AWS stack (S3, Firehose, Lambda, Athena, etc.), Kubernetes (K8s)
Demonstrated ability to optimize performance and ensure high availability, scalability, and reliability of backend/data systems.
Strong foundation in best practices for data quality, governance, security, and observability.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking an experienced Solutions Data Engineer who possesses both technical depth and strong interpersonal skills, partnering with internal and external teams to develop scalable, flexible, and cutting-edge solutions. Solutions Engineers collaborate with operations and business development to craft solutions to customer business problems.
A Solutions Engineer balances various aspects of the project, from safety to design, researches advanced technology and field best practices, and seeks cost-effective solutions.
Job Description:
We're looking for a Solutions Engineer with deep experience in Big Data technologies, real-time data pipelines, and scalable infrastructure; someone who's been delivering critical systems under pressure and knows what it takes to bring complex data architectures to life. This isn't just about checking boxes on tech stacks; it's about solving real-world data problems, collaborating with smart people, and building robust, future-proof solutions.
In this role, you'll partner closely with engineering, product, and customers to design and deliver high-impact systems that move, transform, and serve data at scale. You'll help customers architect pipelines that are not only performant and cost-efficient but also easy to operate and evolve.
We want someone who's comfortable switching hats between low-level debugging, high-level architecture, and communicating clearly with stakeholders of all technical levels.
Key Responsibilities:
Build distributed data pipelines using technologies like Kafka, Spark (batch & streaming), Python, Trino, Airflow, and S3-compatible data lakes, designed for scale, modularity, and seamless integration across real-time and batch workloads.
Design, deploy, and troubleshoot hybrid cloud/on-prem environments using Terraform, Docker, Kubernetes, and CI/CD automation tools.
Implement event-driven and serverless workflows with precise control over latency, throughput, and fault tolerance trade-offs.
Create technical guides, architecture docs, and demo pipelines to support onboarding, evangelize best practices, and accelerate adoption across engineering, product, and customer-facing teams.
Integrate data validation, observability tools, and governance directly into the pipeline lifecycle.
Own end-to-end platform lifecycle: ingestion → transformation → storage (Parquet/ORC on S3) → compute layer (Trino/Spark).
Benchmark and tune storage backends (S3/NFS/SMB) and compute layers for throughput, latency, and scalability using production datasets.
Work cross-functionally with R&D to push performance limits across interactive, streaming, and ML-ready analytics workloads.
Operate and debug object store-backed data lake infrastructure, enabling schema-on-read access, high-throughput ingestion, advanced searching strategies, and performance tuning for large-scale workloads.
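The latency/throughput trade-off named above can be sketched with a micro-batching consumer: smaller batches flush sooner (lower latency), larger batches amortize per-write overhead (higher throughput). The event source and batch size are illustrative.

```python
# Micro-batching sketch: group incoming events into fixed-size batches,
# issuing one "write" per batch instead of one per event.
from collections import deque

def consume(events, batch_size=3):
    buffer, flushed = deque(), []
    for ev in events:
        buffer.append(ev)
        if len(buffer) >= batch_size:
            flushed.append(list(buffer))  # one downstream write per batch
            buffer.clear()
    if buffer:                            # flush the partial tail batch
        flushed.append(list(buffer))
    return flushed

batches = consume(range(7), batch_size=3)
print(batches)  # [[0, 1, 2], [3, 4, 5], [6]]
```

Production consumers usually add a time-based flush as well, so a slow stream never holds a partial batch indefinitely; that timer is omitted here for brevity.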
Requirements:
2-4 years in software, solutions, or infrastructure engineering, with 2-4 years focused on building and maintaining large-scale data pipelines and storage/database solutions.
Proficiency in Trino, Spark (Structured Streaming & batch) and solid working knowledge of Apache Kafka.
Coding background in Python (must-have); familiarity with Bash and scripting tools is a plus.
Deep understanding of data storage architectures including SQL, NoSQL, and HDFS.
Solid grasp of DevOps practices, including containerization (Docker), orchestration (Kubernetes), and infrastructure provisioning (Terraform).
Experience with distributed systems, stream processing, and event-driven architecture.
Hands-on familiarity with benchmarking and performance profiling for storage systems, databases, and analytics engines.
Excellent communication skills: you'll be expected to explain your thinking clearly, guide customer conversations, and collaborate across engineering and product teams.
This position is open to all candidates.
 
30/03/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Required ML Data Engineer
Israel: Tel Aviv/ Hybrid (Israel)
R&D | Full Time | Job Id: 24792
Key Responsibilities
Your Impact & Responsibilities:
As a Data Engineer - AI Technologies, you will be responsible for building and operating the data foundation that enables our LLM and ML research: from ingestion and augmentation, through labeling and quality control, to efficient data delivery for training and evaluation.
You will:
Own data pipelines for LLM training and evaluation
Design, build and maintain scalable pipelines to ingest, transform and serve large-scale text, log, code and semi-structured data from multiple products and internal systems.
Drive data augmentation and synthetic data generation
Implement and operate pipelines for data augmentation (e.g., prompt-based generation, paraphrasing, negative sampling, multi-positive pairs) in close collaboration with ML Research Engineers.
Build tagging, labeling and annotation workflows
Support human-in-the-loop labeling, active learning loops and semi-automated tagging. Work with domain experts to implement tools, schemas and processes for consistent, high-quality annotations.
Ensure data quality, observability and governance
Define and monitor data quality checks (coverage, drift, anomalies, duplicates, PII), manage dataset versions, and maintain clear documentation and lineage for training and evaluation datasets.
Optimize training data flows for efficiency and cost
Design storage layouts and access patterns that reduce training time and cost (e.g., sharding, caching, streaming). Work with ML engineers to make sure the right data arrives at the right place, in the right format.
Build and maintain data infrastructure for LLM workloads
Work with cloud and platform teams to develop robust, production-grade infrastructure: data lakes / warehouses, feature stores, vector stores, and high-throughput data services used by training jobs and offline evaluation.
Collaborate closely with ML Research Engineers and security experts
Translate modeling and security requirements into concrete data tasks: dataset design, splits, sampling strategies, and evaluation data construction for specific security use.
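The dataset-split responsibility above can be sketched with a deterministic hash-based splitter, a common way to keep each example in the same split across dataset versions. The ratios and ID format below are illustrative assumptions.

```python
# Deterministic train/val/test assignment: hashing a stable record ID means
# the same example always lands in the same split, across reruns and
# dataset versions (important for honest LLM evaluation).
import hashlib

def assign_split(record_id, val_pct=10, test_pct=10):
    bucket = int(hashlib.sha256(record_id.encode()).hexdigest(), 16) % 100
    if bucket < test_pct:
        return "test"
    if bucket < test_pct + val_pct:
        return "val"
    return "train"

splits = {rid: assign_split(rid) for rid in ("doc-1", "doc-2", "doc-3")}
print(splits)  # identical mapping on every run
```

Hashing the ID (rather than random assignment) also prevents leakage when new records are appended: existing examples never migrate between train and test.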
Requirements:
3+ years of hands-on experience as a Data Engineer or ML/Data Engineer, ideally in a product or platform team.
Strong programming skills in Python and experience with at least one additional language commonly used for data / backend (e.g., SQL, Scala, or Java).
Solid experience building ETL / ELT pipelines and batch/stream processing using tools such as Spark, Beam, Flink, Kafka, Airflow, Argo, or similar.
Experience working with cloud data platforms (e.g., AWS, GCP, Azure) and modern data storage technologies (object stores, data warehouses, data lakes).
Good understanding of data modeling, schema design, partitioning strategies and performance optimization for large datasets.
Familiarity with ML / LLM workflows: train/validation/test splits, dataset versioning, and the basics of model training and evaluation (you don't need to be the primary model researcher, but you understand what the models need from the data).
Strong software engineering practices: version control, code review, testing, CI/CD, and documentation.

Ability to work independently and in collaboration with ML engineers, researchers and security experts, and to translate high-level requirements into concrete data engineering tasks. 
Nice to Have 
Experience supporting LLM or NLP workloads, including dataset construction for pre-training / fine-tuning, or retrieval-augmented generation (RAG) pipelines. 
Familiarity with ML tooling such as experiment tracking (e.g., Weights & Biases, MLflow) and ML-focused data tooling (feature stores, vector databases). 
Background in security / cyber domains (logs, alerts, incidents, SOC workflows) or other high-volume, high-variance data environments. 
This position is open to all candidates.
 
24/03/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Data Engineer to join our Data Labs (DL) department, which specializes in professional services for our super-premium customers. This role reports to our Data Labs Data Science Team Manager in R&D.
Why is this role so important at our company?
We are a data-focused company, and our unique AI and machine learning capabilities are at the center of our business.
As part of this role, you will create and support a complex data-model pipeline that helps analyze the petabytes of data we receive from various sources, and research and develop new features and capabilities for our product solutions.
As a data engineer on the Datalabs team, you will work at the very core of the company. Part of your role will be to create processes that help turn raw data into usable metrics and to leverage AI models and statistical algorithms to support out-of-the-box requests from customers who want custom data labs. The Datalabs department's business-oriented nature also means you will be supporting a team of analysts and data scientists who interact directly with customers. Together with them, you will translate the voice of these customers into best-in-class data labs.
So, what will you be doing all day?
Building and maintaining our big-data pipelines
Take a major part in designing and implementing complex high-scale systems using a large variety of technologies
Be part of a team with smart and motivated engineers, and data scientists, to collaborate on the planning, development, and maintenance of our products
Implement solutions in the AWS cloud environment, and work in Databricks with PySpark.
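A minimal sketch of the "raw data into usable metrics" step this posting describes: computing daily active users from an event log. Field names are invented, and a production version of this aggregation would run in Databricks with PySpark rather than plain Python.

```python
# Raw events -> usable metric: daily active users (DAU), deduplicating
# repeat visits by the same user within a day.
from collections import defaultdict

def daily_active_users(events):
    users_by_day = defaultdict(set)
    for ev in events:
        users_by_day[ev["day"]].add(ev["user_id"])  # set dedups repeats
    return {day: len(users) for day, users in sorted(users_by_day.items())}

events = [
    {"day": "2024-01-01", "user_id": "u1"},
    {"day": "2024-01-01", "user_id": "u1"},  # repeat visit, counted once
    {"day": "2024-01-01", "user_id": "u2"},
    {"day": "2024-01-02", "user_id": "u2"},
]
print(daily_active_users(events))  # {'2024-01-01': 2, '2024-01-02': 1}
```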
Requirements:
This is the perfect job for someone who:

Holds a BSc degree in Computer Science or equivalent practical experience.
You love building robust, fault-tolerant, and scalable systems and products
You are a go-getter and a team player with a sense of ownership.
Has at least 3+ years of server-side software development experience in one or more general-purpose programming languages (C#, Go, Python, etc.)
Experience building large-scale web APIs: advantage for working with Microservices architecture, AWS, and databases (Redis, PostgreSQL, Firebolt)
Familiarity with Big Data technologies: A familiarity with Spark, Databricks, and Airflow is a big advantage.
Worked in a cloud environment such as AWS or GCP, and is familiar with its different services.
Familiarity with ML pipelines and applications
Familiarity with LLM tools and frameworks.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an experienced and passionate Data Group Tech Lead, Staff Engineer to join our Data Platform group. As the group's Tech Lead, you'll shape and implement the technical vision and architecture while staying hands-on across three specialized teams: Data Engineering Infra, Machine Learning Platform, and Data Warehouse Engineering, which together form the backbone of our data ecosystem.
The group's mission is to build a state-of-the-art Data Platform that drives us toward becoming the most precise and efficient insurance company on the planet. By embracing Data Mesh principles, we create tools that empower teams to own their data while leveraging a robust, self-serve data infrastructure. This approach enables Data Scientists, Analysts, Backend Engineers, and other stakeholders to seamlessly access, analyze, and innovate with reliable, well-modeled, and queryable data, at scale.
We believe three things matter for every role: drive to push through challenges, efficiency that keeps standards high while moving fast, and adaptability that lets you pivot with data and AI insights. These aren't buzzwords, they're how we actually work.
Our AI-first approach isn't just a tagline either. We're building the future of insurance with AI at the center, and we need people who are genuinely excited to learn and grow alongside these tools.
In this role you'll:
Technically lead the group by shaping the architecture, guiding design decisions, and ensuring the technical excellence of the Data Platforms three teams
Design and implement data solutions that address both applicative needs and data analysis requirements, creating scalable and efficient access to actionable insights
Drive initiatives in Data Engineering Infra, including building robust ingestion layers, managing streaming ETLs, and guaranteeing data quality, compliance, and platform performance
Develop and maintain the Data Warehouse, integrating data from various sources for optimized querying, analysis, and persistence, supporting informed decision-making
Leverage data modeling and transformations to structure, cleanse, and integrate data, enabling efficient retrieval and strategic insights
Build and enhance the Machine Learning Platform, delivering infrastructure and tools that streamline the work of Data Scientists, enabling them to focus on developing models while benefiting from automation for production deployment, maintenance, and improvements. Support cutting-edge use cases like feature stores, real-time models, point-in-time (PIT) data retrieval, and telematics-based solutions
Collaborate closely with other Staff Engineers across to align on cross-organizational initiatives and technical strategies
Work seamlessly with Data Engineers, Data Scientists, Analysts, Backend Engineers, and Product Managers to deliver impactful solutions
Share knowledge, mentor team members, and champion engineering standards and technical excellence across the organization.
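The point-in-time (PIT) retrieval mentioned above can be sketched with a stdlib `bisect` lookup over a sorted feature history; the feature name and timestamps are invented for illustration.

```python
# Point-in-time feature retrieval: for a label timestamp, return the latest
# feature value observed AT OR BEFORE that time, so training never leaks
# future data into the features.
import bisect

def pit_lookup(history, ts):
    """history: sorted list of (timestamp, value); ts: label timestamp."""
    times = [t for t, _ in history]
    i = bisect.bisect_right(times, ts)
    return history[i - 1][1] if i else None  # None: no value known yet

# Hypothetical feature history for one entity.
credit_score = [(100, 640), (200, 655), (300, 700)]
print(pit_lookup(credit_score, 250))  # 655 (value at t=200, not future 700)
print(pit_lookup(credit_score, 50))   # None (feature not yet observed)
```

Feature stores implement exactly this semantics at scale; the sketch shows why `bisect_right` (not `bisect_left`) is the correct boundary: a value observed exactly at the label timestamp is still usable.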
Requirements:
8+ years of experience in data-related roles such as Data Engineer, Data Infrastructure Engineer, BI Engineer, or Machine Learning Platform Engineer, with significant experience in at least two of these areas
A B.Sc. in Computer Science or a related technical field (or equivalent experience)
Extensive expertise in designing and implementing Data Lakes and Data Warehouses, including strong skills in data modeling and building scalable storage solutions
Proven experience in building large-scale data infrastructures, including both batch processing and streaming pipelines
A deep understanding of Machine Learning infrastructure, including tools and frameworks that enable Data Scientists to efficiently develop, deploy, and maintain models in production (an advantage)
Proficiency in Python, Pulumi/Terraform, Apache Spark, AWS, Kubernetes (K8s), and Kafka for building scalable, reliable, and high-performing data solutions
Strong knowledge of databases, including SQL (schema design, query optimization) and NoSQL. This position is open to both women and men alike.
 
Job ID: 8594845
Confidential Company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Data Warehouse Tech Lead to drive the technical vision and execution of the data infrastructure that powers decision-making across the company.
You'll lead both the technology and the business coordination for our data warehouse - architecting scalable solutions while working closely with stakeholders and data providers to ensure our platform serves the entire organization's needs. This role combines deep technical leadership with strategic business partnership as we build our next-generation data stack.
We believe three things matter for every role: drive to push through challenges, efficiency that keeps standards high while moving fast, and adaptability that lets you pivot with data and AI insights. These aren't buzzwords; they're how we actually work.
Our AI-first approach isn't just a tagline either. We're building the future of insurance with AI at the center, and we need people who are genuinely excited to learn and grow alongside these tools.
In this role you'll:
Lead technical architecture - design and develop scalable data warehouse solutions that support multiple products and serve the entire organization's analytics needs
Manage the technical roadmap - set strategy and guide execution for the Data Warehouse team, ensuring our platform evolves with business requirements
Drive business process coordination - translate business needs into technical requirements while establishing clear data contracts with R&D, Analytics, and external data providers
Establish and implement best practices - set technical standards for data warehouse architecture, performance tuning, and development methodologies that guide the entire team's approach to building scalable data solutions
Create and maintain sustainable data pipelines - build resilient systems capable of handling unstructured data and managing an evolving schema registry across diverse data sources
Implement advanced data modeling - create robust data structures using methodologies like dimensional modeling, and optimize ETL/ELT processes for our semantic layer
Establish data quality standards - build processes for schema evaluation, anomaly detection, and monitoring data completeness and freshness across all sources
Lead cross-team collaboration - work directly with Data Engineers, ML Platform Engineers, Data Scientists, Analysts, and Product Managers to align technical solutions with business goals
Requirements:
7+ years as a BI Engineer or Data Engineer, with 2+ in a technical leadership or architect role
Proven experience managing complex data warehouses that serve multiple products and entire organizations
Strong expertise in data modeling, ELT development, and data warehouse methodologies
Advanced SQL skills and hands-on experience with Snowflake or similar cloud-native data warehouse platforms
Extensive experience with dbt for data transformation and modeling
Python and software development experience (a strong plus)
Excellent communication skills - you can mentor technical team members and explain complex data concepts to business stakeholders
Ready to work in an office environment most days of the week
Enthusiasm about learning and adapting to the exciting world of AI - a commitment to exploring this field is a fundamental part of our culture.
This position is open to all candidates.
 
Job ID: 8594850
29/03/2026
Confidential Company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Data Engineer to help build and scale our analytics data infrastructure. In this role, you will work closely with analysts and business stakeholders to design reliable data models and support the development of a centralized semantic layer used across the company.

You will play a key role in improving the structure, reliability, and usability of our data stack. This includes building and maintaining dbt models, supporting data pipelines, and ensuring analysts have access to clean, well-documented, and consistent data.

This role is ideal for someone who enjoys working at the intersection of data engineering and analytics - translating business needs into scalable data models and enabling teams to move faster with trusted data.

Responsibilities

Design and implement data models that support analytics across key business domains such as GTM, CX, and Finance
Build and maintain transformation workflows using dbt
Work closely with analysts to translate business questions into scalable and reusable data models
Help define and implement a structured semantic layer that enables consistent metrics across the company
Improve the reliability and clarity of the analytics data stack by centralizing logic into well-designed data models
Support the ingestion and transformation of data from various sources using tools such as Fivetran and Airbyte
Contribute to improving data quality, monitoring, and documentation practices
Help establish best practices for analytics modeling and data usage across teams
Actively leverage AI tools (e.g. Cursor, LLM-based assistants) to improve development speed, data modeling, and data workflows
Requirements:
2-4 years of experience in BI/data engineering, analytics engineering, or a similar role.
Strong SQL skills and experience working with modern data warehouses.
Experience building and maintaining data models for analytics.
Familiarity with modern data stack tools such as dbt, Snowflake/BigQuery, Fivetran/Rivery, or similar.
Experience collaborating with analysts or BI teams.
Familiarity with Python for data-related tasks (scripting, automation, or tooling).
Hands-on experience using AI tools (e.g. Cursor, LLMs) as part of day-to-day development workflows.
Strong problem-solving skills and the ability to work in evolving data environments.
Clear communicator who can work effectively with both technical and non-technical stakeholders.
This position is open to all candidates.
 
Job ID: 8595374