Confidential company
Location: Tel Aviv-Yafo
We are looking for a highly enthusiastic Data Engineer to join our group on the journey of leveraging Big Data to revolutionize our offering, business operations and decision making across the company's ecosystem.
Job Responsibilities:
Design, build & deploy backend data solutions to prod, starting from research and design to development and testing.
Work closely with data engineers, product, architects and other R&D teams to deliver the best solutions to the business.
Monitor the solutions in production to make sure they are fully stable, scalable and performant at all times.
Work closely with data science experts.
Requirements:
At least 3 years of proven hands-on experience with big data solutions and frameworks in production (Spark, Flink) - mandatory
Proven ability to write complex SQL queries - mandatory
Strong analytical and problem-solving skills with attention to detail - mandatory
Production-grade experience writing Spark applications using Scala or Java - mandatory
Experience with Apache Airflow and AWS tools (EMR, Glue, Athena) - a big advantage
Solid knowledge in Python and Linux operating systems - a big advantage
Experience with Clickhouse - a big advantage
Familiarity with the Ad Tech industry and RTB - an advantage
Extensive experience in Functional programming, Unit testing/TDD, continuous Deployment - an advantage
Familiarity with NodeJS - an advantage
Fluent verbal and written English skills required
This position is open to all candidates.
 
30/03/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We are looking for a talented Data Engineer to join our analytics team in the Big Data Platform group.
Job Id: 25380
You will support our product and business data initiatives, expand our data warehouse, and optimize our data pipeline architecture with an AI-first attitude.
The ideal candidate is experienced in leveraging AI tools as part of modern data pipeline development, enabling scalable solutions, accelerating delivery, and continuously exploring new approaches and technologies.
The right candidate is excited by the prospect of building the data architecture for the next generation of products and data initiatives.
This is a unique opportunity to join a team of outstanding people making a big impact.
We work on multiple products in many domains to deliver truly innovative solutions in the Cyber Security and Big Data realm.
This role requires the ability to collaborate closely with both R&D teams and business stakeholders, to understand their needs and translate them into robust and scalable data solutions.
Key Responsibilities
Maintain and develop enterprise-grade Data Warehouse and Data Lake environments
Create data infrastructure for various R&D groups across the organization to support product development and optimization
Work with data experts to assist with technical data-related issues and support infrastructure needs
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for scalability
Build and maintain robust ETL/ELT pipelines for data ingestion, transformation, and delivery across various systems
Incorporate AI-assisted tools into data pipeline design, development, and optimization to improve efficiency, scalability, and innovation
Requirements:
B.Sc. in Engineering or a related field
3+ years of experience as a Data Engineer working on production systems
Advanced SQL knowledge and experience with relational databases
Proven experience using Python
Hands-on experience building, optimizing, and automating data pipelines, architectures, and data sets
Experience in creating and maintaining ETL/ELT processes
Strong project management and organizational skills
Strong collaboration skills with both technical (R&D) and non-technical (business) teams
Experience using AI tools as part of the data engineering workflow, with a mindset of experimentation, working at scale, and exploring new technologies
Advantage: Azure data services, Databricks, EventHub, and Spark.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an experienced and passionate Data Group Tech Lead, Staff Engineer to join our Data Platform group. As the Group's Tech Lead, you'll shape and implement the technical vision and architecture while staying hands-on across three specialized teams: Data Engineering Infra, Machine Learning Platform, and Data Warehouse Engineering, forming the backbone of our data ecosystem.
The group's mission is to build a state-of-the-art Data Platform that drives us toward becoming the most precise and efficient insurance company on the planet. By embracing Data Mesh principles, we create tools that empower teams to own their data while leveraging a robust, self-serve data infrastructure. This approach enables Data Scientists, Analysts, Backend Engineers, and other stakeholders to seamlessly access, analyze, and innovate with reliable, well-modeled, and queryable data, at scale.
We believe three things matter for every role: drive to push through challenges, efficiency that keeps standards high while moving fast, and adaptability that lets you pivot with data and AI insights. These aren't buzzwords, they're how we actually work.
Our AI-first approach isn't just a tagline either. We're building the future of insurance with AI at the center, and we need people who are genuinely excited to learn and grow alongside these tools.
In this role you'll:
Technically lead the group by shaping the architecture, guiding design decisions, and ensuring the technical excellence of the Data Platforms three teams
Design and implement data solutions that address both applicative needs and data analysis requirements, creating scalable and efficient access to actionable insights
Drive initiatives in Data Engineering Infra, including building robust ingestion layers, managing streaming ETLs, and guaranteeing data quality, compliance, and platform performance
Develop and maintain the Data Warehouse, integrating data from various sources for optimized querying, analysis, and persistence, supporting informed decision-making
Leverage data modeling and transformations to structure, cleanse, and integrate data, enabling efficient retrieval and strategic insights
Build and enhance the Machine Learning Platform, delivering infrastructure and tools that streamline the work of Data Scientists, enabling them to focus on developing models while benefiting from automation for production deployment, maintenance, and improvements. Support cutting-edge use cases like feature stores, real-time models, point-in-time (PIT) data retrieval, and telematics-based solutions
Collaborate closely with other Staff Engineers across the organization to align on cross-organizational initiatives and technical strategies
Work seamlessly with Data Engineers, Data Scientists, Analysts, Backend Engineers, and Product Managers to deliver impactful solutions
Share knowledge, mentor team members, and champion engineering standards and technical excellence across the organization.
Requirements:
8+ years of experience in data-related roles such as Data Engineer, Data Infrastructure Engineer, BI Engineer, or Machine Learning Platform Engineer, with significant experience in at least two of these areas
A B.Sc. in Computer Science or a related technical field (or equivalent experience)
Extensive expertise in designing and implementing Data Lakes and Data Warehouses, including strong skills in data modeling and building scalable storage solutions
Proven experience in building large-scale data infrastructures, including both batch processing and streaming pipelines
A deep understanding of Machine Learning infrastructure, including tools and frameworks that enable Data Scientists to efficiently develop, deploy, and maintain models in production, an advantage
Proficiency in Python, Pulumi/Terraform, Apache Spark, AWS, Kubernetes (K8s), and Kafka for building scalable, reliable, and high-performing data solutions
Strong knowledge of databases, including SQL (schema design, query optimization) and NoSQL
This position is open to all candidates.
 
01/04/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are where high-growth startups turn when they need to move faster, scale smarter, and make the most of the cloud. As an AWS Premier Partner and Strategic Partner, we deliver hands-on DevOps, FinOps, and GenAI support that drives real results.
We work across EMEA and the US, fueling innovation and solving complex challenges daily. Join us to grow your skills, shape bold ideas, and help build the future of tech.
We're looking for a Senior Data Architect to help shape how high-growth startups build and scale on AWS. In this role, you'll design and deliver end-to-end data and analytics solutions - from architecture and pipelines to visualization and insights - guiding customers from concept through production. You'll work closely with startup founders, technical leaders, and account executives to create scalable, cost-efficient architectures that drive real business impact.
Work location - hybrid from Tel Aviv
If you are interested in this opportunity, please submit your CV in English.
Key Responsibilities
Design, develop, and implement data & analytics solutions to meet business requirements and create cost-efficient, highly available, and scalable customer solutions, including Well-Architected reviews and SoW.
Research and analyze current solutions and initiate improvement plans.
Collaborate with other engineers and stakeholders to ensure solutions are designed and developed according to best practices.
Lead workshops, POCs, and architecture reviews with startup customers, conferences, webinars, and more.
Stay up to date on Data Engineering and Analytics trends and contribute to internal enablement.
Frequent travel - locally (on demand, to meet with customers and partners and attend local events) and abroad (at least once a quarter).
Requirements:
3+ years of hands-on experience in AWS, including solution design, migration, and maintenance
2+ years in customer-facing technical roles (e.g., SRE, Cloud Architect, Customer Engineer)
Production experience with AWS infrastructure, data services, and real-time data processing
Proficiency in a wide range of AWS services (e.g., EC2, S3, RDS, Lambda, IAM, VPC, CloudFormation, DynamoDB)
Skilled in AWS analytics tools (Glue, Athena, Redshift, EMR, Kinesis, MSK, QuickSight, dbt)
Understanding of information security best practices
Strong verbal and written communication in English and local language
Ability to lead end-to-end technical engagements and work in fast-paced environments
AWS Solutions Architect - Associate certification
Experience with Iceberg - an advantage
Experience with Kubernetes, CI/CD, and DevOps tools - an advantage
Experience with ETL processes, data lakes, and pipelines - an advantage
Experience writing SOWs, HLDs, and effort estimates - an advantage
AWS Professional or Data Analytics/Data Engineer certifications - an advantage.
This position is open to all candidates.
 
23/03/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a Senior Backend Engineer - Data Platform to join our expanding team and play a crucial role in designing, building, and maintaining robust and scalable data pipelines and infrastructure. In this role, you will directly enable data-driven decision-making and support the development and deployment of AI/ML products that power Health.

You'll collaborate closely with engineering, product, and data science teams to ensure our data systems are high-quality, resilient, and scalable as we grow. As a Senior Backend Engineer on our Data Platform team, you will drive efforts to deliver reliable, efficient, and consistent data services across the organization. You will also help enable the rapid development and deployment of advanced features, insights, and AI-driven capabilities that improve outcomes for clinicians and clients.

Who are you?
You are a seasoned backend or data engineer with experience working on production-grade ML/AI-powered products. You thrive in fast-paced, high-ownership environments and are passionate about building scalable and reliable systems. You understand the unique requirements of delivering AI/ML features in production, and you are comfortable working with modern technologies in the LLM/RAG ecosystem.
You pride yourself on delivering high-quality solutions quickly, without sacrificing design or reliability. You're known for your responsiveness, collaborative spirit, and service-oriented mindset, especially when you're on-call and the stakes are high.
How will you contribute?
Design, implement, and maintain scalable and reliable data pipelines and backend systems supporting both operational and analytical needs, with a focus on ML/AI product enablement.
Ensure data processing is optimized for speed, efficiency, and fault tolerance, enabling seamless integration with AI/ML workflows and reliable performance across all our Health products.
Monitor and improve uptime, reliability, and observability of our data infrastructure and pipelines.
Build and maintain systems to ensure data quality, consistency, and usability across the organization, enabling advanced analytics and AI solutions.
Work closely with product and engineering teams to deliver new features rapidly and with a high standard of technical excellence.
Drive innovation in how we build, measure, and optimize data features, backend services, and AI product integrations.
Participate in on-call rotations with a service-oriented approach and fast responsiveness.
Lead scalability efforts to support increasing data volumes, expanding AI/ML initiatives, and new product launches.
Requirements:
What qualifications and skills will help you to be successful?
At least 5 years of experience with Python in backend or data engineering roles, designing and operating large-scale data pipelines, backend services, and data infrastructure in production environments.
Hands-on experience working on ML/AI-powered products in production, with strong understanding of requirements for integrating data platforms with AI features.
Familiarity with modern LLM (Large Language Model) and RAG (Retrieval-Augmented Generation) technologies, and experience supporting their deployment or integration.
Familiar with or have worked with these technologies (or alternatives):
Data Processing & Streaming: Apache Spark, DBT, Airflow, Airbyte, Kafka
API Development: FastAPI, micro-service architecture, SFTP
Data Storage: Data Lakehouse architectures, Apache Iceberg, Vector Databases, RDS
ML/AI: ML/LLM libraries and frameworks (such as Gemini, Hugging Face, etc.)
Cloud Infrastructure: AWS stack (S3, Firehose, Lambda, Athena, etc.), Kubernetes (K8s)
Demonstrated ability to optimize performance and ensure high availability, scalability, and reliability of backend/data systems.
Strong foundation in best practices for data quality, governance, security, and observability.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
This role has been designed as Hybrid with an expectation that you will work on average 2 days per week from an office.

We are looking for a talented Data Engineer to help build and enhance the data platform that supports analytics, operations, and data-driven decision-making across the organization. You will work hands-on to develop scalable data pipelines, improve data models, ensure data quality, and contribute to the continuous evolution of our modern data ecosystem.

You'll collaborate closely with Senior Engineers, Analysts, Data Scientists, and stakeholders across the business to deliver reliable, well-structured, and well-governed data solutions.


What You'll Do:

Engineering & Delivery

Build, maintain, and optimize data pipelines for batch and streaming workloads.

Develop reliable data models and transformations to support analytics, reporting, and operational use cases.

Integrate new data sources, APIs, and event streams into the platform.

Implement data quality checks, testing, documentation, and monitoring.

Write clean, performant SQL and Python code.

Contribute to improving performance, scalability, and cost-efficiency across the data platform.

Collaboration & Teamwork

Work closely with senior engineers to implement architectural patterns and best practices.

Collaborate with analysts and data scientists to translate requirements into technical solutions.

Participate in code reviews, design discussions, and continuous improvement initiatives.

Help maintain clear documentation of data flows, models, and processes.

Platform & Process

Support the adoption and roll-out of new data tools, standards, and workflows.

Contribute to DataOps processes such as CI/CD, testing, and automation.

Assist in monitoring pipeline health and resolving data-related issues.
Requirements:
What We're Looking For

2-5+ years of experience as a Data Engineer or similar role.

Hands-on experience with Snowflake (mandatory), including SQL, modeling, and basic optimization.

Experience with dbt (or similar): model development, tests, documentation, and version control workflows.

Strong SQL skills for data modeling and analysis.

Proficiency with Python for pipeline development and automation.

Experience working with orchestration tools (Airflow, Dagster, Prefect, or equivalent).

Understanding of ETL/ELT design patterns, data lifecycle, and data modeling best practices.

Familiarity with cloud environments (AWS, GCP, or Azure).

Knowledge of data quality, observability, or monitoring concepts.

Good communication skills and the ability to collaborate with cross-functional teams.


Nice to Have:

Exposure to streaming/event technologies (Kafka, Kinesis, Pub/Sub).

Experience with data governance or cataloging tools.

Basic understanding of ML workflows or MLOps concepts.

Experience with infrastructure-as-code tools (Terraform, CloudFormation).

Familiarity with testing frameworks or data validation tools.

Additional Skills:

Cloud Architectures, Cross Domain Knowledge, Design Thinking, Development Fundamentals, DevOps, Distributed Computing, Microservices Fluency, Full Stack Development, Security-First Mindset, User Experience (UX).
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We are looking for a Senior Data Engineer to join our Platform group in the Data Infrastructure team.
You'll work hands-on to design and deliver data pipelines, distributed storage, and streaming services that keep our data platform performant and reliable. As a senior individual contributor, you will lead complex projects within the team, raise the bar on engineering best practices, and mentor mid-level engineers - while collaborating closely with product, DevOps and analytics stakeholders.
About the Platform group
The Platform Group accelerates our productivity by providing developers with tools, frameworks, and infrastructure services. We design, build, and maintain critical production systems, ensuring our platform can scale reliably. We also introduce new engineering capabilities to enhance our development process. As part of this group, youll help shape the technical foundation that supports our entire engineering team.
Job responsibilities:
Code & ship production-grade services, pipelines and data models that meet performance, reliability and security goals
Lead design and delivery of team-level projects - from RFC through rollout and operational hand-off
Improve system observability, testing and incident response processes for the data stack
Partner with Staff Engineers and Tech Leads on architecture reviews and platform-wide standards
Mentor junior and mid-level engineers, fostering a culture of quality, ownership and continuous improvement
Stay current with evolving data-engineering tools and bring pragmatic innovations into the team.
Requirements:
5+ years of hands-on experience in backend or data engineering, including 2+ years at a senior level delivering production systems
Strong coding skills in Python, Kotlin, Java or Scala with emphasis on clean, testable, production-ready code
Proven track record designing, building and operating distributed data pipelines and storage (batch or streaming)
Deep experience with relational databases (PostgreSQL preferred) and working knowledge of at least one NoSQL or columnar/analytical store (e.g. SingleStore, ClickHouse, Redshift, BigQuery)
Solid hands-on experience with event-streaming platforms such as Apache Kafka
Familiarity with data-orchestration frameworks such as Airflow
Comfortable with modern CI/CD, observability and infrastructure-as-code practices in a cloud environment (AWS, GCP or Azure)
Ability to break down complex problems, communicate trade-offs clearly, and collaborate effectively with engineers and product partners
Bonus Skills:
Experience building data governance or security/compliance-aware data platforms
Familiarity with Kubernetes, Docker, and infrastructure-as-code tools
Experience with data quality frameworks, lineage, or metadata tooling
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Realize your potential by joining the leading performance-driven advertising company!
As a Senior Algo Data Engineer in the Infra group, you'll play a vital role in developing, enhancing and maintaining highly scalable Machine-Learning infrastructures and tools.
How youll make an impact:
As a Senior Algo Data Engineer, you'll bring value by:
Develop, enhance and maintain highly scalable Machine-Learning infrastructures and tools, including CI/CD, monitoring and alerting and more
Have end-to-end ownership: design, develop, deploy, measure and maintain our machine learning platform, ensuring high availability, high scalability and efficient resource utilization
Identify and evaluate new technologies to improve performance, maintainability, and reliability of our machine learning systems
Work in tandem with the engineering-focused and algorithm-focused teams in order to improve our platform and optimize performance
Optimize machine learning systems to scale and utilize modern compute environments (e.g. distributed clusters, CPU and GPU) and continuously seek potential optimization opportunities.
Build and maintain tools for automation, deployment, monitoring, and operations.
Troubleshoot issues in our development, production and test environments
Influence directly on the way billions of people discover the internet.
Requirements:
To thrive in this role, youll need:
Experience developing large scale systems. Experience with filesystems, server architectures, distributed systems, SQL and No-SQL. Experience with Spark and Airflow / other orchestration platforms is a big plus.
Highly skilled in software engineering methods. 5+ years experience.
Passion for ML engineering and for creating and improving platforms
Experience with designing and supporting ML pipelines and models in production environment
Excellent coding skills - in Java & Python
Experience with TensorFlow - a big plus
Possess strong problem solving and critical thinking skills
BSc in Computer Science or related field.
Proven ability to work effectively and independently across multiple teams and beyond organizational boundaries
Deep understanding of strong Computer Science fundamentals: object-oriented design, data structures, systems and applications programming, and multithreaded programming
Strong communication skills to be able to present insights and ideas, and excellent English, required to communicate with our global teams.
Bonus points if you have:
Experience in leading Algorithms projects or teams.
Experience in developing models using deep learning techniques and tools
Experience in developing software within a distributed computation framework.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an experienced Data Engineer to join our Data Warehouse team in TLV.
In this role, you will play a pivotal role in the Data Platform organization, leading the design, development, and maintenance of our data warehouse. In your day-to-day, you'll work on data models and Backend BI solutions that empower stakeholders across the company and contribute to informed decision-making processes, all while leveraging your extensive experience in business intelligence.
This is an excellent opportunity to be part of establishing our state-of-the-art data stack, implementing cutting-edge technologies in a cloud environment.
We believe three things matter for every role: drive to push through challenges, efficiency that keeps standards high while moving fast, and adaptability that lets you pivot with data and AI insights. These aren't buzzwords, they're how we actually work.
Our AI-first approach isn't just a tagline either. We're building the future of insurance with AI at the center, and we need people who are genuinely excited to learn and grow alongside these tools.
In this role you'll:
Lead the design and development of scalable and efficient data warehouse and BI solutions that align with organizational goals and requirements
Utilize advanced data modeling techniques to create robust data structures supporting reporting and analytics needs
Implement ETL/ELT processes to assist in the extraction, transformation, and loading of data from various sources into the semantic layer
Develop processes to enforce schema evaluation, cover anomaly detection, and monitor data completeness and freshness
Identify and address performance bottlenecks within our data warehouse, optimize queries and processes, and enhance data retrieval efficiency
Implement best practices for data warehouse and database performance tuning
Conduct thorough testing of data applications and implement robust validation processes
Collaborate with Data Infra Engineers, Developers, ML Platform Engineers, Data Scientists, Analysts, and Product Managers
Requirements:
3+ years of experience as a BI Engineer or Data Engineer
Proficiency in data modeling, ELT development, and DWH methodologies
SQL expertise and experience working with Snowflake or similar technologies
Prior experience working with DBT
Experience with Python and software development, an advantage
Excellent communication and collaboration skills
Ready to work in an office environment most days of the week
Enthusiasm about learning and adapting to the exciting world of AI - a commitment to exploring this field is a fundamental part of our culture.
This position is open to all candidates.
 
30/03/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Required ML Data Engineer
Israel: Tel Aviv/ Hybrid (Israel)
R&D | Full Time | Job Id: 24792
Key Responsibilities
Your Impact & Responsibilities:
As a Data Engineer - AI Technologies, you will be responsible for building and operating the data foundation that enables our LLM and ML research: from ingestion and augmentation, through labeling and quality control, to efficient data delivery for training and evaluation.
You will:
Own data pipelines for LLM training and evaluation
Design, build and maintain scalable pipelines to ingest, transform and serve large-scale text, log, code and semi-structured data from multiple products and internal systems.
Drive data augmentation and synthetic data generation
Implement and operate pipelines for data augmentation (e.g., prompt-based generation, paraphrasing, negative sampling, multi-positive pairs) in close collaboration with ML Research Engineers.
Build tagging, labeling and annotation workflows
Support human-in-the-loop labeling, active learning loops and semi-automated tagging. Work with domain experts to implement tools, schemas and processes for consistent, high-quality annotations.
Ensure data quality, observability and governance
Define and monitor data quality checks (coverage, drift, anomalies, duplicates, PII), manage dataset versions, and maintain clear documentation and lineage for training and evaluation datasets.
Optimize training data flows for efficiency and cost
Design storage layouts and access patterns that reduce training time and cost (e.g., sharding, caching, streaming). Work with ML engineers to make sure the right data arrives at the right place, in the right format.
Build and maintain data infrastructure for LLM workloads
Work with cloud and platform teams to develop robust, production-grade infrastructure: data lakes / warehouses, feature stores, vector stores, and high-throughput data services used by training jobs and offline evaluation.
Collaborate closely with ML Research Engineers and security experts
Translate modeling and security requirements into concrete data tasks: dataset design, splits, sampling strategies, and evaluation data construction for specific security use.
Requirements:
3+ years of hands-on experience as a Data Engineer or ML/Data Engineer, ideally in a product or platform team.
Strong programming skills in Python and experience with at least one additional language commonly used for data / backend (e.g., SQL, Scala, or Java).
Solid experience building ETL / ELT pipelines and batch/stream processing using tools such as Spark, Beam, Flink, Kafka, Airflow, Argo, or similar.
Experience working with cloud data platforms (e.g., AWS, GCP, Azure) and modern data storage technologies (object stores, data warehouses, data lakes).
Good understanding of data modeling, schema design, partitioning strategies and performance optimization for large datasets.
Familiarity with ML / LLM workflows: train/validation/test splits, dataset versioning, and the basics of model training and evaluation (you don't need to be the primary model researcher, but you understand what the models need from the data).
Strong software engineering practices: version control, code review, testing, CI/CD, and documentation.

Ability to work independently and in collaboration with ML engineers, researchers and security experts, and to translate high-level requirements into concrete data engineering tasks. 
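Two of the workflow basics mentioned in the requirements, stable splits and dataset versioning, often go together: hashing each example ID into a split keeps evaluation sets reproducible as the dataset grows across versions. A sketch under those assumptions:

```python
import hashlib

def assign_split(example_id, val_frac=0.1, test_frac=0.1):
    """Deterministic train/validation/test assignment by hashing the ID.

    The same example always lands in the same split, so eval sets stay
    stable between dataset versions (no leakage when new data arrives).
    """
    h = int(hashlib.md5(example_id.encode()).hexdigest(), 16) % 10_000
    if h < test_frac * 10_000:
        return "test"
    if h < (test_frac + val_frac) * 10_000:
        return "validation"
    return "train"
```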
Nice to Have:
Experience supporting LLM or NLP workloads, including dataset construction for pre-training / fine-tuning, or retrieval-augmented generation (RAG) pipelines. 
Familiarity with ML tooling such as experiment tracking (e.g., Weights & Biases, MLflow) and ML-focused data tooling (feature stores, vector databases). 
Background in security / cyber domains (logs, alerts, incidents, SOC workflows) or other high-volume, high-variance data environments. 
This position is open to all candidates.
 
8597480
29/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an experienced and hands-on Data Engineer to lead the migration of enterprise data platforms to Google Cloud Platform (GCP).
In this role, you will design, build and maintain scalable ETL/ELT pipelines, develop advanced data models in BigQuery and contribute to the creation of a high-performance, reliable and cost-efficient data architecture.
You will work closely with analysts, data scientists and engineers and have real impact on how data is consumed across the organization.
What You Will Do:
Lead the migration of data from on-premise core systems to Google Cloud Platform (GCP).
Design and develop processed data layers (Silver and Gold) and data marts in BigQuery, including complex business logic.
Build, orchestrate and maintain data pipelines using Cloud Composer / Apache Airflow.
Develop robust data transformations, including cleansing, enrichment and data quality improvements.
Write efficient and optimized SQL queries in BigQuery with strong focus on performance and cost.
Create and maintain clear and up-to-date technical documentation for data architecture and processes.
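A Gold-layer data mart of the kind described above is often just a partitioned `CREATE TABLE AS SELECT` over a Silver table. The sketch below builds such a statement as a Python string; project, dataset, table, and column names are all hypothetical, and in Cloud Composer this SQL would typically be handed to a BigQuery operator task.

```python
def gold_orders_sql(project="my-project", dataset_silver="silver",
                    dataset_gold="gold"):
    """Build SQL for a Gold-layer daily revenue mart from a Silver table.

    Partitioning by order_date keeps downstream queries cheap; the WHERE
    clause applies a simple cleansing rule (completed orders only).
    """
    return f"""
    CREATE OR REPLACE TABLE `{project}.{dataset_gold}.daily_revenue`
    PARTITION BY order_date AS
    SELECT
      order_date,
      country,
      COUNT(*)        AS orders,
      SUM(net_amount) AS revenue
    FROM `{project}.{dataset_silver}.orders`
    WHERE status = 'completed'
    GROUP BY order_date, country
    """
```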
Requirements:
3+ years of hands-on experience as a Data Engineer.
Strong experience working with Google Cloud Platform (GCP) - mandatory.
Proven experience with BigQuery, including data modeling, complex SQL and performance optimization - mandatory.
Strong Python skills for ETL/ELT and data transformations.
Experience with orchestration and workflow management tools such as Cloud Composer, Apache Airflow or similar.
Experience working with Cloud Storage (GCS) and additional GCP data services such as Cloud SQL, as well as data lake and storage solutions.
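On the cost side, a common BigQuery guardrail is refusing to run queries that scan a partitioned table without a partition filter. The sketch below is a crude string check (a real implementation would parse the SQL or use a dry run to inspect bytes processed), and the partition column name is illustrative.

```python
def check_partition_filter(sql, partition_col="event_date"):
    """Guardrail: reject queries that scan without a partition filter.

    Splits on the first WHERE and looks for the partition column in the
    predicate; raises if the query would trigger a full-table scan.
    """
    parts = sql.lower().split("where", 1)
    has_filter = len(parts) == 2 and partition_col in parts[1]
    if not has_filter:
        raise ValueError(f"query must filter on partition column {partition_col!r}")
    return sql
```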
Nice to Have:
Experience with GCP streaming technologies such as Cloud Pub/Sub and Dataflow.
Familiarity with Git and CI/CD processes.
Previous experience migrating data from legacy systems such as Mainframe or Oracle to the cloud.
Personal Skills:
Ability to work independently and lead projects end-to-end.
Proactive mindset with strong technical curiosity and continuous learning attitude.
Strong collaboration skills and ability to work with cross-functional teams.
This position is open to all candidates.
 
8595873