5 days
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Data Engineer - AI Infra Group
The Job
We are on an expedition to find you, someone who is passionate about creating intuitive, out-of-this-world data platforms. You'll architect and ship our streaming lakehouse and data platform, turning billions of raw threat signals into high-impact, self-serve insights that protect countries in real time, all while building on top-of-the-line technologies such as Iceberg, Flink, Paimon, Fluss, LanceDB, ClickHouse, and more.
Responsibilities
Design and maintain agentic data pipelines that adapt dynamically to new sources, schemas, and AI-driven tasks
Build self-serve data systems that allow teams to explore, transform, and analyze data with minimal engineering effort
Develop modular, event-based pipelines across AWS environments, combining cloud flexibility with custom open frameworks
Automate ingestion, enrichment, and fusion of cybersecurity data including logs, configs, and CTI streams
Collaborate closely with AI engineers and researchers to operationalize LLM and agent pipelines within the CLM ecosystem
Implement observability, lineage, and data validation to ensure reliability and traceability
Scale systems to handle complex, high-volume data while maintaining adaptability and performance
Own the data layer end-to-end including architecture, documentation, and governance.
Requirements:
5+ years of experience building large-scale distributed systems or platforms, preferably in ML or data-intensive environments
Proficiency in Python with strong software engineering practices, familiarity with data structures and design patterns
Deep understanding of orchestration systems (e.g., Kubernetes, Argo) and distributed computing frameworks (e.g., Ray, Spark)
Experience with GPU compute infrastructure, containerization (Docker), and cloud-native architectures
Proven track record of delivering production-grade infrastructure or developer platforms
Solid grasp of ML workflows, including model training, evaluation, and inference pipelines.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an experienced and passionate Staff Data Engineer to join our Data Platform group in TLV as a Tech Lead. As the group's Tech Lead, you'll shape and implement the technical vision and architecture while staying hands-on across three specialized teams: Data Engineering Infra, Machine Learning Platform, and Data Warehouse Engineering, forming the backbone of Lemonade's data ecosystem.

The group's mission is to build a state-of-the-art Data Platform that drives Lemonade toward becoming the most precise and efficient insurance company on the planet. By embracing Data Mesh principles, we create tools that empower teams to own their data while leveraging a robust, self-serve data infrastructure. This approach enables Data Scientists, Analysts, Backend Engineers, and other stakeholders to seamlessly access, analyze, and innovate with reliable, well-modeled, and queryable data, at scale.

In this role you'll:
Technically lead the group by shaping the architecture, guiding design decisions, and ensuring the technical excellence of the Data Platforms three teams

Design and implement data solutions that address both applicative needs and data analysis requirements, creating scalable and efficient access to actionable insights

Drive initiatives in Data Engineering Infra, including building robust ingestion layers, managing streaming ETLs, and guaranteeing data quality, compliance, and platform performance

Develop and maintain the Data Warehouse, integrating data from various sources for optimized querying, analysis, and persistence, supporting informed decision-making

Leverage data modeling and transformations to structure, cleanse, and integrate data, enabling efficient retrieval and strategic insights

Build and enhance the Machine Learning Platform, delivering infrastructure and tools that streamline the work of Data Scientists, enabling them to focus on developing models while benefiting from automation for production deployment, maintenance, and improvements. Support cutting-edge use cases like feature stores, real-time models, point-in-time (PIT) data retrieval, and telematics-based solutions

Collaborate closely with other Staff Engineers across Lemonade to align on cross-organizational initiatives and technical strategies

Work seamlessly with Data Engineers, Data Scientists, Analysts, Backend Engineers, and Product Managers to deliver impactful solutions

Share knowledge, mentor team members, and champion engineering standards and technical excellence across the organization
Requirements:
8+ years of experience in data-related roles such as Data Engineer, Data Infrastructure Engineer, BI Engineer, or Machine Learning Platform Engineer, with significant experience in at least two of these areas

A B.Sc. in Computer Science or a related technical field (or equivalent experience)

Extensive expertise in designing and implementing Data Lakes and Data Warehouses, including strong skills in data modeling and building scalable storage solutions

Proven experience in building large-scale data infrastructures, including both batch processing and streaming pipelines

A deep understanding of Machine Learning infrastructure, including tools and frameworks that enable Data Scientists to efficiently develop, deploy, and maintain models in production - an advantage

Proficiency in Python, Pulumi/Terraform, Apache Spark, AWS, Kubernetes (K8s), and Kafka for building scalable, reliable, and high-performing data solutions

Strong knowledge of databases, including SQL (schema design, query optimization) and NoSQL, with a solid understanding of their use cases

Ability to work in an office environment a minimum of 3 days a week

Enthusiasm about learning and adapting to the exciting world of AI; a commitment to exploring this field is a fundamental part of our culture
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an experienced and visionary Data Platform Engineer to help design, build and scale our BI platform from the ground up.

In this role, you will be responsible for building the foundations of our data analytics platform, enabling scalable data pipelines and robust data modeling to support real-time and batch analytics, ML models, and business insights that serve both business intelligence and product needs.

You will be part of the R&D team, collaborating closely with engineers, analysts, and product managers to deliver a modern data architecture that supports internal dashboards and future-facing operational analytics.

If you enjoy architecting from scratch, turning raw data into powerful insights, and owning the full data lifecycle, this role is for you!

Responsibilities
Take full ownership of the design and implementation of a scalable and efficient BI data infrastructure, ensuring high performance, reliability and security.

Lead the design and architecture of the data platform from integration to transformation, modeling, storage, and access.

Build and maintain ETL/ELT pipelines, batch and real-time, to support analytics, reporting, and product integrations.

Establish and enforce best practices for data quality, lineage, observability, and governance to ensure accuracy and consistency.

Integrate modern tools and frameworks such as Airflow, dbt, Databricks, Power BI, and streaming platforms.

Collaborate cross-functionally with product, engineering, and analytics teams to translate business needs into data infrastructure.

Promote a data-driven culture: be an advocate for data-driven decision-making across the company by empowering stakeholders with reliable, self-service data access.
Requirements:
5+ years of hands-on experience in data engineering and in building data products for analytics and business intelligence.

Proven track record of designing and implementing large-scale data platforms or ETL architectures from the ground up.

Strong hands-on experience with ETL tools and data Warehouse/Lakehouse products (Airflow, Airbyte, dbt, Databricks)

Experience supporting both batch pipelines and real-time streaming architectures (e.g., Kafka, Spark Streaming).

Proficiency in Python, SQL, and cloud data engineering environments (AWS, Azure, or GCP).

Familiarity with data visualization tools like Power BI, Looker, or similar.

BSc in Computer Science or a related field from a leading university
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Engineer to join our Platform group in the Data Infrastructure team.
You'll work hands-on to design and deliver data pipelines, distributed storage, and streaming services that keep our data platform performant and reliable. As a senior individual contributor, you will lead complex projects within the team, raise the bar on engineering best practices, and mentor mid-level engineers while collaborating closely with product, DevOps, and analytics stakeholders.
About the Platform group
The Platform Group accelerates our productivity by providing developers with tools, frameworks, and infrastructure services. We design, build, and maintain critical production systems, ensuring our platform can scale reliably. We also introduce new engineering capabilities to enhance our development process. As part of this group, youll help shape the technical foundation that supports our entire engineering team.
Code & ship production-grade services, pipelines and data models that meet performance, reliability and security goals
Lead design and delivery of team-level projects from RFC through rollout and operational hand-off
Improve system observability, testing and incident response processes for the data stack
Partner with Staff Engineers and Tech Leads on architecture reviews and platform-wide standards
Mentor junior and mid-level engineers, fostering a culture of quality, ownership and continuous improvement
Stay current with evolving data-engineering tools and bring pragmatic innovations into the team.
Requirements:
5+ years of hands-on experience in backend or data engineering, including 2+ years at a senior level delivering production systems
Strong coding skills in Python, Kotlin, Java or Scala with emphasis on clean, testable, production-ready code
Proven track record designing, building and operating distributed data pipelines and storage (batch or streaming)
Deep experience with relational databases (PostgreSQL preferred) and working knowledge of at least one NoSQL or columnar/analytical store (e.g. SingleStore, ClickHouse, Redshift, BigQuery)
Solid hands-on experience with event-streaming platforms such as Apache Kafka
Familiarity with data-orchestration frameworks such as Airflow
Comfortable with modern CI/CD, observability and infrastructure-as-code practices in a cloud environment (AWS, GCP or Azure)
Ability to break down complex problems, communicate trade-offs clearly, and collaborate effectively with engineers and product partners
Bonus Skills
Experience building data governance or security/compliance-aware data platforms
Familiarity with Kubernetes, Docker, and infrastructure-as-code tools
Experience with data quality frameworks, lineage, or metadata tooling.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Algo Data Engineer
Realize your potential by joining the leading performance-driven advertising company!
As a Senior Algo Data Engineer in the Infra group, you'll play a vital role in developing, enhancing, and maintaining highly scalable Machine-Learning infrastructures and tools.
About Algo platform:
The objective of the algo platform group is to own the existing algo platform (including health, stability, productivity, and enablement), to facilitate and take part in new platform experimentation within the algo craft, and to lead the platformization of the parts that should graduate to production scale. This includes supporting ongoing ML projects while ensuring smooth operations and infrastructure reliability, and owning a full set of capabilities: design and planning, implementation, and production care.
The group has deep ties with both the algo craft as well as the infra group. The group reports to the infra department and has a dotted line reporting to the algo craft leadership.
The group serves as the professional authority on ML engineering and ML ops, acts as a focal point in a multidisciplinary team of algorithm researchers, product managers, and engineers, and works with the most senior talent within the algo craft to achieve ML excellence.
How you'll make an impact:
As a Senior Algo Data Engineer, you'll bring value by:
Develop, enhance, and maintain highly scalable Machine-Learning infrastructures and tools, including CI/CD, monitoring, alerting, and more
Have end-to-end ownership: design, develop, deploy, measure, and maintain our machine learning platform, ensuring high availability, high scalability, and efficient resource utilization
Identify and evaluate new technologies to improve performance, maintainability, and reliability of our machine learning systems
Work in tandem with the engineering-focused and algorithm-focused teams in order to improve our platform and optimize performance
Optimize machine learning systems to scale and utilize modern compute environments (e.g. distributed clusters, CPU and GPU) and continuously seek potential optimization opportunities.
Build and maintain tools for automation, deployment, monitoring, and operations.
Troubleshoot issues in our development, production and test environments
Directly influence the way billions of people discover the internet
Our tech stack:
Java, Python, TensorFlow, Spark, Kafka, Cassandra, HDFS, vespa.ai, ElasticSearch, AirFlow, BigQuery, Google Cloud Platform, Kubernetes, Docker, git and Jenkins.
Requirements:
Experience developing large-scale systems. Experience with filesystems, server architectures, distributed systems, SQL, and NoSQL. Experience with Spark, Airflow, or other orchestration platforms is a big plus.
Highly skilled in software engineering methods. 5+ years of experience.
Passion for ML engineering and for creating and improving platforms
Experience designing and supporting ML pipelines and models in a production environment
Excellent coding skills in Java & Python
Experience with TensorFlow - a big plus
Possess strong problem solving and critical thinking skills
BSc in Computer Science or related field.
Proven ability to work effectively and independently across multiple teams and beyond organizational boundaries
Deep understanding of strong Computer Science fundamentals: object-oriented design, data structures, systems and applications programming, and multi-threaded programming
Strong communication skills for presenting insights and ideas, and excellent English for communicating with our global teams.
Bonus points if you have:
Experience in leading Algorithms projects or teams.
Experience in developing models using deep learning techniques and tools
Experience in developing software within a distributed computation framework.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Staff Algo Data Engineer
Realize your potential by joining the leading performance-driven advertising company!
As a Staff Algo Data Engineer in the Infra group, you'll play a vital role in developing, enhancing, and maintaining highly scalable Machine-Learning infrastructures and tools.
About Algo platform:
The objective of the algo platform group is to own the existing algo platform (including health, stability, productivity, and enablement), to facilitate and take part in new platform experimentation within the algo craft, and to lead the platformization of the parts that should graduate to production scale. This includes supporting ongoing ML projects while ensuring smooth operations and infrastructure reliability, and owning a full set of capabilities: design and planning, implementation, and production care.
The group has deep ties with both the algo craft as well as the infra group. The group reports to the infra department and has a dotted line reporting to the algo craft leadership.
The group serves as the professional authority on ML engineering and ML ops, acts as a focal point in a multidisciplinary team of algorithm researchers, product managers, and engineers, and works with the most senior talent within the algo craft to achieve ML excellence.
How you'll make an impact:
As a Staff Algo Data Engineer, you'll bring value by:
Develop, enhance, and maintain highly scalable Machine-Learning infrastructures and tools, including CI/CD, monitoring, alerting, and more
Have end-to-end ownership: design, develop, deploy, measure, and maintain our machine learning platform, ensuring high availability, high scalability, and efficient resource utilization
Identify and evaluate new technologies to improve performance, maintainability, and reliability of our machine learning systems
Work in tandem with the engineering-focused and algorithm-focused teams in order to improve our platform and optimize performance
Optimize machine learning systems to scale and utilize modern compute environments (e.g. distributed clusters, CPU and GPU) and continuously seek potential optimization opportunities.
Build and maintain tools for automation, deployment, monitoring, and operations.
Troubleshoot issues in our development, production and test environments
Directly influence the way billions of people discover the internet
Our tech stack:
Java, Python, TensorFlow, Spark, Kafka, Cassandra, HDFS, vespa.ai, ElasticSearch, AirFlow, BigQuery, Google Cloud Platform, Kubernetes, Docker, git and Jenkins.
Requirements:
Experience developing large-scale systems. Experience with filesystems, server architectures, distributed systems, SQL, and NoSQL. Experience with Spark, Airflow, or other orchestration platforms is a big plus.
Highly skilled in software engineering methods. 5+ years of experience.
Passion for ML engineering and for creating and improving platforms
Experience designing and supporting ML pipelines and models in a production environment
Excellent coding skills in Java & Python
Experience with TensorFlow - a big plus
Possess strong problem solving and critical thinking skills
BSc in Computer Science or related field.
Proven ability to work effectively and independently across multiple teams and beyond organizational boundaries
Deep understanding of strong Computer Science fundamentals: object-oriented design, data structures, systems and applications programming, and multi-threaded programming
Strong communication skills for presenting insights and ideas, and excellent English for communicating with our global teams.
Bonus points if you have:
Experience in leading Algorithms projects or teams.
Experience in developing models using deep learning techniques and tools
Experience in developing software within a distributed computation framework
This position is open to all candidates.
 
03/11/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Shape the Future of Data - Join our mission to build the foundational pipelines and tools that power measurement, insights, and decision-making across our product, analytics, and leadership teams.
Develop the Platform Infrastructure - Build the core infrastructure that powers our data ecosystem, including the Kafka event system, DDL management with Terraform, internal data APIs on top of Databricks, and custom admin tools (e.g., Django-based interfaces).
Build Real-time Analytical Applications - Develop internal web applications to provide real-time visibility into platform behavior, operational metrics, and business KPIs, integrating data engineering with user-facing insights.
Solve Meaningful Problems with the Right Tools - Tackle complex data challenges using modern technologies such as Spark, Kafka, Databricks, AWS, Airflow, and Python. Think creatively to make the hard things simple.
Own It End-to-End - Design, build, and scale our high-quality data platform by developing reliable and efficient data pipelines. Take ownership from concept to production and long-term maintenance.
Collaborate Cross-Functionally - Partner closely with backend engineers, data analysts, and data scientists to drive initiatives from both a platform and business perspective. Help translate ideas into robust data solutions.
Optimize for Analytics and Action - Design and deliver datasets in the right shape, location, and format to maximize usability and impact - whether that's through lakehouse tables, real-time streams, or analytics-optimized storage.
You will report to the Data Engineering Team Lead and help shape a culture of technical excellence, ownership, and impact.
Requirements:
5+ years of hands-on experience as a Data Engineer, building and operating production-grade data systems.
3+ years of experience with Spark, SQL, Python, and orchestration tools like Airflow (or similar).
Degree in Computer Science, Engineering, or a related quantitative field.
Proven track record in designing and implementing high-scale ETL pipelines and real-time or batch data workflows.
Deep understanding of data lakehouse and warehouse architectures, dimensional modeling, and performance optimization.
Strong analytical thinking, debugging, and problem-solving skills in complex environments.
Familiarity with infrastructure as code, CI/CD pipelines, and building data-oriented microservices or APIs.
Enthusiasm for AI-driven developer tools such as Cursor.AI or GitHub Copilot.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're hiring an ML Engineer to accelerate AI-driven innovation across our B2B SaaS platform.
You'll be at the forefront of building intelligent systems that power core product experiences and automate internal operations, driving efficiency, speed, and scale across the organization. This is a high-impact, hands-on role in a fast-growing, AI-first company where machine learning is a foundational pillar, not a bolt-on feature. You'll partner with product, engineering, and operations teams to design and implement powerful ML and LLM-based solutions that make a measurable difference.
What You Will Do:
Build Intelligent Systems: Design and develop ML/LLM-powered solutions that solve real-world challenges across product and internal workflows.
Own Full Lifecycles: Take projects from concept all the way to production, including model training, evaluation, integration, and monitoring.
Leverage State-of-the-Art Tools: Work with leading frameworks like LangChain, Hugging Face, TensorFlow, and PyTorch to deliver cutting-edge functionality.
Collaborate Cross-Functionally: Partner with product managers, engineers, and stakeholders to embed AI capabilities into user-facing features and backend services.
Ship at Scale: Build and maintain scalable APIs and services, integrating best practices in CI/CD, observability, and cloud infrastructure.
Report with Impact: Share progress, challenges, and results clearly with technical and executive stakeholders.
Requirements:
6+ years of experience as a Backend Developer, Data Engineer, or ML Engineer
Bachelor's degree in Computer Science or a related STEM field
Strong proficiency in Python and ML tooling
Proven ability to build production-grade ML systems end-to-end
Deep experience with LLMs and ML frameworks (e.g., LangChain, LangGraph, Hugging Face, TensorFlow, PyTorch)
Solid foundation in system design, architecture, and microservice patterns
Excellent problem-solving skills and ownership mindset
Strong collaboration and communication abilities
Bonus if you have:
M.Sc. in Computer Science, Software Engineering, or similar field
Experience building and scaling LLM-powered applications
Familiarity with AWS and DevOps best practices (CI/CD, monitoring, IaC)
Exposure to NoSQL and real-time data processing pipelines
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Senior Data Engineer to join our Data team - someone who's passionate about building reliable, scalable data infrastructure and thrives on solving complex technical challenges.
In this role, you'll own the design and development of end-to-end data pipelines that power analytics and data-driven decision-making.
You'll collaborate closely with both business and technical stakeholders to ensure data flows smoothly, accurately, and efficiently across the company.
What You Will Do:
Design, implement, and maintain large-scale ETL and ELT pipelines using modern data frameworks and cloud technologies.
Work with Redshift data warehouses to design efficient schemas and optimize performance.
Build and manage data ingestion processes from multiple sources - APIs, SaaS platforms, internal systems, and databases.
Collaborate with stakeholders to deliver clean, well-modeled, and high-quality data.
Build and evolve a modern, efficient, and scalable data warehouse architecture.
Ensure observability, monitoring, and testing across all data processes.
Apply best practices in CI/CD, version control (Git), and data quality validation.
Requirements:
5+ years of experience as a Data Engineer or ETL Developer, building large-scale data pipelines in a cloud environment (AWS, GCP, or Azure).
Strong SQL expertise, including query optimization and data modeling.
Hands-on experience with ETL/ELT tools such as Matillion, Rivery, SSIS, Talend, or similar.
Solid understanding of data warehouse concepts and dimensional modeling.
Excellent analytical and problem-solving skills.
A collaborative mindset and the ability to work cross-functionally with internal teams.
A self-starter and agile learner who thrives in a fast-paced, dynamic environment.
AI/data-related development capabilities: experience building or integrating AI-driven data solutions is a plus.
Nice to Have:
Experience with Redshift and Matillion - big advantage.
Experience with BI tools such as Qlik or Power BI - big advantage.
Familiarity with CI/CD pipelines.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an experienced BI Data Engineer to join our Data team within the Information Systems group.
In this role, you will be responsible for building and maintaining scalable, high-quality data pipelines, models, and infrastructure that support business operations across the entire company, with a primary focus on GTM domains.
You will take ownership of core data architecture components, ensuring data consistency, reliability, and accessibility across all analytical and operational use cases.
Your work will include designing data models, orchestrating transformations, developing internal data applications, and ensuring that business processes are accurately represented in the data.
This role requires a combination of deep technical expertise and strong understanding of business operations.
You will collaborate closely with analysts, domain experts, and engineering teams to translate complex business processes into robust, scalable data solutions. If you are passionate about data architecture, building end-to-end data systems, and solving complex engineering challenges that directly impact the business, we'd love to meet you!
Key Responsibilities:
Design, develop, and maintain end-to-end data pipelines, ensuring scalability, reliability, and performance.
Build, optimize, and evolve core data models and semantic layers that serve as the organization's single source of truth.
Implement robust ETL/ELT workflows using Snowflake, dbt, Rivery, and Python.
Develop internal data applications and automation tools to support advanced analytics and operational needs.
Ensure high data quality through monitoring, validation frameworks, and governance best practices.
Improve and standardize data modeling practices, naming conventions, and architectural guidelines.
Continuously evaluate and adopt new technologies, features, and tooling across the data engineering stack.
Collaborate with cross-functional stakeholders to deeply understand business processes and translate them into scalable technical solutions.
Requirements:
5+ years of experience in BI data engineering, data engineering, or a similar data development role.
Bachelor's degree in Industrial Engineering, Statistics, Mathematics, Economics, Computer Science, or a related field - required.
Strong SQL expertise and extensive hands-on experience with ETL/ELT development - required.
Proficiency with Snowflake, dbt, Python, and modern data engineering workflows - essential.
Experience building and maintaining production-grade data pipelines using orchestration tools (e.g., Rivery, Airflow, Prefect) an advantage.
Experience with cloud platforms, CI/CD, or DevOps practices for data an advantage.
Skills and Attributes:
Strong understanding of business processes and the ability to design data solutions that accurately represent real-world workflows.
Strong analytical and problem-solving skills, with attention to engineering quality and performance.
Ability to manage and prioritize tasks in a fast-paced environment.
Excellent communication skills in Hebrew and English.
Ownership mindset, curiosity, and a passion for building high-quality data systems.
This position is open to all candidates.
 
2 days
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a versatile, talented, and highly motivated Data Engineer to join our growing team.

If you're passionate about solving complex problems, thrive in dynamic environments, and love working at the intersection of data engineering, machine learning infrastructure, and AI innovation, this role is for you.

As a Data Engineer, you'll play a key role in shaping how data flows through the company, from building scalable pipelines and robust infrastructure to powering data science models and enabling internal teams with intelligent GenAI-powered tools. This is a hands-on, high-impact role with plenty of room for ownership, creativity, and growth.

This is a high-impact role where your work will shape how the company leverages data and AI. If you want to build, innovate, and push boundaries in a collaborative and fast-moving environment, we'd love to meet you.

Responsibilities
Own the entire data lifecycle - from understanding business needs and building reliable pipelines to ensuring data quality, observability, and performance.
Design, build, and scale modern data infrastructure including data lakes, warehouses, and complex ETL/ELT pipelines.
Integrate and consolidate diverse data sources (CRMs, APIs, databases, SaaS platforms) into a single, trusted source of truth.
Implement and manage CI/CD, observability, and infrastructure-as-code in a cloud-native environment.
Work with the data science team on their ML pipelines, giving data scientists the infrastructure and automation they need to deploy models to production with speed and confidence.
Collaborate with cross-functional teams to embed GenAI agents into business processes, creating smart workflows that boost efficiency and reduce manual work.
Develop frameworks and internal tooling that empower other teams to safely adopt AI and accelerate innovation.
Optimize data infrastructure for performance and cost-efficiency, with a focus on BigQuery optimization.
Ensure high data quality and integrity across large-scale ETL processes.
Work closely with analysts, data scientists, and product managers to support data modeling, governance, and analytical initiatives.
Requirements:
5+ years of experience as a Data Engineer.
Strong programming skills in Python and SQL, with a focus on clean, maintainable, production-grade code.
Proven experience building data pipelines with Airflow.
Hands-on experience with modern analytical databases.
Experience working with cloud platforms.
Solid knowledge of data modeling, database design, and performance optimization.
Strong problem-solving abilities, analytical mindset, and attention to detail.
Experience working in production-grade environments.
Excellent communication and collaboration skills.
Familiarity with modern CI/CD, observability, and infrastructure-as-code practices.
Experience with Kubernetes, Docker, and Terraform.
This position is open to all candidates.
 