Posted: 13 hours ago
Confidential company
Location: Merkaz
Job Type: Full Time
We are looking for a Data Engineer.
What you'll do:
Design, build, and optimize large-scale data pipelines and workflows for both batch and real-time processing.
Architect and maintain Airflow-based orchestration frameworks to manage complex data dependencies and data movement.
Develop high-quality, maintainable data transformation and integration processes across diverse data sources and domains.
Lead the design and implementation of scalable, cloud-based data infrastructure ensuring reliability, performance, and cost efficiency.
Drive data modeling and data architecture practices to ensure consistency, reusability, and quality across systems.
Collaborate closely with Product, R&D, BizDev, and Data Science teams to define data requirements, integrations, and delivery models.
Own the technical roadmap for key data initiatives, from design to production deployment.
Requirements:
6+ years of experience as a Data Engineer working on large-scale, production-grade systems.
Proven experience architecting and implementing data pipelines and workflows in Airflow - must be hands-on and design-level proficient.
Strong experience with real-time or streaming data processing (Kafka, Event Hubs, Kinesis, or similar).
Advanced proficiency in Python for data processing and automation.
Strong SQL skills and deep understanding of data modeling, ETL/ELT frameworks, and DWH methodologies.
Experience with cloud-based data ecosystems (Azure, AWS, or GCP) and related services (e.g., Snowflake, BigQuery, Redshift).
Experience with Docker, Kubernetes, and modern CI/CD practices.
Excellent communication and collaboration skills with experience working across multiple stakeholders and business units.
A proactive, ownership-driven approach with the ability to lead complex projects end-to-end.
This position is open to all candidates.
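For orientation, here is a minimal sketch of the kind of Airflow batch DAG this listing asks candidates to design and operate. It is illustrative only: the DAG id, task callables, and schedule are hypothetical, and it assumes Airflow 2.x.

```python
# Minimal daily batch DAG (hypothetical pipeline and task names; Airflow 2.x).
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull one day's partition of raw records from a source system (stub).
    print("extracting partition", context["ds"])


def transform(**context):
    # Clean and model the extracted records (stub).
    print("transforming partition", context["ds"])


def load(**context):
    # Write the modeled records to the warehouse (stub).
    print("loading partition", context["ds"])


with DAG(
    dag_id="daily_sales_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow < 2.4 uses schedule_interval instead
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # The >> chaining is how Airflow encodes the data dependencies the
    # listing refers to; real DAGs add sensors, SLAs, and alerting on top.
    t_extract >> t_transform >> t_load
```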
 
Job ID: 8473165

Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
This role has been designed as Hybrid with an expectation that you will work on average 2 days per week from an HPE office.
Job Description:
We are looking for a highly skilled Senior Data Engineer with strong architectural expertise to design and evolve our next-generation data platform. You will define the technical vision, build scalable and reliable data systems, and guide the long-term architecture that powers analytics, operational decision-making, and data-driven products across the organization.
This role is both strategic and hands-on. You will evaluate modern data technologies, define engineering best practices, and lead the implementation of robust, high-performance data solutions, including the design, build, and lifecycle management of data pipelines that support batch, streaming, and near-real-time workloads.
What You'll Do
Architecture & Strategy
Own the architecture of our data platform, ensuring scalability, performance, reliability, and security.
Define standards and best practices for data modeling, transformation, orchestration, governance, and lifecycle management.
Evaluate and integrate modern data technologies and frameworks that align with our long-term platform strategy.
Collaborate with engineering and product leadership to shape the technical roadmap.
Engineering & Delivery
Design, build, and manage scalable, resilient data pipelines for batch, streaming, and event-driven workloads.
Develop clean, high-quality data models and schemas to support analytics, BI, operational systems, and ML workflows.
Implement data quality, lineage, observability, and automated testing frameworks.
Build ingestion patterns for APIs, event streams, files, and third-party data sources.
Optimize compute, storage, and transformation layers for performance and cost efficiency.
Leadership & Collaboration
Serve as a senior technical leader and mentor within the data engineering team.
Lead architecture reviews, design discussions, and cross-team engineering initiatives.
Work closely with analysts, data scientists, software engineers, and product owners to define and deliver data solutions.
Communicate architectural decisions and trade-offs to technical and non-technical stakeholders.
Requirements:
6-10+ years of experience in Data Engineering, with demonstrated architectural ownership.
Expert-level experience with Snowflake (mandatory), including performance optimization, data modeling, security, and ecosystem components.
Expert proficiency in SQL and strong Python skills for pipeline development and automation.
Experience with modern orchestration tools (Airflow, Dagster, Prefect, or equivalent).
Strong understanding of ELT/ETL patterns, distributed processing, and data lifecycle management.
Familiarity with streaming/event technologies (Kafka, Kinesis, Pub/Sub, etc.).
Experience implementing data quality, observability, and lineage solutions.
Solid understanding of cloud infrastructure (AWS, GCP, or Azure).
Strong background in DataOps practices: CI/CD, testing, version control, automation.
Proven leadership in driving architectural direction and mentoring engineering teams.
Nice to Have:
Experience with data governance or metadata management tools.
Hands-on experience with DBT, including modeling, testing, documentation, and advanced features.
Exposure to machine learning pipelines, feature stores, or MLOps.
Experience with Terraform, CloudFormation, or other IaC tools.
Background designing systems for high scale, security, or regulated environments.
This position is open to all candidates.
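As a rough illustration of the hands-on Snowflake proficiency this listing marks as mandatory, here is a minimal query sketch using the snowflake-connector-python client; all connection values and the events table are placeholders, not anything from the posting.

```python
# Minimal Snowflake query sketch; every connection value is a placeholder.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="my_user",            # placeholder
    password="***",            # prefer key-pair or SSO auth in practice
    warehouse="ANALYTICS_WH",  # placeholder
    database="ANALYTICS",      # placeholder
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # Filtering on a date column keeps partition pruning effective, the kind
    # of performance-optimization awareness the listing asks about.
    cur.execute(
        "SELECT event_date, COUNT(*) AS n FROM events "
        "WHERE event_date >= DATEADD(day, -7, CURRENT_DATE()) "
        "GROUP BY event_date ORDER BY event_date"
    )
    for event_date, n in cur.fetchall():
        print(event_date, n)
finally:
    conn.close()
```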
 
Job ID: 8461496

Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an experienced and visionary Data Platform Engineer to help design, build and scale our BI platform from the ground up.

In this role, you will be responsible for building the foundations of our data analytics platform, enabling scalable data pipelines and robust data modeling to support real-time and batch analytics, ML models, and business insights that serve both business intelligence and product needs.

You will be part of the R&D team, collaborating closely with engineers, analysts, and product managers to deliver a modern data architecture that supports internal dashboards and future-facing operational analytics.

If you enjoy architecting from scratch, turning raw data into powerful insights, and owning the full data lifecycle, this role is for you!

Responsibilities
Take full ownership of the design and implementation of a scalable and efficient BI data infrastructure, ensuring high performance, reliability and security.

Lead the design and architecture of the data platform from integration to transformation, modeling, storage, and access.

Build and maintain ETL/ELT pipelines, batch and real-time, to support analytics, reporting, and product integrations.

Establish and enforce best practices for data quality, lineage, observability, and governance to ensure accuracy and consistency.

Integrate modern tools and frameworks such as Airflow, dbt, Databricks, Power BI, and streaming platforms.

Collaborate cross-functionally with product, engineering, and analytics teams to translate business needs into data infrastructure.

Promote a data-driven culture: be an advocate for data-driven decision-making across the company by empowering stakeholders with reliable and self-service data access.
Requirements:
5+ years of hands-on experience in data engineering and in building data products for analytics and business intelligence.

Proven track record of designing and implementing large-scale data platforms or ETL architectures from the ground up.

Strong hands-on experience with ETL tools and data warehouse/lakehouse products (Airflow, Airbyte, dbt, Databricks).

Experience supporting both batch pipelines and real-time streaming architectures (e.g., Kafka, Spark Streaming).

Proficiency in Python, SQL, and cloud data engineering environments (AWS, Azure, or GCP).

Familiarity with data visualization tools like Power BI, Looker, or similar.

BSc in Computer Science or a related field from a leading university.
This position is open to all candidates.
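To make the batch-plus-streaming requirement concrete, here is a minimal PySpark Structured Streaming sketch that reads a Kafka topic and maintains a windowed count; the broker, topic, and console sink are placeholder choices (it also assumes the spark-sql-kafka package is on the classpath).

```python
# Minimal Structured Streaming sketch; broker and topic are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, count, window

spark = SparkSession.builder.appName("events-stream-sketch").getOrCreate()

# Read raw events from Kafka as an unbounded stream.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
    .option("subscribe", "events")                     # placeholder topic
    .load()
)

# Count events per 5-minute window; a real job would parse the JSON payload
# and add a watermark to bound state for late-arriving data.
counts = raw.groupBy(window(col("timestamp"), "5 minutes")).agg(
    count("*").alias("events")
)

query = (
    counts.writeStream.outputMode("complete")
    .format("console")  # a real pipeline would write to the lakehouse
    .start()
)
query.awaitTermination()
```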
 
Job ID: 8423261

Posted: 7 days ago
Confidential company
Job Type: Full Time
We're in search of an experienced and skilled Senior Data Engineer to join our growing data team. As part of our data team, you'll be at the forefront of crafting a groundbreaking solution that leverages cutting-edge technology to combat fraud. The ideal candidate will have a strong background in designing and implementing large-scale data solutions, with the potential to grow into a leadership role. This position requires a deep understanding of modern data architectures, cloud technologies, and the ability to drive technical initiatives that align with business objectives.
Our ultimate goal is to equip our clients with resilient safeguards against chargebacks, empowering them to safeguard their revenue and optimize their profitability. Join us on this thrilling mission to redefine the battle against fraud.
Your Arena
Design, develop, and maintain scalable, robust data pipelines and ETL processes
Architect and implement complex data models across various storage solutions
Collaborate with R&D teams, data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality solutions
Ensure data quality, consistency, security, and compliance across all data systems
Play a key role in defining and implementing data strategies that drive business value
Contribute to the continuous improvement of our data architecture and processes
Champion and implement data engineering best practices across the R&D organization, serving as a technical expert and go-to resource for data-related questions and challenges
Participate in and sometimes lead code reviews to maintain high coding standards
Troubleshoot and resolve complex data-related issues in production environments
Evaluate and recommend new technologies and methodologies to improve our data infrastructure.
Requirements:
What It Takes - Must Haves:
5+ years of experience in data engineering, with specific, strong proficiency in Python & software engineering principles - Must
Extensive experience with AWS, GCP, Azure and cloud-native architectures - Must
Deep knowledge of both relational (e.g., PostgreSQL) and NoSQL databases - Must
Designing and implementing data warehouses and data lakes - Must
Strong understanding of data modeling techniques - Must
Expertise in data manipulation libraries (e.g., Pandas) and big data processing frameworks - Must
Experience with data validation tools such as Pydantic & Great Expectations - Must
Proficiency in writing and maintaining unit tests (e.g., Pytest) and integration tests - Must
Nice-to-Haves:
Apache Iceberg - Experience building, managing and maintaining Iceberg lakehouse architecture with S3 storage and AWS Glue catalog - Strong Advantage
Apache Spark - Proficiency in optimizing Spark jobs, understanding partitioning strategies, and leveraging core framework capabilities for large-scale data processing - Strong Advantage
Modern data stack tools - DBT, DuckDB, Dagster or any other Data orchestration tool (e.g., Apache Airflow, Prefect) - Advantage
Designing and developing backend systems, including- RESTful API design and implementation, microservices architecture, event-driven systems, RabbitMQ, Apache Kafka - Advantage
Containerization technologies- Docker, Kubernetes, and IaC (e.g., Terraform) - Advantage
Stream processing technologies (e.g., Apache Kafka, Apache Flink) - Advantage
Understanding of compliance requirements (e.g., GDPR, CCPA) - Advantage
Experience mentoring junior engineers or leading small project teams
Excellent communication skills with the ability to explain complex technical concepts to various audiences
Demonstrated ability to work independently and lead technical initiatives
Relevant certifications in cloud platforms or data technologies.
This position is open to all candidates.
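Since the listing names Pydantic and Great Expectations as must-have validation tools, here is a minimal Pydantic sketch that screens records before load; the Transaction schema and sample data are hypothetical, and it assumes Pydantic v2.

```python
# Minimal record-validation sketch with Pydantic v2; schema is hypothetical.
from pydantic import BaseModel, ValidationError


class Transaction(BaseModel):
    transaction_id: str
    amount: float
    currency: str
    is_chargeback: bool = False


raw_records = [
    {"transaction_id": "t1", "amount": "19.90", "currency": "USD"},
    {"transaction_id": "t2", "amount": "not-a-number", "currency": "USD"},
]

valid, rejected = [], []
for record in raw_records:
    try:
        # Pydantic coerces "19.90" to a float and rejects malformed values,
        # so bad rows are quarantined instead of poisoning the warehouse.
        valid.append(Transaction(**record))
    except ValidationError as err:
        rejected.append((record, err.errors()))

print(f"{len(valid)} valid, {len(rejected)} rejected")
```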
 
Job ID: 8462760

Posted: 7 days ago
Confidential company
Job Type: Full Time
Welcome to Chargeflow
Chargeflow is at the forefront of fintech + AI innovation, backed by leading venture capital firms. Our mission is to build a fraud-free global commerce ecosystem by leveraging the newest technology, freeing online businesses to focus on their core ideas and growth. We are building the future, and we need you to help shape it.
Who We're Looking For - The Dream Maker
We're in search of an experienced and skilled Senior Data Engineer to join our growing data team. As part of our data team, you'll be at the forefront of crafting a groundbreaking solution that leverages cutting-edge technology to combat fraud. The ideal candidate will have a strong background in designing and implementing large-scale data solutions, with the potential to grow into a leadership role. This position requires a deep understanding of modern data architectures, cloud technologies, and the ability to drive technical initiatives that align with business objectives.
Our ultimate goal is to equip our clients with resilient safeguards against chargebacks, empowering them to safeguard their revenue and optimize their profitability. Join us on this thrilling mission to redefine the battle against fraud.
Your Arena
* Design, develop, and maintain scalable, robust data pipelines and ETL processes
* Architect and implement complex data models across various storage solutions
* Collaborate with R&D teams, data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality solutions
* Ensure data quality, consistency, security, and compliance across all data systems
* Play a key role in defining and implementing data strategies that drive business value
* Contribute to the continuous improvement of our data architecture and processes
* Champion and implement data engineering best practices across the R&D organization, serving as a technical expert and go-to resource for data-related questions and challenges
* Participate in and sometimes lead code reviews to maintain high coding standards
* Troubleshoot and resolve complex data-related issues in production environments
* Evaluate and recommend new technologies and methodologies to improve our data infrastructure
Requirements:
What It Takes - Must Haves:
* 5+ years of experience in data engineering, with specific, strong proficiency in Python & software engineering principles - Must
* Extensive experience with AWS, GCP, Azure and cloud-native architectures - Must
* Deep knowledge of both relational (e.g., PostgreSQL) and NoSQL databases - Must
* Designing and implementing data warehouses and data lakes - Must
* Strong understanding of data modeling techniques - Must
* Expertise in data manipulation libraries (e.g., Pandas) and big data processing frameworks - Must
* Experience with data validation tools such as Pydantic & Great Expectations - Must
* Proficiency in writing and maintaining unit tests (e.g., Pytest) and integration tests - Must
Nice-to-Haves:
* Apache Iceberg - Experience building, managing and maintaining Iceberg lakehouse architecture with S3 storage and AWS Glue catalog - Strong Advantage
* Apache Spark - Proficiency in optimizing Spark jobs, understanding partitioning strategies, and leveraging core framework capabilities for large-scale data processing - Strong Advantage
* Modern data stack tools - DBT, DuckDB, Dagster or any other Data orchestration tool (e.g., Apache Airflow, Prefect) - Advantage
* Designing and developing backend systems, including- RESTful API design and implementation, microservices architecture, event-driven systems, RabbitMQ, Apache Kafka - Advantage
* Containerization technologies- Docker, Kubernetes, and IaC (e.g., Terraform) - Advantage
* Stream processing technologies (e.g., Apache Kafka, Apache Flink) - Advantage
* Understanding of compliance requirements (e.g., GDPR, CCPA) - Advantage
* Experience mentoring junior engineers or leading small project teams
This position is open to all candidates.
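The Pytest requirement above might look like this in practice: a small transformation function with a unit test. Both the clean_amounts helper and the test are hypothetical illustrations, not Chargeflow code.

```python
# Hypothetical transformation step plus a Pytest-style unit test for it.
def clean_amounts(rows):
    """Drop rows with non-positive amounts and normalize currency codes."""
    return [
        {**row, "currency": row["currency"].upper()}
        for row in rows
        if row["amount"] > 0
    ]


def test_clean_amounts_filters_and_normalizes():
    rows = [
        {"amount": 10.0, "currency": "usd"},
        {"amount": -5.0, "currency": "USD"},
    ]
    # Only the positive-amount row survives, with its currency uppercased.
    assert clean_amounts(rows) == [{"amount": 10.0, "currency": "USD"}]
```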
 
Job ID: 8397445

Posted: 5 days ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking an experienced Senior Data Engineer to join our dynamic data team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure, ensuring the availability, reliability, and quality of our data. This role requires strong technical expertise, problem-solving skills, and the ability to collaborate across teams to deliver data-driven solutions.
Key Responsibilities:
* Design, implement, and maintain robust, scalable, and high-performance data pipelines and ETL processes.
* Develop and optimize data models, schemas, and storage solutions to support analytics and Machine Learning initiatives.
* Collaborate with software engineers and product managers to understand data requirements and deliver high-quality solutions.
* Ensure data quality, integrity, and governance across multiple sources and systems.
* Monitor and troubleshoot data workflows, resolving performance and reliability issues.
* Evaluate and implement new data technologies and frameworks to improve the data platform.
* Document processes, best practices, and data architecture.
* Mentor junior data engineers and contribute to team knowledge sharing.
Requirements:
Required Qualifications:
* Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
* 5+ years of experience in data engineering, ETL development, or a similar role.
* Strong proficiency in SQL and experience with relational and NoSQL databases.
* Experience with data pipeline frameworks and tools such as Apache Spark, Airflow, and Kafka - MUST
* Familiarity with cloud platforms (AWS, GCP, or Azure) and their data services.
* Solid programming skills in Python, Java, or Scala.
* Strong problem-solving, analytical, and communication skills.
* Knowledge of data governance, security, and compliance standards.
* Experience with data warehousing, Big Data technologies (such as ClickHouse, SingleStore, StarRocks), and data modeling best practices.
Preferred Qualifications (Advantage):
* Familiarity with Machine Learning workflows and MLOps practices.
* Work with data Lakehouse architectures and technologies such as Apache Iceberg.
* Experience working with data ecosystems in Open Source/On-Premise environments.
Why Join Us:
* Work with cutting-edge technologies and large-scale data systems.
* Collaborate with a talented and innovative team.
* Opportunities for professional growth and skill development.
* Make a direct impact on data-driven decision-making across the organization.
This position is open to all candidates.
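As a sketch of the Kafka side of the Spark/Airflow/Kafka must-have, here is a minimal consumer using the kafka-python client; the topic, broker, and consumer group are placeholders.

```python
# Minimal Kafka consumer sketch (kafka-python); all names are placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",                           # placeholder topic
    bootstrap_servers=["broker:9092"],  # placeholder broker
    group_id="data-platform-sketch",    # placeholder consumer group
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    # Partition/offset metadata is what you monitor when troubleshooting
    # the workflow-reliability issues the listing mentions.
    print(message.partition, message.offset, message.value)
```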
 
Job ID: 8401647

Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an experienced and passionate Staff Data Engineer to join our Data Platform group in TLV as a Tech Lead. As the group's Tech Lead, you'll shape and implement the technical vision and architecture while staying hands-on across three specialized teams: Data Engineering Infra, Machine Learning Platform, and Data Warehouse Engineering, forming the backbone of Lemonade's data ecosystem.

The group's mission is to build a state-of-the-art Data Platform that drives Lemonade toward becoming the most precise and efficient insurance company on the planet. By embracing Data Mesh principles, we create tools that empower teams to own their data while leveraging a robust, self-serve data infrastructure. This approach enables Data Scientists, Analysts, Backend Engineers, and other stakeholders to seamlessly access, analyze, and innovate with reliable, well-modeled, and queryable data, at scale.

In this role you'll:
Technically lead the group by shaping the architecture, guiding design decisions, and ensuring the technical excellence of the Data Platforms three teams

Design and implement data solutions that address both applicative needs and data analysis requirements, creating scalable and efficient access to actionable insights

Drive initiatives in Data Engineering Infra, including building robust ingestion layers, managing streaming ETLs, and guaranteeing data quality, compliance, and platform performance

Develop and maintain the Data Warehouse, integrating data from various sources for optimized querying, analysis, and persistence, supporting informed decision-making

Leverage data modeling and transformations to structure, cleanse, and integrate data, enabling efficient retrieval and strategic insights

Build and enhance the Machine Learning Platform, delivering infrastructure and tools that streamline the work of Data Scientists, enabling them to focus on developing models while benefiting from automation for production deployment, maintenance, and improvements. Support cutting-edge use cases like feature stores, real-time models, point-in-time (PIT) data retrieval, and telematics-based solutions

Collaborate closely with other Staff Engineers across Lemonade to align on cross-organizational initiatives and technical strategies

Work seamlessly with Data Engineers, Data Scientists, Analysts, Backend Engineers, and Product Managers to deliver impactful solutions

Share knowledge, mentor team members, and champion engineering standards and technical excellence across the organization
Requirements:
8+ years of experience in data-related roles such as Data Engineer, Data Infrastructure Engineer, BI Engineer, or Machine Learning Platform Engineer, with significant experience in at least two of these areas

A B.Sc. in Computer Science or a related technical field (or equivalent experience)

Extensive expertise in designing and implementing Data Lakes and Data Warehouses, including strong skills in data modeling and building scalable storage solutions

Proven experience in building large-scale data infrastructures, including both batch processing and streaming pipelines

A deep understanding of Machine Learning infrastructure, including tools and frameworks that enable Data Scientists to efficiently develop, deploy, and maintain models in production, an advantage

Proficiency in Python, Pulumi/Terraform, Apache Spark, AWS, Kubernetes (K8s), and Kafka for building scalable, reliable, and high-performing data solutions

Strong knowledge of databases, including SQL (schema design, query optimization) and NoSQL, with a solid understanding of their use cases

Ability to work in an office environment a minimum of 3 days a week

Enthusiasm about learning and adapting to the exciting world of AI; a commitment to exploring this field is a fundamental part of our culture
This position is open to all candidates.
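Given the Python-plus-Pulumi/Terraform-plus-AWS stack named above, here is a minimal Pulumi program (Python) declaring a versioned S3 bucket for a raw data-lake layer; the resource name is hypothetical and the snippet assumes the pulumi and pulumi-aws packages.

```python
# Minimal Pulumi sketch: a versioned S3 bucket for a raw data-lake layer.
# The logical name is hypothetical; requires pulumi and pulumi-aws.
import pulumi
import pulumi_aws as aws

raw_bucket = aws.s3.Bucket(
    "data-lake-raw",  # hypothetical resource name
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
)

# Export the physical bucket name so pipelines can read it from stack outputs.
pulumi.export("raw_bucket_name", raw_bucket.id)
```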
 
Job ID: 8420751

Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an experienced BI Data Engineer to join our Data team within the Information Systems group.
In this role, you will be responsible for building and maintaining scalable, high-quality data pipelines, models, and infrastructure that support business operations across the entire company, with a primary focus on GTM domains.
You will take ownership of core data architecture components, ensuring data consistency, reliability, and accessibility across all analytical and operational use cases.
Your work will include designing data models, orchestrating transformations, developing internal data applications, and ensuring that business processes are accurately represented in the data.
This role requires a combination of deep technical expertise and strong understanding of business operations.
You will collaborate closely with analysts, domain experts, and engineering teams to translate complex business processes into robust, scalable data solutions. If you are passionate about data architecture, building end-to-end data systems, and solving complex engineering challenges that directly impact the business, we'd love to meet you!
Key Responsibilities:
Design, develop, and maintain end-to-end data pipelines, ensuring scalability, reliability, and performance.
Build, optimize, and evolve core data models and semantic layers that serve as the organization's single source of truth.
Implement robust ETL/ELT workflows using Snowflake, dbt, Rivery, and Python.
Develop internal data applications and automation tools to support advanced analytics and operational needs.
Ensure high data quality through monitoring, validation frameworks, and governance best practices.
Improve and standardize data modeling practices, naming conventions, and architectural guidelines.
Continuously evaluate and adopt new technologies, features, and tooling across the data engineering stack.
Collaborate with cross-functional stakeholders to deeply understand business processes and translate them into scalable technical solutions.
Requirements:
5+ years of experience in BI data engineering, data engineering, or a similar data development role.
Bachelor's degree in Industrial Engineering, Statistics, Mathematics, Economics, Computer Science, or a related field - required.
Strong SQL expertise and extensive hands-on experience with ETL/ELT development - required.
Proficiency with Snowflake, dbt, Python, and modern data engineering workflows - essential.
Experience building and maintaining production-grade data pipelines using orchestration tools (e.g., Rivery, Airflow, Prefect) - an advantage.
Experience with cloud platforms, CI/CD, or DevOps practices for data - an advantage.
Skills and Attributes:
Strong understanding of business processes and the ability to design data solutions that accurately represent real-world workflows.
Strong analytical and problem-solving skills, with attention to engineering quality and performance.
Ability to manage and prioritize tasks in a fast-paced environment.
Excellent communication skills in Hebrew and English.
Ownership mindset, curiosity, and a passion for building high-quality data systems.
This position is open to all candidates.
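With a Snowflake, dbt, and Python stack, one concrete shape this modeling work can take is a dbt Python model. The sketch below assumes dbt's Python-model interface (dbt 1.3+ on Snowflake/Snowpark); the staging models and columns are hypothetical.

```python
# Sketch of a dbt Python model (dbt >= 1.3 on Snowflake/Snowpark).
# Model and column names are hypothetical; in a dbt project this would live
# at models/marts/opportunity_summary.py.
def model(dbt, session):
    # ref() works like in SQL models; to_pandas() pulls Snowpark results local.
    opportunities = dbt.ref("stg_crm_opportunities").to_pandas()
    accounts = dbt.ref("stg_crm_accounts").to_pandas()

    # Join and aggregate into a GTM-facing summary table.
    merged = opportunities.merge(accounts, on="account_id", how="left")
    summary = merged.groupby(["account_id", "account_name"], as_index=False).agg(
        open_pipeline=("amount", "sum"),
        deals=("opportunity_id", "count"),
    )
    return summary  # dbt materializes the returned DataFrame as a table
```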
 
Job ID: 8441718

Posted: 2 days ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Platform Engineer to design, build, and scale our next-generation data platform, the backbone powering our AI-driven insights.
This role sits at the intersection of data engineering, infrastructure, and MLOps, owning the architecture and reliability of our data ecosystem end-to-end.
You'll work closely with data scientists, R&D teams, and analysts to create a robust platform that supports varied use cases, complex ingestion, and AI-powered analytics.
Responsibilities:
Architect and evolve a scalable, cloud-native data platform that supports batch, streaming, analytics, and AI/LLM workloads across R&D.
Help define and implement standards for how data is modeled, stored, governed, and accessed
Design and build data lakes and data warehouses
Develop and maintain complex, reliable, and observable data pipelines
Implement data quality, validation, and monitoring frameworks
Collaborate with ML and data science teams to connect AI/LLM workloads to production data pipelines, enabling RAG, embeddings, and feature engineering flows.
Manage and optimize relational and non-relational datastores (Postgres, Elasticsearch, vector DBs, graph DBs).
Build internal tools and self-service capabilities that enable teams to easily ingest, transform, and consume data.
Contribute to data observability, governance, documentation, and platform visibility
Drive strong engineering practices
Evaluate and integrate emerging technologies that enhance scalability, reliability, and AI integration in the platform.
Requirements:
7+ years of experience building and operating data platforms
Strong Python programming skills
Proven experience with cloud data lakes and warehouses (Databricks, Snowflake, or equivalent).
Data orchestration experience (Airflow)
Solid understanding of AWS services
Proficiency with relational databases and search/analytics stores
Experience designing complex data pipelines, managing data quality, lineage, and observability in production.
Familiarity with CI/CD, GitOps, and IaC
Excellent understanding of distributed systems, data partitioning, and schema evolution.
Strong communication skills, ability to document and present technical designs clearly.
Advantages:
Experience with vector databases and graph databases
Experience integrating AI/LLM workloads into data pipelines (feature stores, retrieval pipelines, embeddings).
Familiarity with event streaming and CDC patterns.
Experience with data catalog, lineage, or governance tools
Knowledge of monitoring and alerting stacks
Hands-on experience with multi-source data product architectures.
This position is open to all candidates.
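To ground the embeddings/RAG collaboration the listing describes, here is a minimal retrieval sketch using sentence-transformers with cosine similarity in NumPy; the model name and corpus are placeholders, and a production system would persist vectors in a vector database rather than in memory.

```python
# Minimal embedding-retrieval sketch; model choice and corpus are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model

documents = [
    "Quarterly revenue grew 12% on new enterprise deals.",
    "The ingestion pipeline retries failed batches with backoff.",
]
doc_vectors = model.encode(documents, normalize_embeddings=True)

query = "How does the pipeline handle failures?"
query_vector = model.encode([query], normalize_embeddings=True)[0]

# With L2-normalized vectors, the dot product equals cosine similarity.
scores = doc_vectors @ query_vector
best = int(np.argmax(scores))
print(f"best match ({scores[best]:.2f}):", documents[best])
```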
 
Job ID: 8470086

Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Engineer to join our Platform group in the Data Infrastructure team.
You'll work hands-on to design and deliver data pipelines, distributed storage, and streaming services that keep our data platform performant and reliable. As a senior individual contributor, you will lead complex projects within the team, raise the bar on engineering best practices, and mentor mid-level engineers while collaborating closely with product, DevOps, and analytics stakeholders.
About the Platform group
The Platform Group accelerates our productivity by providing developers with tools, frameworks, and infrastructure services. We design, build, and maintain critical production systems, ensuring our platform can scale reliably. We also introduce new engineering capabilities to enhance our development process. As part of this group, you'll help shape the technical foundation that supports our entire engineering team.
Code & ship production-grade services, pipelines and data models that meet performance, reliability and security goals
Lead design and delivery of team-level projects from RFC through rollout and operational hand-off
Improve system observability, testing and incident response processes for the data stack
Partner with Staff Engineers and Tech Leads on architecture reviews and platform-wide standards
Mentor junior and mid-level engineers, fostering a culture of quality, ownership and continuous improvement
Stay current with evolving data-engineering tools and bring pragmatic innovations into the team.
Requirements:
5+ years of hands-on experience in backend or data engineering, including 2+ years at a senior level delivering production systems
Strong coding skills in Python, Kotlin, Java or Scala with emphasis on clean, testable, production-ready code
Proven track record designing, building and operating distributed data pipelines and storage (batch or streaming)
Deep experience with relational databases (PostgreSQL preferred) and working knowledge of at least one NoSQL or columnar/analytical store (e.g. SingleStore, ClickHouse, Redshift, BigQuery)
Solid hands-on experience with event-streaming platforms such as Apache Kafka
Familiarity with data-orchestration frameworks such as Airflow
Comfortable with modern CI/CD, observability and infrastructure-as-code practices in a cloud environment (AWS, GCP or Azure)
Ability to break down complex problems, communicate trade-offs clearly, and collaborate effectively with engineers and product partners
Bonus Skills
Experience building data governance or security/compliance-aware data platforms
Familiarity with Kubernetes, Docker, and infrastructure-as-code tools
Experience with data quality frameworks, lineage, or metadata tooling.
This position is open to all candidates.
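One small example of the reliability-minded pipeline work described above: an idempotent upsert into PostgreSQL with psycopg2, so that replayed batches do not create duplicates. The DSN and events table are placeholders, and it assumes a unique constraint on event_id.

```python
# Idempotent upsert sketch with psycopg2; DSN and table are placeholders,
# and an events(event_id) unique constraint is assumed.
import psycopg2

rows = [("evt-1", "signup", "2024-01-01"), ("evt-2", "login", "2024-01-01")]

conn = psycopg2.connect("dbname=analytics user=etl")  # placeholder DSN
try:
    with conn, conn.cursor() as cur:
        # ON CONFLICT makes replays safe: reprocessing a batch overwrites
        # existing rows instead of duplicating them.
        cur.executemany(
            """
            INSERT INTO events (event_id, event_type, event_date)
            VALUES (%s, %s, %s)
            ON CONFLICT (event_id) DO UPDATE
            SET event_type = EXCLUDED.event_type,
                event_date = EXCLUDED.event_date
            """,
            rows,
        )
finally:
    conn.close()
```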
 
Job ID: 8437264

Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're seeking our first Data Engineer to join the Revenue Operations team. This is a high-impact role where you'll build the foundations of our data infrastructure: connecting the dots between systems, designing and maintaining our data warehouse, and creating reliable pipelines that bring together all revenue-related data. You'll work directly with the Director of Revenue Operations and partner closely with Sales, Finance, and Customer Success.
This is a chance to shape the role from the ground up and create a scalable data backbone that powers smarter decisions across the company.
Role Overview
As the Data Engineer, you will own the design, implementation, and evolution of our data infrastructure. You'll connect core business systems (CRM, finance platforms, billing systems) into a central warehouse, ensure data quality, and make insights accessible to leadership and revenue teams. Your success will be measured by the accuracy, reliability, and usability of the data foundation you build.
Key Responsibilities
Data Infrastructure & Warehousing
Design, build, and maintain a scalable data warehouse for revenue-related data.
Build ETL/ELT pipelines that integrate data from HubSpot, Netsuite, billing platforms, ACP, and other business tools.
Develop a clear data schema and documentation that can scale as we grow.
Cross-Functional Collaboration
Work closely with Sales, Finance, and Customer Success to understand their reporting and forecasting needs.
Translate business requirements into data models that support dashboards, forecasting, and customer health metrics.
Act as the go-to partner for data-related questions across revenue teams.
Scalability & Optimization
Continuously monitor and optimize pipeline performance and warehouse scalability.
Ensure the infrastructure can handle increased data volume and complexity as the company grows.
Establish and enforce best practices for data quality, accuracy, and security.
Evaluate and implement new tools, frameworks, or architectures that improve automation, speed, and reliability.
Build reusable data models and modular pipelines to shorten development time and reduce maintenance.
Requirements:
4-6 years of experience as a Data Engineer or in a similar role (preferably in SaaS, Fintech, or fast-growing B2B companies).
Strong expertise in SQL and data modeling; comfort working with large datasets.
Hands-on experience building and maintaining ETL/ELT pipelines (using tools such as Fivetran, dbt, Airflow, or similar).
Experience designing and managing cloud-based data warehouses (Snowflake, BigQuery, Redshift, or similar).
Familiarity with CRM (HubSpot), ERP/finance systems (Netsuite), and billing platforms.
Strong understanding of revenue operations metrics (ARR, MRR, churn, LTV, CAC, etc.).
Ability to translate messy business requirements into clean, reliable data structures.
Solid communication skills: able to explain technical concepts to non-technical stakeholders.
What Sets You Apart
You've been the first data hire before and know how to build from scratch (not a must).
Strong business acumen with a focus on revenue operations.
A builder mindset: you like solving messy data problems and making systems talk.
Comfortable working across teams and translating business needs into data solutions.
This position is open to all candidates.
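Because the role leans on revenue metrics (ARR, MRR, churn, LTV, CAC), here is a minimal pandas sketch computing MRR by month from a hypothetical subscriptions extract; the column names are illustrative, not from HubSpot or Netsuite.

```python
# Minimal MRR-by-month sketch in pandas; the schema is hypothetical.
import pandas as pd

subscriptions = pd.DataFrame(
    {
        "customer_id": ["a", "b", "b"],
        "month": ["2024-01", "2024-01", "2024-02"],
        "monthly_fee": [100.0, 250.0, 250.0],
    }
)

# MRR is the sum of active recurring fees per month; ARR is simply MRR * 12.
mrr = (
    subscriptions.groupby("month", as_index=False)["monthly_fee"]
    .sum()
    .rename(columns={"monthly_fee": "mrr"})
)
print(mrr)
```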
 
Job ID: 8419332