Jobs » Software » Data Engineer - Applied AI Engineering Group

15/01/2026
This position was marked by the employer as no longer active.
Location: Tel Aviv-Yafo
Job Type: Full Time
Similar positions that may interest you
11/02/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Join our company's AI research group, a cross-functional team of ML engineers, researchers, and security experts building the next generation of AI-powered security capabilities. Our mission is to leverage large language models to understand code, configuration, and human language at scale, and to turn this understanding into security AI capabilities that will drive our company's future security solutions.
We foster a hands-on, research-driven culture where you'll work with large-scale data, modern ML infrastructure, and a global product footprint that impacts over 100,000 organizations worldwide.
Your Impact & Responsibilities
As a Data Engineer - AI Technologies, you will be responsible for building and operating the data foundation that enables our LLM and ML research: from ingestion and augmentation, through labeling and quality control, to efficient data delivery for training and evaluation.
You will:
Own data pipelines for LLM training and evaluation
Design, build and maintain scalable pipelines to ingest, transform and serve large-scale text, log, code and semi-structured data from multiple products and internal systems.
Drive data augmentation and synthetic data generation
Implement and operate pipelines for data augmentation (e.g., prompt-based generation, paraphrasing, negative sampling, multi-positive pairs) in close collaboration with ML Research Engineers.
Build tagging, labeling and annotation workflows
Support human-in-the-loop labeling, active learning loops and semi-automated tagging. Work with domain experts to implement tools, schemas and processes for consistent, high-quality annotations.
Ensure data quality, observability and governance
Define and monitor data quality checks (coverage, drift, anomalies, duplicates, PII), manage dataset versions, and maintain clear documentation and lineage for training and evaluation datasets.
Optimize training data flows for efficiency and cost
Design storage layouts and access patterns that reduce training time and cost (e.g., sharding, caching, streaming). Work with ML engineers to make sure the right data arrives at the right place, in the right format.
Build and maintain data infrastructure for LLM workloads
Work with cloud and platform teams to develop robust, production-grade infrastructure: data lakes / warehouses, feature stores, vector stores, and high-throughput data services used by training jobs and offline evaluation.
Collaborate closely with ML Research Engineers and security experts
Translate modeling and security requirements into concrete data tasks: dataset design, splits, sampling strategies, and evaluation data construction for specific security use cases.
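As a rough illustration of the quality-control work described above, the sketch below shows a minimal pre-training gate. It is hypothetical (the record shape, rules, and sample data are invented, not from the posting): it drops exact duplicates and records matching one obvious PII pattern.

```python
import hashlib
import re

# Hypothetical sketch: a minimal pre-training quality gate that drops exact
# duplicates and records matching an obvious PII pattern (emails only).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def quality_gate(records):
    """Yield records that pass the duplicate and naive-PII checks."""
    seen = set()
    for rec in records:
        digest = hashlib.sha256(rec["text"].encode()).hexdigest()
        if digest in seen:  # exact-duplicate filter
            continue
        seen.add(digest)
        if EMAIL_RE.search(rec["text"]):  # naive PII screen
            continue
        yield rec

sample = [
    {"text": "def add(a, b): return a + b"},
    {"text": "def add(a, b): return a + b"},      # duplicate -> dropped
    {"text": "contact me at alice@example.com"},  # PII -> dropped
]
clean = list(quality_gate(sample))
print(len(clean))  # 1
```

A production pipeline would add near-duplicate detection, drift and coverage metrics, and a broader PII taxonomy; this only shows the shape of the check.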
Requirements:
What You Bring
3+ years of hands-on experience as a Data Engineer or ML/Data Engineer, ideally in a product or platform team.
Strong programming skills in Python and experience with at least one additional language commonly used for data / backend (e.g., SQL, Scala, or Java).
Solid experience building ETL / ELT pipelines and batch/stream processing using tools such as Spark, Beam, Flink, Kafka, Airflow, Argo, or similar.
Experience working with cloud data platforms (e.g., AWS, GCP, Azure) and modern data storage technologies (object stores, data warehouses, data lakes).
Good understanding of data modeling, schema design, partitioning strategies and performance optimization for large datasets.
Familiarity with ML / LLM workflows: train/validation/test splits, dataset versioning, and the basics of model training and evaluation (you don't need to be the primary model researcher, but you understand what the models need from the data).
Strong software engineering practices: version control, code review, testing, CI/CD, and documentation.
Ability to work independently and in collaboration with ML engineers, researchers and security experts, and to translate high-level requirements into concrete data engineering tasks.
This position is open to women and men alike.
 
Job ID: 8541065
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an experienced Data Engineer to join our marketing team and take end-to-end ownership of our data platform and production data pipelines. In this role, you will be responsible for building robust, scalable, and observable data systems that power analytics, reporting, and downstream business use cases. You will work deeply hands-on with data infrastructure, modeling, and orchestration, and act as a key technical partner to the Marketing, Sales, Product, Business, and Finance teams.
This role suits someone who enjoys working close to the metal, designing systems that scale, and solving ambiguous data problems in a dynamic startup environment. You will play a critical role in shaping how data flows through the company, setting engineering standards, and ensuring data is trustworthy, performant, and ready for growth.
What You'll Do:
Design, build, and maintain scalable, reliable data pipelines and data warehouse architectures to support analytics and business intelligence needs.
Own the end-to-end ETL/ELT processes - ingesting data from internal and external sources, transforming it, and making it analytics-ready.
Model and optimize data structures (fact tables, dimensions, semantic layers) to support performant querying and reporting.
Ensure high standards of data quality, integrity, observability, and reliability across all data assets.
Partner closely with Analytics, Product, Marketing, and Finance teams to understand data requirements and deliver robust data solutions.
Implement monitoring, alerting, and testing frameworks to proactively identify data issues.
Optimize warehouse performance and cost efficiency (query optimization, partitioning, clustering, etc.).
Identify gaps in data collection and work with engineering teams to improve instrumentation and data availability.
Support experimentation and analytics use cases by enabling clean, trustworthy datasets for A/B testing and analysis.
Document data models, pipelines, and best practices to support scale and knowledge sharing.
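The fact/dimension modeling mentioned above can be pictured with a toy example. This is a hypothetical sketch (tables, keys, and values are invented): a small fact table of orders joined to a customer dimension and aggregated by segment, which is the shape of query a well-modeled warehouse serves.

```python
from collections import defaultdict

# Hypothetical star-schema fragment: one dimension, one fact table.
dim_customer = {  # dimension: customer_id -> attributes
    1: {"segment": "SMB"},
    2: {"segment": "Enterprise"},
    3: {"segment": "SMB"},
}
fact_orders = [  # fact: one row per order
    {"customer_id": 1, "amount": 100.0},
    {"customer_id": 2, "amount": 900.0},
    {"customer_id": 3, "amount": 50.0},
    {"customer_id": 1, "amount": 25.0},
]

# Join fact rows to the dimension and aggregate revenue by segment.
revenue_by_segment = defaultdict(float)
for row in fact_orders:
    segment = dim_customer[row["customer_id"]]["segment"]
    revenue_by_segment[segment] += row["amount"]

print(dict(revenue_by_segment))  # {'SMB': 175.0, 'Enterprise': 900.0}
```

In the warehouse itself this would be a SQL join over properly keyed tables; the point is only how dimensions contextualize facts for reporting.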
Requirements:
Bachelor's or Master's degree in Computer Science, Data Engineering, Software Engineering, or a related technical field.
3-5 years of hands-on experience as a Data Engineer, preferably in a SaaS or technology-driven environment.
Strong experience designing and maintaining data warehouses (e.g., Snowflake, BigQuery, Redshift).
Proven expertise with ETL/ELT tools and frameworks (e.g., Airflow, dbt, Talend, SSIS, Informatica, or similar).
Advanced SQL skills and solid proficiency in Python (or similar languages) for data processing and orchestration.
Strong understanding of data modeling, warehousing best practices, and analytics engineering concepts.
Experience integrating data from business systems such as Salesforce, HubSpot, or other SaaS platforms.
Familiarity with SaaS metrics and business concepts (ARR, churn, LTV, CAC) - from a data modeling perspective.
Experience supporting BI tools and analytics consumers (Tableau, Looker, Power BI, etc.).
Strong problem-solving skills, attention to detail, and a passion for building reliable data foundations.
Excellent communication skills and the ability to collaborate across technical and non-technical teams.
This position is open to all candidates.
 
Job ID: 8563348
04/02/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Responsibilities
Design, implement, and maintain robust data pipelines and ETL/ELT processes on GCP (BigQuery, Dataflow, Pub/Sub, etc.).
Build, orchestrate, and monitor workflows using Apache Airflow / Cloud Composer.
Develop scalable data models to support analytics, reporting, and operational workloads.
Apply software engineering best practices to data engineering: modular design, code reuse, testing, and version control.
Manage GCP resources (BigQuery reservations, Cloud Composer/Airflow DAGs, Cloud Storage, Dataplex, IAM).
Optimize data storage, query performance, and cost through partitioning, clustering, caching, and monitoring.
Collaborate with DevOps/DataOps to ensure data infrastructure is secure, reliable, and compliant.
Partner with analysts and data scientists to understand requirements and translate them into efficient data solutions.
Mentor junior engineers, provide code reviews, and promote engineering best practices.
Act as a subject matter expert for GCP data engineering tools and services.
Define and enforce standards for metadata, cataloging, and data documentation.
Implement monitoring and alerting for pipeline health, data freshness, and data quality.
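The freshness monitoring in the last bullet can be sketched simply. The snippet below is a hypothetical illustration (the SLA threshold and timestamps are invented): flag a table whose last successful load is older than its SLA.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical data-freshness check: breach = last load older than the SLA.
def is_stale(last_loaded, sla, now=None):
    """Return True if the table's last load breaches the freshness SLA."""
    now = now or datetime.now(timezone.utc)
    return now - last_loaded > sla

now = datetime(2026, 2, 4, 12, 0, tzinfo=timezone.utc)
last_load = datetime(2026, 2, 4, 6, 0, tzinfo=timezone.utc)
print(is_stale(last_load, timedelta(hours=4), now))  # True: 6h old vs a 4h SLA
```

In practice the timestamp would come from pipeline metadata (e.g., a load-audit table) and a breach would fire an alert rather than print.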
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
6+ years of professional experience in data engineering or similar roles, with 3+ years of hands-on work in a cloud environment, preferably on GCP.
Strong proficiency with BigQuery, Dataflow (Apache Beam), Pub/Sub, and Cloud Composer (Airflow).
Expert-level Python development skills, including object-oriented programming (OOP), testing, and code optimization.
Strong data modeling skills (dimensional modeling, star/snowflake schemas, normalized/denormalized designs).
Solid SQL expertise and experience with data warehousing concepts.
Familiarity with CI/CD, Terraform/Infrastructure as Code, and modern data observability tools.
Exposure to AI tools and methodologies (e.g., Vertex AI).
Strong problem-solving and analytical skills.
Ability to communicate complex technical concepts to non-technical stakeholders.
Experience working in agile, cross-functional teams.

Preferred Skills (Nice to Have):
Experience with Google Cloud Platform (GCP).
Experience with Dataplex for data cataloging and governance.
Knowledge of streaming technologies (Kafka, Confluent).
Experience with Looker.
Cloud certifications (Google Professional Data Engineer, Google Cloud Architect).
This position is open to all candidates.
 
Job ID: 8531425
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a highly skilled Senior Data Engineer with strong architectural expertise to design and evolve our next-generation data platform. You will define the technical vision, build scalable and reliable data systems, and guide the long-term architecture that powers analytics, operational decision-making, and data-driven products across the organization.
This role is both strategic and hands-on. You will evaluate modern data technologies, define engineering best practices, and lead the implementation of robust, high-performance data solutions, including the design, build, and lifecycle management of data pipelines that support batch, streaming, and near-real-time workloads.
🔧 What You'll Do
Architecture & Strategy
Own the architecture of our data platform, ensuring scalability, performance, reliability, and security.
Define standards and best practices for data modeling, transformation, orchestration, governance, and lifecycle management.
Evaluate and integrate modern data technologies and frameworks that align with our long-term platform strategy.
Collaborate with engineering and product leadership to shape the technical roadmap.
Engineering & Delivery
Design, build, and manage scalable, resilient data pipelines for batch, streaming, and event-driven workloads.
Develop clean, high-quality data models and schemas to support analytics, BI, operational systems, and ML workflows.
Implement data quality, lineage, observability, and automated testing frameworks.
Build ingestion patterns for APIs, event streams, files, and third-party data sources.
Optimize compute, storage, and transformation layers for performance and cost efficiency.
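To make the automated-testing bullet concrete, here is a hypothetical sketch (field names and rows are invented) of a row-level validation check of the kind a data-quality framework runs on ingested rows before they land in the warehouse:

```python
# Hypothetical row-level validation: report required fields that are missing.
def validate(row, required=("id", "event_ts")):
    """Return the list of required fields that are missing or null."""
    return [field for field in required if row.get(field) is None]

good = {"id": 1, "event_ts": "2026-01-01T00:00:00Z"}
bad = {"id": None, "event_ts": "2026-01-01T00:00:00Z"}
print(validate(good), validate(bad))  # [] ['id']
```

Real frameworks (dbt tests, Great Expectations, and the like) express the same idea declaratively and attach the results to lineage and alerting.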
Leadership & Collaboration
Serve as a senior technical leader and mentor within the data engineering team.
Lead architecture reviews, design discussions, and cross-team engineering initiatives.
Work closely with analysts, data scientists, software engineers, and product owners to define and deliver data solutions.
Communicate architectural decisions and trade-offs to technical and non-technical stakeholders.
Requirements:
6-10+ years of experience in Data Engineering, with demonstrated architectural ownership.
Expert-level experience with Snowflake (mandatory), including performance optimization, data modeling, security, and ecosystem components.
Expert proficiency in SQL and strong Python skills for pipeline development and automation.
Experience with modern orchestration tools (Airflow, Dagster, Prefect, or equivalent).
Strong understanding of ELT/ETL patterns, distributed processing, and data lifecycle management.
Familiarity with streaming/event technologies (Kafka, Kinesis, Pub/Sub, etc.).
Experience implementing data quality, observability, and lineage solutions.
Solid understanding of cloud infrastructure (AWS, GCP, or Azure).
Strong background in DataOps practices: CI/CD, testing, version control, automation.
Proven leadership in driving architectural direction and mentoring engineering teams.
Nice to Have:
Experience with data governance or metadata management tools.
Hands-on experience with DBT, including modeling, testing, documentation, and advanced features.
Exposure to machine learning pipelines, feature stores, or MLOps.
Experience with Terraform, CloudFormation, or other IaC tools.
Background designing systems for high scale, security, or regulated environments.
This position is open to all candidates.
 
Job ID: 8528005
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an experienced and passionate Staff Data Engineer to join our Data Platform group in TLV as a Tech Lead. As the group's Tech Lead, you'll shape and implement the technical vision and architecture while staying hands-on across three specialized teams: Data Engineering Infra, Machine Learning Platform, and Data Warehouse Engineering, which together form the backbone of the data ecosystem.
The group's mission is to build a state-of-the-art Data Platform that drives the company toward becoming the most precise and efficient insurance company on the planet. By embracing Data Mesh principles, we create tools that empower teams to own their data while leveraging a robust, self-serve data infrastructure. This approach enables Data Scientists, Analysts, Backend Engineers, and other stakeholders to seamlessly access, analyze, and innovate with reliable, well-modeled, and queryable data, at scale.
In this role you'll:
Technically lead the group by shaping the architecture, guiding design decisions, and ensuring the technical excellence of the Data Platforms three teams
Design and implement data solutions that address both applicative needs and data analysis requirements, creating scalable and efficient access to actionable insights
Drive initiatives in Data Engineering Infra, including building robust ingestion layers, managing streaming ETLs, and guaranteeing data quality, compliance, and platform performance
Develop and maintain the Data Warehouse, integrating data from various sources for optimized querying, analysis, and persistence, supporting informed decision-making
Leverage data modeling and transformations to structure, cleanse, and integrate data, enabling efficient retrieval and strategic insights
Build and enhance the Machine Learning Platform, delivering infrastructure and tools that streamline the work of Data Scientists, enabling them to focus on developing models while benefiting from automation for production deployment, maintenance, and improvements. Support cutting-edge use cases like feature stores, real-time models, point-in-time (PIT) data retrieval, and telematics-based solutions
Collaborate closely with other Staff Engineers across the organization to align on cross-organizational initiatives and technical strategies
Work seamlessly with Data Engineers, Data Scientists, Analysts, Backend Engineers, and Product Managers to deliver impactful solutions
Share knowledge, mentor team members, and champion engineering standards and technical excellence across the organization
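The point-in-time (PIT) retrieval mentioned above has a simple core. This hypothetical sketch (timestamps and values are invented) returns the latest feature value recorded at or before a label timestamp, so training data never leaks information from the future:

```python
import bisect

# Hypothetical point-in-time feature lookup over a sorted value history.
def pit_lookup(history, as_of):
    """history: list of (timestamp, value) sorted by timestamp ascending."""
    timestamps = [ts for ts, _ in history]
    i = bisect.bisect_right(timestamps, as_of)  # values known at or before as_of
    return history[i - 1][1] if i else None

history = [(1, "v1"), (5, "v2"), (9, "v3")]
print(pit_lookup(history, 6))  # 'v2': the value known at time 6
print(pit_lookup(history, 0))  # None: no value existed yet
```

A feature store implements this same as-of join at scale, typically against partitioned event logs rather than in-memory lists.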
Requirements:
8+ years of experience in data-related roles such as Data Engineer, Data Infrastructure Engineer, BI Engineer, or Machine Learning Platform Engineer, with significant experience in at least two of these areas
A B.Sc. in Computer Science or a related technical field (or equivalent experience)
Extensive expertise in designing and implementing Data Lakes and Data Warehouses, including strong skills in data modeling and building scalable storage solutions
Proven experience in building large-scale data infrastructures, including both batch processing and streaming pipelines
A deep understanding of Machine Learning infrastructure, including tools and frameworks that enable Data Scientists to efficiently develop, deploy, and maintain models in production, an advantage
Proficiency in Python, Pulumi/Terraform, Apache Spark, AWS, Kubernetes (K8s), and Kafka for building scalable, reliable, and high-performing data solutions
Strong knowledge of databases, including SQL (schema design, query optimization) and NoSQL, with a solid understanding of their use cases
Ability to work in an office environment a minimum of 3 days a week
Enthusiasm about learning and adapting to the exciting world of AI - a commitment to exploring this field is a fundamental part of our culture
This position is open to all candidates.
 
Job ID: 8547738
22/02/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Engineer.
As a Senior Data Engineer, you will play a key role in shaping and driving our analytics data pipelines and solutions to empower business insights and decisions. Collaborating with a variety of stakeholders, you will design, develop, and optimize scalable, high-performance data analytics infrastructures using modern tools and technologies. Your work will ensure data is accurate, timely, and actionable for critical decision-making.
Key Responsibilities:
Lead the design, development, and maintenance of robust data pipelines and ETL processes, handling diverse structured and unstructured data sources.
Collaborate with data analysts, data scientists, product engineers and product managers to deliver impactful data solutions.
Architect and maintain the infrastructure for ingesting, processing, and managing data in the analytics data warehouse.
Develop and optimize analytics-oriented data models to support business decision-making.
Champion data quality, consistency, and governance across the analytics layer.
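Handling structured and unstructured sources often comes down to flattening. This hypothetical sketch (the payload shape is invented) turns one nested semi-structured record into analytics-ready rows, a typical transform step in such pipelines:

```python
import json

# Hypothetical semi-structured payload and a flattening transform.
raw = '{"user": {"id": 7, "plan": "pro"}, "events": [{"t": 1}, {"t": 2}]}'

def flatten(payload):
    """Explode a nested JSON document into one flat row per event."""
    doc = json.loads(payload)
    return [
        {"user_id": doc["user"]["id"], "plan": doc["user"]["plan"], "t": event["t"]}
        for event in doc["events"]
    ]

rows = flatten(raw)
print(len(rows), rows[0]["plan"])  # 2 pro
```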
Requirements:
5+ years of experience as a Data Engineer or in a similar role.
Expertise in SQL and proficiency in Python for data engineering tasks.
Proven experience designing and implementing analytics-focused data models and warehouses.
Hands-on experience with data pipelines and ETL/ELT frameworks (e.g., Airflow, Luigi, AWS Glue, dbt).
Strong experience with cloud data services (e.g., AWS, GCP, Azure).
A deep passion for data and a strong analytical mindset with attention to detail.
Bonus points:
Strong understanding of business metrics and how to translate data into actionable insights
Experience with data visualization tools (e.g., Tableau, Power BI, Looker)
Familiarity with data governance and data quality best practices
Excellent communication skills to work with cross-functional teams including data analysts, data scientists, and product managers
This position is open to all candidates.
 
Job ID: 8555686
Posted 7 days ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Join our global data team building the infrastructure that powers analytics and AI-driven insights across the organization. As a Data Engineer, you'll design and maintain scalable data pipelines and systems that enable product analytics, business intelligence, and AI applications in our high-growth fintech environment.
What you'll do:
Design, build, and maintain robust batch and streaming pipelines and orchestration across our modern stack (AWS, DBT, Airbyte, Airflow, Snowflake) to support AI-driven data products.
Develop and optimize data models in Snowflake, ensuring data quality, consistency, and performance at scale.
Collaborate with Product Analysts and AI specialists to implement data solutions for customer segmentation, ranking systems, and predictive models.
Partner with cross-functional teams to translate technical requirements into scalable data architecture.
Implement end-to-end observability (data quality checks, monitoring, alerting) and cost/performance optimization in Snowflake and AWS.
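The customer-segmentation work above ultimately consumes rules or models over clean datasets. As a hypothetical sketch (the thresholds and labels are invented, not from the posting), a minimal rule-based segmenter looks like this:

```python
# Hypothetical rule-based customer segmentation over two aggregates.
def segment(total_spend, tx_count):
    """Assign a coarse segment from lifetime spend and transaction count."""
    if total_spend >= 1000 and tx_count >= 10:
        return "vip"
    if tx_count >= 3:
        return "active"
    return "dormant"

print(segment(1500.0, 12))  # 'vip'
print(segment(40.0, 1))     # 'dormant'
```

The data engineering value is upstream of this function: guaranteeing that `total_spend` and `tx_count` are computed consistently and fresh enough to trust.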
Requirements:
6+ years of experience as a Data Engineer or similar role.
Expert proficiency in SQL and hands-on experience with modern data warehouse platforms (Snowflake - advantage).
Strong experience building ETL/ELT pipelines using tools like DBT, Airflow, or similar orchestration frameworks.
Proficiency in Python or another programming language for data processing.
Solid understanding of data modeling techniques (dimensional modeling, Data Vault, etc).
Experience with cloud platforms, preferably AWS.
Proven ability to design and maintain reliable, scalable data systems with a strong focus on data quality.
Strong communication skills and the ability to work effectively with both technical and business stakeholders.
Experience with fintech, financial services, or cryptocurrency/blockchain - advantage.
Familiarity with real-time data processing or streaming technologies - advantage.
This position is open to all candidates.
 
Job ID: 8569095
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Data Engineer, Product Analytics
As a Data Engineer, you will shape the future of people-facing and business-facing products we build across our entire family of applications. Your technical skills and analytical mindset will be utilized in designing and building some of the world's most extensive data sets, helping to craft experiences for billions of people and hundreds of millions of businesses worldwide.
In this role, you will collaborate with software engineering, data science, and product management teams to design and build scalable data solutions that optimize growth, strategy, and user experience for our 3 billion plus users, as well as our internal employee community.
You will be at the forefront of identifying and solving some of the most interesting data challenges at a scale few companies can match. By joining us, you will become part of a world-class data engineering community dedicated to skill development and career growth in data engineering and beyond.
Data Engineering: You will guide teams by building optimal data artifacts (including datasets and visualizations) to address key questions. You will refine our systems, design logging solutions, and create scalable data models. Ensuring data security and quality, and with a focus on efficiency, you will suggest architecture and development approaches and data management standards to address complex analytical problems.
Product leadership: You will use data to shape product development, identify new opportunities, and tackle upcoming challenges. You'll ensure our products add value for users and businesses, by prioritizing projects, and driving innovative solutions to respond to challenges or opportunities.
Communication and influence: You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations. You will build credibility through structure and clarity, and be a trusted strategic partner.
Data Engineer, Product Analytics Responsibilities
Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems
Create and contribute to frameworks that improve the efficacy of logging data, while working with data infrastructure to triage issues and resolve
Collaborate with engineers, product managers, and data scientists to understand data needs, representing key data insights in a meaningful way
Define and manage Service Level Agreements for all data sets in allocated areas of ownership
Determine and implement the security model based on privacy requirements, confirm safeguards are followed, address data quality issues, and evolve governance processes within allocated areas of ownership
Design, build, and launch collections of sophisticated data models and visualizations that support multiple use cases across different products or domains
Solve our most challenging data integration problems, utilizing optimal Extract, Transform, Load (ETL) patterns, frameworks, query techniques, sourcing from structured and unstructured data sources
Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts
Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts.
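One recurring ETL pattern behind the integration work above is the idempotent upsert merge. This hypothetical sketch (the schema and rows are invented) shows why idempotence matters: re-running the same batch leaves the target unchanged, so retries are safe.

```python
# Hypothetical idempotent upsert merge: replaying a batch is a no-op.
def merge(target, updates, key="id"):
    """Upsert each update row into `target`, keyed by `key`."""
    for row in updates:
        target[row[key]] = row
    return target

target = {1: {"id": 1, "name": "old"}}
batch = [{"id": 1, "name": "new"}, {"id": 2, "name": "fresh"}]
merge(target, batch)
merge(target, batch)  # idempotent: applying the batch again changes nothing
print(len(target), target[1]["name"])  # 2 new
```

Warehouse engines express the same pattern as a MERGE statement over a staging table; the key design choice is the merge key.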
Requirements:
Minimum Qualifications
Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent
4+ years of experience where the primary responsibility involves working with data. This could include roles such as data analyst, data scientist, data engineer, or similar positions
4+ years of experience (or a minimum of 2+ years with a Ph.D) with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala, etc.)
Preferred Qualifications
Master's or Ph.D degree in a STEM field.
This position is open to all candidates.
 
Job ID: 8536011
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We're seeking an outstanding and passionate Data Platform Engineer to join our growing R&D team.

You will work in an energetic startup environment following Agile concepts and methodologies. Joining the company at this unique and exciting stage in our growth journey creates an exceptional opportunity to take part in shaping Finaloop's data infrastructure at the forefront of Fintech and AI.

What you'll do:

Design, build, and maintain scalable data pipelines and ETL processes for our financial data platform.

Develop and optimize data infrastructure to support real-time analytics and reporting.

Implement data governance, security, and privacy controls to ensure data quality and compliance.

Create and maintain documentation for data platforms and processes.

Collaborate with data scientists and analysts to deliver actionable insights to our customers.

Troubleshoot and resolve data infrastructure issues efficiently.

Monitor system performance and implement optimizations.

Stay current with emerging technologies and implement innovative solutions.
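The troubleshooting and reliability bullets above often reduce to handling transient failures gracefully. This hypothetical sketch (the failure behavior is simulated) retries a flaky extract step with exponential backoff, a common resilience pattern in ingestion pipelines:

```python
import time

# Hypothetical retry wrapper: exponential backoff around a flaky step.
def with_retry(fn, attempts=3, base_delay=0.01):
    """Call fn(), retrying transient failures with exponential backoff."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** i))

calls = {"n": 0}
def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network error")
    return ["row1", "row2"]

print(with_retry(flaky_extract))  # succeeds on the third attempt
```

Orchestrators like Airflow provide this via task-level `retries` settings; the sketch just shows the mechanism.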
Requirements:
What you'll bring:

3+ years experience in data engineering or platform engineering roles.

Strong programming skills in Python and SQL.

Experience with orchestration platforms like Airflow/Dagster/Temporal.

Experience with MPPs like Snowflake/Redshift/Databricks.

Hands-on experience with cloud platforms (AWS) and their data services.

Understanding of data modeling, data warehousing, and data lake concepts.

Ability to optimize data infrastructure for performance and reliability.

Experience working with containerization (Docker) in Kubernetes environments.

Familiarity with CI/CD concepts.

Fluent in English, both written and verbal.

And it would be great if you have (optional):

Experience with big data processing frameworks (Apache Spark, Hadoop).

Experience with stream processing technologies (Flink, Kafka, Kinesis).

Knowledge of infrastructure as code (Terraform).

Experience building analytics platforms.

Experience building clickstream pipelines.

Familiarity with machine learning workflows and MLOps.

Experience working in a startup environment or fintech industry.
This position is open to all candidates.
 
Job ID: 8573265
16/02/2026
חברה חסויה
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Platform Engineer to design, build, and scale our next-generation data platform, the backbone powering our AI-driven insights.
This role sits at the intersection of data engineering, infrastructure, and MLOps, owning the architecture and reliability of our data ecosystem end-to-end.
You'll work closely with data scientists, R&D teams, and analysts to create a robust platform that supports varying use cases, complex ingestion, and AI-powered analytics.
Responsibilities:
Architect and evolve a scalable, cloud-native data platform that supports batch, streaming, analytics, and AI/LLM workloads across R&D.
Help define and implement standards for how data is modeled, stored, governed, and accessed
Design and build data lakes and data warehouses
Develop and maintain complex, reliable, and observable data pipelines
Implement data quality, validation, and monitoring frameworks
Collaborate with ML and data science teams to connect AI/LLM workloads to production data pipelines, enabling RAG, embeddings, and feature engineering flows.
Manage and optimize relational and non-relational datastores (Postgres, Elasticsearch, vector DBs, graph DBs).
Build internal tools and self-service capabilities that enable teams to easily ingest, transform, and consume data.
Contribute to data observability, governance, documentation, and platform visibility
Drive strong engineering practices
Evaluate and integrate emerging technologies that enhance scalability, reliability, and AI integration in the platform.
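At the heart of the RAG and embedding flows above is similarity search. This hypothetical sketch (the vectors are invented; real systems delegate this to a vector database) shows the retrieval step: rank stored embeddings by cosine similarity to a query vector.

```python
import math

# Hypothetical cosine-similarity retrieval over a tiny in-memory "store".
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

store = {"doc_a": [1.0, 0.0], "doc_b": [0.7, 0.7], "doc_c": [0.0, 1.0]}
query = [1.0, 0.1]
best = max(store, key=lambda doc_id: cosine(store[doc_id], query))
print(best)  # 'doc_a'
```

The platform work is everything around this call: keeping embeddings fresh as source data changes, and wiring retrieved documents into the LLM prompt.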
Requirements:
7+ years experience building/operating data platforms
Strong Python programming skills
Proven experience with cloud data lakes and warehouses (Databricks, Snowflake, or equivalent).
Data orchestration experience (Airflow)
Solid understanding of AWS services
Proficiency with relational databases and search/analytics stores
Experience designing complex data pipelines, managing data quality, lineage, and observability in production.
Familiarity with CI/CD, GitOps, and IaC
Excellent understanding of distributed systems, data partitioning, and schema evolution.
Strong communication skills, ability to document and present technical designs clearly.
Advantages:
Experience with vector databases and graph databases
Experience integrating AI/LLM workloads into data pipelines (feature stores, retrieval pipelines, embeddings).
Familiarity with event streaming and CDC patterns.
Experience with data catalog, lineage, or governance tools
Knowledge of monitoring and alerting stacks
Hands-on experience with multi-source data product architectures.
This position is open to all candidates.
 
Job ID: 8548254