Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Responsibilities:
Set the direction of our data architecture and determine the right tools for the right jobs. We collaborate on the requirements, and then you call the shots on what gets built.
Manage end-to-end execution of high-performance, large-scale data-driven projects, including design, implementation, and ongoing maintenance.
Optimize and monitor the team-related cloud costs.
Design and construct monitoring tools to ensure the efficiency and reliability of data processes.
Implement CI/CD for Data Workflows.
Requirements:
5+ years of experience in data engineering and big data at large scale. - Must
Extensive experience with the modern data stack - Must:
1. Snowflake, Delta Lake, Iceberg, BigQuery, Redshift.
2. Kafka, RabbitMQ, or similar for real-time data processing.
3. PySpark, Databricks.
Strong software development background with Python/OOP and hands-on experience in building large-scale data pipelines. - Must.
Hands-on experience with Docker and Kubernetes. - Must.
Expertise in ETL development, data modeling, and data warehousing best practices.
Knowledge of monitoring & observability (Datadog, Prometheus, ELK, etc).
Experience with infrastructure as code, deployment automation, and CI/CD.
Practices using tools such as Helm, ArgoCD, Terraform, GitHub Actions, and Jenkins.
This position is open to all candidates.
 
Job ID: 8566263
Posted: 11/02/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Join our company's AI research group, a cross-functional team of ML engineers, researchers, and security experts building the next generation of AI-powered security capabilities. Our mission is to leverage large language models to understand code, configuration, and human language at scale, and to turn this understanding into security AI capabilities that will drive our company's future security solutions.
We foster a hands-on, research-driven culture where you'll work with large-scale data, modern ML infrastructure, and a global product footprint that impacts over 100,000 organizations worldwide.
Your Impact & Responsibilities
As a Data Engineer - AI Technologies, you will be responsible for building and operating the data foundation that enables our LLM and ML research: from ingestion and augmentation, through labeling and quality control, to efficient data delivery for training and evaluation.
You will:
Own data pipelines for LLM training and evaluation
Design, build and maintain scalable pipelines to ingest, transform and serve large-scale text, log, code and semi-structured data from multiple products and internal systems.
Drive data augmentation and synthetic data generation
Implement and operate pipelines for data augmentation (e.g., prompt-based generation, paraphrasing, negative sampling, multi-positive pairs) in close collaboration with ML Research Engineers.
Build tagging, labeling and annotation workflows
Support human-in-the-loop labeling, active learning loops and semi-automated tagging. Work with domain experts to implement tools, schemas and processes for consistent, high-quality annotations.
Ensure data quality, observability and governance
Define and monitor data quality checks (coverage, drift, anomalies, duplicates, PII), manage dataset versions, and maintain clear documentation and lineage for training and evaluation datasets.
Optimize training data flows for efficiency and cost
Design storage layouts and access patterns that reduce training time and cost (e.g., sharding, caching, streaming). Work with ML engineers to make sure the right data arrives at the right place, in the right format.
Build and maintain data infrastructure for LLM workloads
Work with cloud and platform teams to develop robust, production-grade infrastructure: data lakes / warehouses, feature stores, vector stores, and high-throughput data services used by training jobs and offline evaluation.
Collaborate closely with ML Research Engineers and security experts
Translate modeling and security requirements into concrete data tasks: dataset design, splits, sampling strategies, and evaluation data construction for specific security use cases.
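As a flavor of what the quality-control side of such pipelines involves, here is a minimal sketch of exact-duplicate removal plus naive PII flagging over text records. The function name, patterns, and routing are illustrative assumptions, not the team's actual tooling; production systems typically add near-duplicate detection and dedicated PII scanners.

```python
import hashlib
import re

# Naive email/phone patterns -- real PII screening would use a dedicated library.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),   # email addresses
    re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"),    # phone-like digit runs
]

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial variants hash identically."""
    return " ".join(text.lower().split())

def quality_filter(records):
    """Drop exact (normalized) duplicates; flag records containing PII-like strings."""
    seen, clean, flagged = set(), [], []
    for text in records:
        digest = hashlib.sha256(normalize(text).encode()).hexdigest()
        if digest in seen:
            continue                      # duplicate: skip
        seen.add(digest)
        if any(p.search(text) for p in PII_PATTERNS):
            flagged.append(text)          # route to review / redaction
        else:
            clean.append(text)
    return clean, flagged
```

Hashing the normalized text keeps memory bounded even over very large corpora, since only digests are retained.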
Requirements:
What You Bring
3+ years of hands-on experience as a Data Engineer or ML/Data Engineer, ideally in a product or platform team.
Strong programming skills in Python and experience with at least one additional language commonly used for data / backend (e.g., SQL, Scala, or Java).
Solid experience building ETL / ELT pipelines and batch/stream processing using tools such as Spark, Beam, Flink, Kafka, Airflow, Argo, or similar.
Experience working with cloud data platforms (e.g., AWS, GCP, Azure) and modern data storage technologies (object stores, data warehouses, data lakes).
Good understanding of data modeling, schema design, partitioning strategies and performance optimization for large datasets.
Familiarity with ML / LLM workflows: train/validation/test splits, dataset versioning, and the basics of model training and evaluation (you don't need to be the primary model researcher, but you understand what the models need from the data).
Strong software engineering practices: version control, code review, testing, CI/CD, and documentation.
Ability to work independently and in collaboration with ML engineers, researchers and security experts, and to translate high-level requirements into concrete data engineering tasks.
This position is open to all candidates.
 
Job ID: 8541065
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a highly skilled Senior Data Engineer with strong architectural expertise to design and evolve our next-generation data platform. You will define the technical vision, build scalable and reliable data systems, and guide the long-term architecture that powers analytics, operational decision-making, and data-driven products across the organization.
This role is both strategic and hands-on. You will evaluate modern data technologies, define engineering best practices, and lead the implementation of robust, high-performance data solutions, including the design, build, and lifecycle management of data pipelines that support batch, streaming, and near-real-time workloads.
🔧 What You'll Do
Architecture & Strategy
Own the architecture of our data platform, ensuring scalability, performance, reliability, and security.
Define standards and best practices for data modeling, transformation, orchestration, governance, and lifecycle management.
Evaluate and integrate modern data technologies and frameworks that align with our long-term platform strategy.
Collaborate with engineering and product leadership to shape the technical roadmap.
Engineering & Delivery
Design, build, and manage scalable, resilient data pipelines for batch, streaming, and event-driven workloads.
Develop clean, high-quality data models and schemas to support analytics, BI, operational systems, and ML workflows.
Implement data quality, lineage, observability, and automated testing frameworks.
Build ingestion patterns for APIs, event streams, files, and third-party data sources.
Optimize compute, storage, and transformation layers for performance and cost efficiency.
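A recurring building block behind resilient batch pipelines of this kind is incremental loading against a persisted high-watermark. The sketch below is warehouse-agnostic, and the row shape and names are hypothetical:

```python
def incremental_batch(rows, watermark):
    """Select only rows changed since the last successful load.
    Returns the batch plus the new watermark, which the caller persists
    only after the batch commits -- so a failed run is safely re-runnable."""
    fresh = [r for r in rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in fresh), default=watermark)
    return fresh, new_watermark
```

ISO-8601 timestamp strings compare correctly as plain strings, which keeps the sketch dependency-free; in Snowflake or dbt the same pattern appears as an incremental model filtered on `updated_at`.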
Leadership & Collaboration
Serve as a senior technical leader and mentor within the data engineering team.
Lead architecture reviews, design discussions, and cross-team engineering initiatives.
Work closely with analysts, data scientists, software engineers, and product owners to define and deliver data solutions.
Communicate architectural decisions and trade-offs to technical and non-technical stakeholders.
Requirements:
6-10+ years of experience in Data Engineering, with demonstrated architectural ownership.
Expert-level experience with Snowflake (mandatory), including performance optimization, data modeling, security, and ecosystem components.
Expert proficiency in SQL and strong Python skills for pipeline development and automation.
Experience with modern orchestration tools (Airflow, Dagster, Prefect, or equivalent).
Strong understanding of ELT/ETL patterns, distributed processing, and data lifecycle management.
Familiarity with streaming/event technologies (Kafka, Kinesis, Pub/Sub, etc.).
Experience implementing data quality, observability, and lineage solutions.
Solid understanding of cloud infrastructure (AWS, GCP, or Azure).
Strong background in DataOps practices: CI/CD, testing, version control, automation.
Proven leadership in driving architectural direction and mentoring engineering teams.
Nice to Have:
Experience with data governance or metadata management tools.
Hands-on experience with DBT, including modeling, testing, documentation, and advanced features.
Exposure to machine learning pipelines, feature stores, or MLOps.
Experience with Terraform, CloudFormation, or other IaC tools.
Background designing systems for high scale, security, or regulated environments.
This position is open to all candidates.
 
Job ID: 8528005
Posted: 04/02/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Responsibilities
Design, implement, and maintain robust data pipelines and ETL/ELT processes on GCP (BigQuery, Dataflow, Pub/Sub, etc.).
Build, orchestrate, and monitor workflows using Apache Airflow / Cloud Composer.
Develop scalable data models to support analytics, reporting, and operational workloads.
Apply software engineering best practices to data engineering: modular design, code reuse, testing, and version control.
Manage GCP resources (BigQuery reservations, Cloud Composer/Airflow DAGs, Cloud Storage, Dataplex, IAM).
Optimize data storage, query performance, and cost through partitioning, clustering, caching, and monitoring.
Collaborate with DevOps/DataOps to ensure data infrastructure is secure, reliable, and compliant.
Partner with analysts and data scientists to understand requirements and translate them into efficient data solutions.
Mentor junior engineers, provide code reviews, and promote engineering best practices.
Act as a subject matter expert for GCP data engineering tools and services.
Define and enforce standards for metadata, cataloging, and data documentation.
Implement monitoring and alerting for pipeline health, data freshness, and data quality.
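The freshness-monitoring responsibility above boils down to comparing each dataset's last successful load against an SLA. A minimal sketch follows; the dataset names and SLA values are invented for illustration, and in practice the result would feed Cloud Monitoring alerts rather than a return value.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical SLAs: maximum allowed age per dataset, in hours.
FRESHNESS_SLA_HOURS = {"orders": 24, "events": 1}

def stale_datasets(last_loaded, now=None):
    """Return dataset names whose last successful load breaches their SLA."""
    now = now or datetime.now(timezone.utc)
    return sorted(
        name
        for name, loaded_at in last_loaded.items()
        if now - loaded_at > timedelta(hours=FRESHNESS_SLA_HOURS.get(name, 24))
    )
```

Driving the check from a declarative SLA table keeps alerting policy reviewable separately from pipeline code.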
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
6+ years of professional experience in data engineering or similar roles, with 3+ years of hands-on work in a cloud environment, preferably on GCP.
Strong proficiency with BigQuery, Dataflow (Apache Beam), Pub/Sub, and Cloud Composer (Airflow).
Expert-level Python development skills, including object-oriented programming (OOP), testing, and code optimization.
Strong data modeling skills (dimensional modeling, star/snowflake schemas, normalized/denormalized designs).
Solid SQL expertise and experience with data warehousing concepts.
Familiarity with CI/CD, Terraform/Infrastructure as Code, and modern data observability tools.
Exposure to AI tools and methodologies (e.g., Vertex AI).
Strong problem-solving and analytical skills.
Ability to communicate complex technical concepts to non-technical stakeholders.
Experience working in agile, cross-functional teams.

Preferred Skills (Nice to Have):
Experience with Google Cloud Platform (GCP).
Experience with Dataplex for data cataloging and governance.
Knowledge of streaming technologies (Kafka, Confluent).
Experience with Looker.
Cloud certifications (Google Professional Data Engineer, Google Cloud Architect).
This position is open to all candidates.
 
Job ID: 8531425
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Data Engineer II - GenAI
20718
Leadership/Team Quote:
This opening is for the Content Intelligence team within the Marketplace AI department.
The Content Intelligence team is at the forefront of Generative AI innovation, driving solutions for travel-related chatbots, text generation and summarization applications, Q&A systems, and free-text search. Beyond this, the team is building a cutting-edge platform that processes millions of images and textual inputs daily, enriching them with ML capabilities. These enriched datasets power downstream applications, helping personalize the customer experience, for example by selecting and displaying the most relevant images and reviews as customers plan and book their next vacation.
Role Description:
As a Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects: ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.
You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.
Key Job Responsibilities and Duties:
Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
Dealing with massive textual sources to train GenAI foundation models.
Solving issues with data and data pipelines, prioritizing based on customer impact.
End-to-end ownership of data quality in our core datasets and data pipelines.
Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
Providing tools that improve Data Quality company-wide, specifically for ML scientists.
Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
Acting as an intermediary for problems, communicating with both technical and non-technical audiences.
Promoting and driving impactful and innovative engineering solutions.
Advancing your technical, behavioral, and interpersonal competence via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation.
Collaborating with multidisciplinary teams: work with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions; provide technical guidance and mentorship to junior team members.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 3 years of experience as a Data Engineer or a similar role, with a consistent record of successfully delivering ML/Data solutions.
You have built production data pipelines in the cloud, setting up data-lake and serverless solutions; you have hands-on experience with schema design and data modeling, and have worked with ML scientists and ML engineers to deliver production-level ML solutions.
You have experience designing systems end to end and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.).
Strong programming skills in languages such as Python and Java.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.
Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems.
Experience with Data Warehousing and ETL/ELT pipelines.
Experience in data processing for large-scale language models like GPT, BERT, or similar architectures - an advantage.
This position is open to all candidates.
 
Job ID: 8560110
Posted: 05/02/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We act as the central nervous system for engineering, enabling platform teams to unify their stack and expose it as a governed layer through golden paths for developers and AI agents.
By combining rich engineering context, workflows, and actions, we help organizations transition from manual processes to autonomous, AI-assisted engineering workflows while maintaining control and accountability.
As a product-led company, we believe in building world-class platforms that fundamentally shape how modern engineering organizations operate.
What you'll do:
Lead the design and development of scalable and efficient data lake solutions that account for high-volume data coming from a large number of sources, both pre-determined and custom.
Utilize advanced data modeling techniques to create robust data structures supporting reporting and analytics needs.
Implement ETL/ELT processes to assist in the extraction, transformation, and loading of data from various sources into a data lake that will serve our company's users.
Identify and address performance bottlenecks within our data warehouse, optimize queries and processes, and enhance data retrieval efficiency.
Collaborate with cross-functional teams (product, analytics, and R&D) to enhance our company's data solutions.
Who you'll work with:
You'll be joining a collaborative and dynamic team of talented and experienced developers where creativity and innovation thrive.
You'll closely collaborate with our dedicated Product Managers and Designers, working hand in hand to bring our developer portal product to life.
Additionally, you will have the opportunity to work closely with our customers and engage with our product community. Your insights and interactions with them will play an important role in ensuring we deliver the best product possible.
Together, we'll continue to empower platform engineers and developers worldwide, providing them with the tools they need to create seamless and robust developer portals. Join us in our mission to revolutionize the developer experience!
Requirements:
5+ years of experience in a Data Engineering role
Expertise in building scalable pipelines and ETL/ELT processes, with proven experience with data modeling
Expert-level proficiency in SQL and experience with large-scale datasets
Strong experience with Snowflake
Strong experience with cloud data platforms and storage solutions such as AWS S3, or Redshift
Hands-on experience with ETL/ELT tools and orchestration frameworks such as Apache Airflow and dbt
Experience with Python and software development
Strong analytical and storytelling capabilities, with a proven ability to translate data into actionable insights for business users
Collaborative mindset with experience working cross-functionally with data engineers and product managers
Excellent communication and documentation skills, including the ability to write clear data definitions, dashboard guides, and metric logic
Advantages:
Experience with Node.js + TypeScript
Experience with streaming data technologies such as Kafka or Kinesis
Familiarity with containerization tools such as Docker and Kubernetes
Knowledge of data governance and data security practices.
This position is open to all candidates.
 
Job ID: 8533929
Posted: 22/02/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're seeking an experienced and skilled Data and AI Infra Engineer to join our Data Infrastructure team and drive the company's data capabilities at scale.
As the company is fast growing, the mission of the data and AI infrastructure team is to ensure the company can manage data at scale efficiently and seamlessly through robust and reliable data infrastructure.
A day in the life and how you'll make an impact:
As a Senior Engineer, you are required to independently lead the design, development, and optimization of our data infrastructure, collaborating closely with software engineers, data scientists, data engineers, and other key stakeholders. You are expected to own critical initiatives, influence architectural decisions, and mentor engineers to foster a high-performing team.
You will:
Lead the design and development of scalable, reliable, and secure data storage, processing, and access systems.
Define and drive best practices for CI/CD processes, ensuring seamless deployment and automation of data services.
Oversee and optimize our machine learning platform for training, releasing, serving, and monitoring models in production.
Own and develop the company-wide LLM infrastructure, enabling teams to efficiently build and deploy projects leveraging LLM capabilities.
Own the company's feature store, ensuring high-quality, reusable, and consistent features for ML and analytics use cases.
Architect and implement real-time event processing and data enrichment solutions, empowering teams with high-quality, real-time insights.
Partner with cross-functional teams to integrate data and machine learning models into products and services.
Ensure that our data systems are compliant with the data governance requirements of our customers and industry best practices.
Mentor and guide engineers, fostering a culture of innovation, knowledge sharing, and continuous improvement.
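To make the feature-store responsibility above concrete, here is a toy in-memory store illustrating its core consistency requirement: point-in-time ("as-of") retrieval, so training jobs never see feature values that were not yet known at event time. The class and feature names are invented; real platforms (e.g., Feast) layer storage, TTLs, and online/offline sync on top of the same idea.

```python
from bisect import bisect_right
from collections import defaultdict

class FeatureStore:
    """Toy in-memory feature store with point-in-time reads."""

    def __init__(self):
        self._ts = defaultdict(list)   # (entity, feature) -> sorted timestamps
        self._val = defaultdict(list)  # parallel list of values

    def write(self, entity, feature, ts, value):
        key = (entity, feature)
        i = bisect_right(self._ts[key], ts)   # keep timestamps sorted on insert
        self._ts[key].insert(i, ts)
        self._val[key].insert(i, value)

    def read_asof(self, entity, feature, ts):
        """Latest value written at or before ts, or None if none existed yet."""
        key = (entity, feature)
        i = bisect_right(self._ts[key], ts)
        return self._val[key][i - 1] if i else None
```

Reading "as of" the label timestamp, rather than "latest", is what prevents label leakage when the same store backs both training and serving.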
Requirements:
7+ years of experience in data infra or backend engineering.
Strong knowledge of data services architecture and MLOps.
Experience with cloud-based data infrastructure such as AWS, GCP, or Azure.
Deep experience with SQL and NoSQL databases.
Experience with Data Warehouse technologies such as Snowflake and Databricks.
Proficiency in backend programming languages like Python, NodeJS, or an equivalent.
Proven leadership experience, including mentoring engineers and driving technical initiatives.
Strong communication, collaboration, and stakeholder management skills.
Bonus Points:
Experience leading teams working with serverless technologies like AWS Lambda.
Hands-on experience with TypeScript in backend environments.
Familiarity with Large Language Models (LLMs) and AI infrastructure.
Experience building infrastructure for Data Science and Machine Learning.
Experience collaborating with BI developers and analysts to drive business value.
Expertise in administering and managing Databricks clusters.
Experience with streaming technologies such as Amazon Kinesis and Apache Kafka.
This position is open to all candidates.
 
Job ID: 8555763
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Data Engineer I - GenAI Foundation Models
21679
Leadership/Team Quote:
This opening is for the Content Intelligence team within the Marketplace AI department.
The Content Intelligence team is at the forefront of Generative AI innovation, driving solutions for travel-related chatbots, text generation and summarization applications, Q&A systems, and free-text search. Beyond this, the team is building a cutting-edge platform that processes millions of images and textual inputs daily, enriching them with ML capabilities. These enriched datasets power downstream applications, helping personalize the customer experience, for example by selecting and displaying the most relevant images and reviews as customers plan and book their next vacation.
Role Description:
As a Senior Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects: ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.
You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.
Key Job Responsibilities and Duties:
Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
Dealing with massive textual sources to train GenAI foundation models.
Solving issues with data and data pipelines, prioritizing based on customer impact.
End-to-end ownership of data quality in our core datasets and data pipelines.
Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
Providing tools that improve Data Quality company-wide, specifically for ML scientists.
Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
Acting as an intermediary for problems, communicating with both technical and non-technical audiences.
Promoting and driving impactful and innovative engineering solutions.
Advancing your technical, behavioral, and interpersonal competence via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation.
Collaborating with multidisciplinary teams: work with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions; provide technical guidance and mentorship to junior team members.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 6 years of experience as a Data Engineer or a similar role, with a consistent record of successfully delivering ML/Data solutions.
You have built production data pipelines in the cloud, setting up data-lake and serverless solutions; you have hands-on experience with schema design and data modeling, and have worked with ML scientists and ML engineers to deliver production-level ML solutions.
You have experience designing systems end to end and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.).
Strong programming skills in languages such as Python and Java.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.
Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems.
Experience with Data Warehousing and ETL/ELT pipelines.
This position is open to all candidates.
 
Job ID: 8560108
Confidential company
Job Type: Full Time
Required Data Infrastructure Engineer
Binyamina & Tel Aviv
About the Role:
We use cutting-edge innovations in financial technology to bring leading data and features that allow individuals to be qualified instantly, making purchases at the point-of-sale fast, fair and easy for consumers from all walks of life.
As part of our Data Engineering team, you will not only build scalable data platforms but also directly enable portfolio growth by supporting new funding capabilities, loan sales and securitization, and improving cost efficiency through automated and trusted data flows that evolve our accounting processes.
Responsibilities:
Design and build data solutions that support our core business goals, from enabling capital market transactions (loan sales and securitization) to providing reliable insights for reducing the cost of capital.
Develop advanced data pipelines and analytics to support finance, accounting, and product growth initiatives.
Create ELT processes and SQL queries to bring data to the data warehouse and other data sources.
Develop data-driven finance products that accelerate funding capabilities and automate accounting reconciliations.
Own and evolve data lake pipelines, maintenance, schema management, and improvements.
Create new features from scratch, enhance existing features, and optimize existing functionality.
Collaborate with stakeholders across Finance, Product, Backend Engineering, and Data Science to align technical work with business outcomes.
Implement new tools and modern development approaches that improve both scalability and business agility.
Ensure adherence to coding best practices and development of reusable code.
Constantly monitor the data platform and make recommendations to enhance architecture, performance, and cost efficiency.
Requirements:
4+ years of experience as a Data Engineer.
4+ years of Python and SQL experience.
4+ years of direct experience with SQL (Redshift/Snowflake), data modeling, data warehousing, and building ELT/ETL pipelines (DBT & Airflow preferred).
3+ years of experience in scalable data architecture, fault-tolerant ETL, and data quality monitoring in the cloud.
Hands-on experience with cloud environments (AWS preferred) and big data technologies (EMR, EC2, S3, Snowflake, Spark Streaming, Kafka, DBT).
Strong troubleshooting and debugging skills in large-scale systems.
Deep understanding of distributed data processing and tools such as Kafka, Spark, and Airflow.
Experience with design patterns, coding best practices, and data modeling.
Proficiency with Git and modern source control.
Basic Linux/Unix system administration skills.
Nice to Have:
Familiarity with fintech business processes (funding, securitization, loan servicing, accounting) - a huge advantage.
BS/MS in Computer Science or related field.
Experience with NoSQL or large-scale DBs.
DevOps experience in AWS.
Microservices experience.
2+ years of experience in Spark and the broader Data Engineering ecosystem.
This position is open to all candidates.
 
Job ID: 8541607
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a talented Data Engineer to join our BI & Data team in Tel Aviv. You will play a pivotal role in building and optimizing the data infrastructure that powers our business. In this mid-level position, your primary focus will be on developing a robust single source of truth (SSOT) for revenue data, along with scalable data pipelines and reliable orchestration processes. If you are passionate about crafting efficient data solutions and ensuring data accuracy for decision-making, this role is for you.



Responsibilities:

Pipeline Development & Integration

- Design, build, and maintain robust data pipelines that aggregate data from various core systems into our data warehouse (BigQuery/Athena), with a special focus on our revenue Single Source of Truth (SSOT).

- Integrate new data sources (e.g. advertising platforms, content syndication feeds, financial systems) into the ETL/ELT workflow, ensuring seamless data flow and consolidation.

- Implement automated solutions for ingesting third-party data (leveraging tools like Rivery and scripts) to streamline data onboarding and reduce manual effort.

- Leverage AI-assisted development tools (e.g., Cursor, GitHub Copilot) to accelerate pipeline development.

Optimization & Reliability

- Optimize ETL processes and SQL queries for performance and cost-efficiency - for example, refactoring and cleaning pipeline code to reduce runtime and cloud processing costs.

- Develop modular, reusable code frameworks and templates for common data tasks (e.g., ingestion patterns, error handling) to accelerate future development and minimize technical debt.

- Orchestrate and schedule data workflows to run reliably (e.g. consolidating daily jobs, setting up dependent task flows) so that critical datasets are refreshed on time.

- Monitor pipeline execution and data quality on a daily basis, quickly troubleshooting issues or data discrepancies to maintain high uptime and trust in the data.

Collaboration & Documentation

- Work closely with analysts and business stakeholders to understand data requirements and ensure the infrastructure meets evolving analytics needs (such as incorporating new revenue streams or content cost metrics into the SSOT).

- Document the data architecture, pipeline processes, and data schemas in a clear way so that the data ecosystem is well-understood across the team.

- Continuously research and recommend improvements or new technologies (e.g. leveraging AI tools for data mapping or anomaly detection) to enhance our data platform's capabilities and reliability, ensuring our data ecosystem remains a competitive advantage.
Requirements:
4+ years of experience as a Data Engineer (or in a similar data infrastructure role), building and managing data pipelines at scale, with hands-on experience in workflow orchestration and scheduling (Cron, Airflow, or built-in scheduler tools).
Strong SQL skills and experience working with large-scale databases or data warehouses (ideally Google BigQuery or AWS Athena).
Solid understanding of data warehousing concepts, data modeling, and maintaining a single source of truth for enterprise data.
Demonstrated experience in data auditing and integrity testing, with the ability to build 'trust dashboards' or alerts that demonstrate data reliability to executive stakeholders.
Proficiency in a programming/scripting language (e.g. Python) for automating data tasks and building custom integrations.
This position is open to all candidates.
 
05/02/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a Senior Data Engineer to join our Data team.
The Data team develops and maintains the infrastructure for internal data and product analytics.
In this role, you will design and manage complex data pipelines and work closely with data analysts,
software engineers, and other stakeholders to continuously improve data processes and solutions.
What you will do:
Architect, develop, and maintain scalable, end-to-end data pipelines from diverse data sources
Monitor and maintain data systems, ensuring uptime, reliability, and stability
Provide technical expertise and insights to shape overall data strategy and best practices
Collaborate as a strong team player, communicating clearly with stakeholders.
Requirements:
5+ years of professional experience in data engineering, with a proven track record in building and managing large-scale data pipelines
3+ years of experience with Python
Demonstrated expertise in designing and implementing data lake/warehouse solutions
Strong background in ETL processes, data integration, and big data technologies
Proficiency in data modeling, business logic processes, and data warehouse design
Preferred Qualifications
Background in backend development
Experience with Elasticsearch
Familiarity with modern data processing frameworks and tools such as Spark, Kubernetes, Docker
Bachelor's degree in Computer Science, Industrial Engineering, or a related analytical discipline (or equivalent experience).
This position is open to all candidates.
 