Jobs » Data » Data Engineer

Posted 5 hours ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
This role joins the team that acts as the backbone of mavens (Zynga & T2). You will build and scale the mission-critical data infrastructure that powers our products, moving data at high volume and low latency. This is a hands-on, fast-paced role focused on ingestion, processing, and backend engineering.

What You'll Do

Build & Scale: Design robust, observable external data integration pipelines and backend services (a minimal sketch follows the requirements list).
Manage Data: Maintain data lake/warehouse layers using modern formats (Apache Iceberg, BigQuery).
Model: Design schemas for flexibility, performance, and cost-efficiency.
Collaborate: Work globally with engineers to integrate external APIs and support product needs.
Requirements:
Experience: 3+ years in Data or Backend Engineering working on distributed systems.
Core Stack: Strong Python coding and proficient SQL (optimization/modeling).
Architecture: Deep knowledge of batch processing and event-driven architectures (e.g., Kafka, Pub/Sub).
Tools: Experience with Cloud (GCP/AWS), Orchestration (Airflow), and RDBMS (PostgreSQL, BigQuery, Redshift).
Mindset: You are pragmatic, curious, and excited by real-time challenges and clear communication.
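For illustration only, not part of the original listing: a minimal sketch of the external-API-to-BigQuery ingestion this role describes. The API URL, table name, and record shape are hypothetical, and it assumes the requests and google-cloud-bigquery packages with default GCP credentials.

import requests
from google.cloud import bigquery

# Hypothetical source and destination; replace with real values.
SOURCE_URL = "https://api.example.com/v1/events"
TABLE_ID = "my-project.raw_layer.external_events"

def ingest_once() -> int:
    """Pull one page of records from the external API and append them to BigQuery."""
    resp = requests.get(SOURCE_URL, timeout=30)
    resp.raise_for_status()
    rows = resp.json()  # assumed: a list of flat JSON records

    client = bigquery.Client()
    # With no explicit schema, the client asks BigQuery to autodetect one.
    job = client.load_table_from_json(rows, TABLE_ID)
    job.result()  # block until the load job finishes; raises on load errors
    return len(rows)

if __name__ == "__main__":
    print(f"loaded {ingest_once()} rows")

In a production pipeline this function would be one idempotent task inside an orchestrator such as Airflow, with retries and observability around it.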
This position is open to all candidates.
 
Job ID: 8528650
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking an experienced Solutions Data Engineer who possesses both technical depth and strong interpersonal skills to partner with internal and external teams and develop scalable, flexible, cutting-edge solutions. Solutions Engineers collaborate with operations and business development to help craft solutions that meet customers' business problems.
A Solutions Engineer works to balance various aspects of the project, from safety to design. Additionally, a Solutions Engineer researches advanced technology regarding best practices in the field and seeks cost-effective solutions.
Job Description:
We're looking for a Solutions Engineer with deep experience in Big Data technologies, real-time data pipelines, and scalable infrastructure: someone who's been delivering critical systems under pressure and knows what it takes to bring complex data architectures to life. This isn't just about checking boxes on tech stacks; it's about solving real-world data problems, collaborating with smart people, and building robust, future-proof solutions.
In this role, you'll partner closely with engineering, product, and customers to design and deliver high-impact systems that move, transform, and serve data at scale. You'll help customers architect pipelines that are not only performant and cost-efficient but also easy to operate and evolve.
We want someone who's comfortable switching hats between low-level debugging, high-level architecture, and communicating clearly with stakeholders of all technical levels.
Key Responsibilities:
Build distributed data pipelines using technologies like Kafka, Spark (batch & streaming), Python, Trino, Airflow, and S3-compatible data lakes, designed for scale, modularity, and seamless integration across real-time and batch workloads (see the sketch after this list).
Design, deploy, and troubleshoot hybrid cloud/on-prem environments using Terraform, Docker, Kubernetes, and CI/CD automation tools.
Implement event-driven and serverless workflows with precise control over latency, throughput, and fault tolerance trade-offs.
Create technical guides, architecture docs, and demo pipelines to support onboarding, evangelize best practices, and accelerate adoption across engineering, product, and customer-facing teams.
Integrate data validation, observability tools, and governance directly into the pipeline lifecycle.
Own end-to-end platform lifecycle: ingestion → transformation → storage (Parquet/ORC on S3) → compute layer (Trino/Spark).
Benchmark and tune storage backends (S3/NFS/SMB) and compute layers for throughput, latency, and scalability using production datasets.
Work cross-functionally with R&D to push performance limits across interactive, streaming, and ML-ready analytics workloads.
Operate and debug object store-backed data lake infrastructure, enabling schema-on-read access, high-throughput ingestion, advanced searching strategies, and performance tuning for large-scale workloads.
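For illustration only, not part of the original listing: a minimal PySpark Structured Streaming sketch of the Kafka-to-S3 pattern listed above. Broker, topic, bucket, and schema are hypothetical; it assumes the spark-sql-kafka connector and S3 credentials are configured.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("kafka-to-s3-sketch").getOrCreate()

# Assumed event shape; a real pipeline would version this schema.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("ts", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "events")
       .load())

# Kafka values arrive as bytes; cast to string, parse JSON, and flatten.
parsed = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", event_schema).alias("e"))
          .select("e.*"))

# Checkpointing makes the stream restartable with exactly-once file output.
query = (parsed.writeStream
         .format("parquet")
         .option("path", "s3a://example-lake/events/")
         .option("checkpointLocation", "s3a://example-lake/_checkpoints/events/")
         .trigger(processingTime="1 minute")
         .start())
query.awaitTermination()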
Requirements:
2-4 years in software / solution or infrastructure engineering, with 2-4 years focused on building / maintaining large-scale data pipelines / storage & database solutions.
Proficiency in Trino, Spark (Structured Streaming & batch) and solid working knowledge of Apache Kafka.
Coding background in Python (must-have); familiarity with Bash and scripting tools is a plus.
Deep understanding of data storage architectures including SQL, NoSQL, and HDFS.
Solid grasp of DevOps practices, including containerization (Docker), orchestration (Kubernetes), and infrastructure provisioning (Terraform).
Experience with distributed systems, stream processing, and event-driven architecture.
Hands-on familiarity with benchmarking and performance profiling for storage systems, databases, and analytics engines.
Excellent communication skills: you'll be expected to explain your thinking clearly, guide customer conversations, and collaborate across engineering and product teams.
This position is open to all candidates.
 
Job ID: 8512434
Posted 10 hours ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a highly skilled Senior Data Engineer with strong architectural expertise to design and evolve our next-generation data platform. You will define the technical vision, build scalable and reliable data systems, and guide the long-term architecture that powers analytics, operational decision-making, and data-driven products across the organization.
This role is both strategic and hands-on. You will evaluate modern data technologies, define engineering best practices, and lead the implementation of robust, high-performance data solutions-including the design, build, and lifecycle management of data pipelines that support batch, streaming, and near-real-time workloads.
🔧 What You'll Do
Architecture & Strategy
Own the architecture of our data platform, ensuring scalability, performance, reliability, and security.
Define standards and best practices for data modeling, transformation, orchestration, governance, and lifecycle management.
Evaluate and integrate modern data technologies and frameworks that align with our long-term platform strategy.
Collaborate with engineering and product leadership to shape the technical roadmap.
Engineering & Delivery
Design, build, and manage scalable, resilient data pipelines for batch, streaming, and event-driven workloads.
Develop clean, high-quality data models and schemas to support analytics, BI, operational systems, and ML workflows.
Implement data quality, lineage, observability, and automated testing frameworks.
Build ingestion patterns for APIs, event streams, files, and third-party data sources.
Optimize compute, storage, and transformation layers for performance and cost efficiency.
Leadership & Collaboration
Serve as a senior technical leader and mentor within the data engineering team.
Lead architecture reviews, design discussions, and cross-team engineering initiatives.
Work closely with analysts, data scientists, software engineers, and product owners to define and deliver data solutions.
Communicate architectural decisions and trade-offs to technical and non-technical stakeholders.
Requirements:
6-10+ years of experience in Data Engineering, with demonstrated architectural ownership.
Expert-level experience with Snowflake (mandatory), including performance optimization, data modeling, security, and ecosystem components.
Expert proficiency in SQL and strong Python skills for pipeline development and automation.
Experience with modern orchestration tools (Airflow, Dagster, Prefect, or equivalent).
Strong understanding of ELT/ETL patterns, distributed processing, and data lifecycle management.
Familiarity with streaming/event technologies (Kafka, Kinesis, Pub/Sub, etc.).
Experience implementing data quality, observability, and lineage solutions (see the sketch after this list).
Solid understanding of cloud infrastructure (AWS, GCP, or Azure).
Strong background in DataOps practices: CI/CD, testing, version control, automation.
Proven leadership in driving architectural direction and mentoring engineering teams
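For illustration only, not part of the original listing: a minimal freshness check of the kind the observability requirement above points at. Account, credentials, and the fct_orders/loaded_at names are hypothetical; it assumes the snowflake-connector-python package.

import snowflake.connector

# Placeholder credentials; in practice these come from a secrets manager.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="ANALYTICS_WH", database="ANALYTICS", schema="MARTS",
)

FRESHNESS_SQL = """
    SELECT DATEDIFF('minute', MAX(loaded_at), CURRENT_TIMESTAMP()) AS lag_minutes
    FROM fct_orders
"""

cur = conn.cursor()
cur.execute(FRESHNESS_SQL)
lag_minutes = cur.fetchone()[0]
conn.close()

# The 60-minute threshold is arbitrary; a real check would emit a metric or page.
if lag_minutes is None or lag_minutes > 60:
    raise RuntimeError(f"fct_orders is stale: lag={lag_minutes} minutes")
print(f"fct_orders freshness OK ({lag_minutes} minutes)")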
Nice to Have:
Experience with data governance or metadata management tools.
Hands-on experience with DBT, including modeling, testing, documentation, and advanced features.
Exposure to machine learning pipelines, feature stores, or MLOps.
Experience with Terraform, CloudFormation, or other IaC tools.
Background designing systems for high scale, security, or regulated environments.
This position is open to all candidates.
 
Job ID: 8528005
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an experienced and visionary Data Platform Engineer to help design, build and scale our BI platform from the ground up.
In this role, you will be responsible for building the foundations of our data analytics platform - enabling scalable data pipelines and robust data modeling to support real-time and batch analytics, ML models and business insights that serve both business intelligence and product needs.
You will be part of the R&D team, collaborating closely with engineers, analysts, and product managers to deliver a modern data architecture that supports internal dashboards and future-facing operational analytics.
If you enjoy architecting from scratch, turning raw data into powerful insights, and owning the full data lifecycle - this role is for you!
Responsibilities
Take full ownership of the design and implementation of a scalable and efficient BI data infrastructure, ensuring high performance, reliability and security.
Lead the design and architecture of the data platform - from integration to transformation, modeling, storage, and access.
Build and maintain ETL/ELT pipelines, batch and real-time, to support analytics, reporting, and product integrations.
Establish and enforce best practices for data quality, lineage, observability, and governance to ensure accuracy and consistency.
Integrate modern tools and frameworks such as Airflow, dbt, Databricks, Power BI, and streaming platforms (see the sketch after this list).
Collaborate cross-functionally with product, engineering, and analytics teams to translate business needs into data infrastructure.
Promote a data-driven culture - be an advocate for data-driven decision-making across the company by empowering stakeholders with reliable and self-service data access.
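For illustration only, not part of the original listing: a minimal Airflow DAG sketch of the extract-then-dbt flow implied above. Paths, task layout, and the daily schedule are hypothetical; it assumes Airflow 2.4+ and a dbt project available on the worker.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="bi_platform_elt_sketch",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Hypothetical extraction script that lands raw data in the lakehouse.
    extract = BashOperator(
        task_id="extract_sources",
        bash_command="python /opt/pipelines/extract_sources.py",
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/bi_platform",
    )
    test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/bi_platform",
    )
    extract >> transform >> test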
Requirements:
5+ years of hands-on experience in data engineering and in building data products for analytics and business intelligence.
Strong hands-on experience with ETL orchestration tools (Apache Airflow), and data lakehouses (e.g., Snowflake/BigQuery/Databricks)
Vast knowledge in both batch processing and streaming processing (e.g., Kafka, Spark Streaming).
Proficiency in Python, SQL, and cloud data engineering environments (AWS, Azure, or GCP).
Familiarity with data visualization tools (e.g., Power BI, Looker, or similar).
BSc in Computer Science or a related field from a leading university
Nice to have
Experience working in early-stage projects, building data systems from scratch.
Background in building operational analytics pipelines, in which analytical data feeds real-time product business logic.
Hands-on experience with ML model training pipelines.
Experience in cost optimization in modern cloud environments.
Knowledge of data governance principles, compliance, and security best practices.
This position is open to all candidates.
 
Job ID: 8482840
21/01/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Your Mission
As a Senior Data Engineer, your mission is to build the scalable, reliable data foundation that empowers us to make data-driven decisions. You will serve as a bridge between complex business needs and technical implementation, translating raw data into high-value assets. You will own the entire data lifecycle, from ingestion to insight, ensuring that our analytics infrastructure scales as fast as our business.

Key Responsibilities:
Strategic Data Modeling: Translate complex business requirements into efficient, scalable data models and schemas. You will design the logic that turns raw events into actionable business intelligence.
Pipeline Architecture: Design, implement, and maintain resilient data pipelines that serve multiple business domains. You will ensure data flows reliably, securely, and with low latency across our ecosystem.
End-to-End Ownership: Own the data development lifecycle completely, from architectural design and testing to deployment, maintenance, and observability.
Cross-Functional Partnership: Partner closely with Data Analysts, Data Scientists, and Software Engineers to deliver end-to-end data solutions.
Requirements:
What You Bring:
Your Mindset:
Data as a Product: You treat data pipelines and tables with the same rigor as production APIs; reliability, versioning, and uptime matter to you.
Business Acumen: You don't just move data; you understand the business questions behind the query and design solutions that provide answers.
Builder's Spirit: You work independently to balance functional needs with non-functional requirements (scale, cost, performance).
Your Experience & Qualifications:
Must Haves:
6+ years of experience as a Data Engineer, BI Developer, or similar role.
Modern Data Stack: Strong hands-on experience with DBT, Snowflake, Databricks, and orchestration tools like Airflow.
SQL & Modeling: Strong proficiency in SQL and a deep understanding of data warehousing concepts such as star and snowflake schemas (see the sketch after the Preferred / Bonus list).
Data Modeling: Proven experience in data modeling and business logic design for complex domains, building models that are efficient and maintainable.
Modern Workflow: Proven experience leveraging AI assistants to accelerate data engineering tasks.
Bachelor's degree in Computer Science, Industrial Engineering, Mathematics, or an equivalent analytical discipline.
Preferred / Bonus:
Cloud Data Warehouses: Experience with BigQuery or Redshift.
Coding Skills: Proficiency in Python for data processing and automation.
Big Data Tech: Familiarity with Spark, Kubernetes, Docker.
BI Integration: Experience serving data to BI tools such as Looker, Tableau, or Superset.
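For illustration only, not part of the original listing: a minimal star-schema sketch of the warehousing concepts the must-haves call for. All table and column names are invented; the DDL is generic ANSI-style SQL carried as Python strings.

# Dimensions first, then the fact table that references them.
DIM_CUSTOMER = """
CREATE TABLE IF NOT EXISTS dim_customer (
    customer_key  BIGINT PRIMARY KEY,
    customer_id   VARCHAR,     -- natural key from the source system
    segment       VARCHAR,
    country       VARCHAR,
    valid_from    TIMESTAMP,   -- SCD2 validity window
    valid_to      TIMESTAMP
)
"""

DIM_DATE = """
CREATE TABLE IF NOT EXISTS dim_date (
    date_key      INTEGER PRIMARY KEY,   -- e.g. 20260121
    calendar_date DATE,
    month_name    VARCHAR,
    year_number   INTEGER
)
"""

FCT_ORDERS = """
CREATE TABLE IF NOT EXISTS fct_orders (
    order_key     BIGINT,
    customer_key  BIGINT REFERENCES dim_customer (customer_key),
    date_key      INTEGER REFERENCES dim_date (date_key),
    order_amount  NUMERIC(18, 2),
    order_count   INTEGER
)
"""

if __name__ == "__main__":
    for ddl in (DIM_CUSTOMER, DIM_DATE, FCT_ORDERS):
        print(ddl.strip() + ";")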
This position is open to all candidates.
 
Job ID: 8511741
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a talented Data Engineer to join our BI & Data team in Tel Aviv. You will play a pivotal role in building and optimizing the data infrastructure that powers our business. In this mid-level position, your primary focus will be on developing a robust single source of truth (SSOT) for revenue data, along with scalable data pipelines and reliable orchestration processes. If you are passionate about crafting efficient data solutions and ensuring data accuracy for decision-making, this role is for you.



Responsibilities:

Pipeline Development & Integration

- Design, build, and maintain robust data pipelines that aggregate data from various core systems into our data warehouse (BigQuery/Athena), with a special focus on our revenue Single Source of Truth (SSOT).

- Integrate new data sources (e.g. advertising platforms, content syndication feeds, financial systems) into the ETL/ELT workflow, ensuring seamless data flow and consolidation.

- Implement automated solutions for ingesting third-party data (leveraging tools like Rivery and scripts) to streamline data onboarding and reduce manual effort.

- Leverage AI-assisted development tools (e.g., Cursor, GitHub Copilot) to accelerate pipeline development.

Optimization & Reliability

- Optimize ETL processes and SQL queries for performance and cost-efficiency - for example, refactoring and cleaning pipeline code to reduce runtime and cloud processing costs.

- Develop modular, reusable code frameworks and templates for common data tasks (e.g., ingestion patterns, error handling) to accelerate future development and minimize technical debt.

- Orchestrate and schedule data workflows to run reliably (e.g. consolidating daily jobs, setting up dependent task flows) so that critical datasets are refreshed on time.

- Monitor pipeline execution and data quality on a daily basis, quickly troubleshooting issues or data discrepancies to maintain high uptime and trust in the data.

Collaboration & Documentation

- Work closely with analysts and business stakeholders to understand data requirements and ensure the infrastructure meets evolving analytics needs (such as incorporating new revenue streams or content cost metrics into the SSOT).

- Document the data architecture, pipeline processes, and data schemas in a clear way so that the data ecosystem is well-understood across the team.

- Continuously research and recommend improvements or new technologies (e.g. leveraging AI tools for data mapping or anomaly detection) to enhance our data platform's capabilities and reliability, and to ensure our data ecosystem remains a competitive advantage.
Requirements:
4+ years of experience as a Data Engineer (or in a similar data infrastructure role), building and managing data pipelines at scale, with hands-on experience with workflow orchestration and scheduling (Cron, Airflow, or built-in scheduler tools)
Strong SQL skills and experience working with large-scale databases or data warehouses (ideally Google BigQuery or AWS Athena).
Solid understanding of data warehousing concepts, data modeling, and maintaining a single source of truth for enterprise data.
Demonstrated experience in data auditing and integrity testing, with the ability to build 'trust dashboards' or alerts that prove data reliability to executive stakeholders (see the sketch after this list).
Proficiency in a programming/scripting language (e.g. Python) for automating data tasks and building custom integrations.
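For illustration only, not part of the original listing: a minimal revenue-SSOT audit sketch matching the trust requirement above. Project, dataset, and column names are hypothetical; it assumes the google-cloud-bigquery package and default credentials.

from google.cloud import bigquery

# Compares one day of raw billing events against the revenue SSOT table.
AUDIT_SQL = """
    SELECT
      (SELECT SUM(amount) FROM `my-project.raw.billing_events`
       WHERE DATE(event_ts) = @d) AS source_total,
      (SELECT SUM(revenue) FROM `my-project.ssot.revenue_daily`
       WHERE revenue_date = @d) AS ssot_total
"""

def audit_revenue(day: str, tolerance: float = 0.01) -> None:
    client = bigquery.Client()
    job = client.query(
        AUDIT_SQL,
        job_config=bigquery.QueryJobConfig(
            query_parameters=[bigquery.ScalarQueryParameter("d", "DATE", day)]
        ),
    )
    source_total, ssot_total = next(iter(job.result()))
    drift = abs((ssot_total or 0) - (source_total or 0))
    # The 1% tolerance is arbitrary; a real audit would feed a trust dashboard.
    if drift > tolerance * max(source_total or 0, 1):
        raise RuntimeError(f"revenue drift on {day}: source={source_total}, ssot={ssot_total}")
    print(f"{day}: SSOT matches source within tolerance")

if __name__ == "__main__":
    audit_revenue("2026-01-20")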
This position is open to all candidates.
 
Job ID: 8524462
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an experienced and passionate Staff Data Engineer to join our Data Platform group in TLV as a Tech Lead. As the group's Tech Lead, you'll shape and implement the technical vision and architecture while staying hands-on across three specialized teams: Data Engineering Infra, Machine Learning Platform, and Data Warehouse Engineering, forming the backbone of our company's data ecosystem.
The group's mission is to build a state-of-the-art Data Platform that drives our company toward becoming the most precise and efficient insurance company on the planet. By embracing Data Mesh principles, we create tools that empower teams to own their data while leveraging a robust, self-serve data infrastructure. This approach enables Data Scientists, Analysts, Backend Engineers, and other stakeholders to seamlessly access, analyze, and innovate with reliable, well-modeled, and queryable data, at scale.
In this role you'll:
Technically lead the group by shaping the architecture, guiding design decisions, and ensuring the technical excellence of the Data Platform's three teams
Design and implement data solutions that address both applicative needs and data analysis requirements, creating scalable and efficient access to actionable insights
Drive initiatives in Data Engineering Infra, including building robust ingestion layers, managing streaming ETLs, and guaranteeing data quality, compliance, and platform performance
Develop and maintain the Data Warehouse, integrating data from various sources for optimized querying, analysis, and persistence, supporting informed decision-making
Leverage data modeling and transformations to structure, cleanse, and integrate data, enabling efficient retrieval and strategic insights
Build and enhance the Machine Learning Platform, delivering infrastructure and tools that streamline the work of Data Scientists, enabling them to focus on developing models while benefiting from automation for production deployment, maintenance, and improvements. Support cutting-edge use cases like feature stores, real-time models, point-in-time (PIT) data retrieval, and telematics-based solutions (see the PIT sketch after this list)
Collaborate closely with other Staff Engineers across our company to align on cross-organizational initiatives and technical strategies
Work seamlessly with Data Engineers, Data Scientists, Analysts, Backend Engineers, and Product Managers to deliver impactful solutions
Share knowledge, mentor team members, and champion engineering standards and technical excellence across the organization.
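For illustration only, not part of the original listing: a minimal point-in-time (PIT) retrieval sketch in PySpark, the pattern named in the ML-platform bullet above. Table and column names are invented; it assumes the two tables are already registered with Spark.

from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pit-join-sketch").getOrCreate()

labels = spark.table("training_labels")      # entity_id, label_ts, label
features = spark.table("feature_snapshots")  # entity_id, feature_ts, feature_value

# For each label row keep only feature rows that existed at label time,
# then take the most recent one -- this prevents future-data leakage.
joined = (labels.join(features, "entity_id")
          .where(F.col("feature_ts") <= F.col("label_ts")))
w = Window.partitionBy("entity_id", "label_ts").orderBy(F.col("feature_ts").desc())
pit = (joined.withColumn("rn", F.row_number().over(w))
       .where(F.col("rn") == 1)
       .drop("rn"))
pit.show()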
Requirements:
8+ years of experience in data-related roles such as Data Engineer, Data Infrastructure Engineer, BI Engineer, or Machine Learning Platform Engineer, with significant experience in at least two of these areas
A B.Sc. in Computer Science or a related technical field (or equivalent experience)
Extensive expertise in designing and implementing Data Lakes and Data Warehouses, including strong skills in data modeling and building scalable storage solutions
Proven experience in building large-scale data infrastructures, including both batch processing and streaming pipelines
A deep understanding of Machine Learning infrastructure, including tools and frameworks that enable Data Scientists to efficiently develop, deploy, and maintain models in production, an advantage
Proficiency in Python, Pulumi/Terraform, Apache Spark, AWS, Kubernetes (K8s), and Kafka for building scalable, reliable, and high-performing data solutions
Strong knowledge of databases, including SQL (schema design, query optimization) and NoSQL, with a solid understanding of their use cases
Ability to work in an office environment a minimum of 3 days a week
Enthusiasm about learning and adapting to the exciting world of AI - a commitment to exploring this field is a fundamental part of our culture.
This position is open to all candidates.
 
Job ID: 8482879
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We are looking for a Senior Data Engineer to join our Platform group in the Data Infrastructure team.
You'll work hands-on to design and deliver data pipelines, distributed storage, and streaming services that keep our data platform performant and reliable. As a senior individual contributor you will lead complex projects within the team, raise the bar on engineering best practices, and mentor mid-level engineers, while collaborating closely with product, DevOps, and analytics stakeholders.
About the Platform group
The Platform Group accelerates our productivity by providing developers with tools, frameworks, and infrastructure services. We design, build, and maintain critical production systems, ensuring our platform can scale reliably. We also introduce new engineering capabilities to enhance our development process. As part of this group, you'll help shape the technical foundation that supports our entire engineering team.
Code & ship production-grade services, pipelines and data models that meet performance, reliability and security goals
Lead design and delivery of team-level projects - from RFC through rollout and operational hand-off
Improve system observability, testing and incident response processes for the data stack
Partner with Staff Engineers and Tech Leads on architecture reviews and platform-wide standards
Mentor junior and mid-level engineers, fostering a culture of quality, ownership and continuous improvement
Stay current with evolving data-engineering tools and bring pragmatic innovations into the team.
Requirements:
5+ years of hands-on experience in backend or data engineering, including 2+ years at a senior level delivering production systems
Strong coding skills in Python, Kotlin, Java or Scala with emphasis on clean, testable, production-ready code
Proven track record designing, building and operating distributed data pipelines and storage (batch or streaming)
Deep experience with relational databases (PostgreSQL preferred) and working knowledge of at least one NoSQL or columnar/analytical store (e.g. SingleStore, ClickHouse, Redshift, BigQuery)
Solid hands-on experience with event-streaming platforms such as Apache Kafka (see the sketch after the Bonus Skills list)
Familiarity with data-orchestration frameworks such as Airflow
Comfortable with modern CI/CD, observability and infrastructure-as-code practices in a cloud environment (AWS, GCP or Azure)
Ability to break down complex problems, communicate trade-offs clearly, and collaborate effectively with engineers and product partners
Bonus Skills
Experience building data governance or security/compliance-aware data platforms
Familiarity with Kubernetes, Docker, and infrastructure-as-code tools
Experience with data quality frameworks, lineage, or metadata tooling.
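For illustration only, not part of the original listing: a minimal Kafka-to-PostgreSQL sketch combining the event-streaming and relational requirements above. Broker, topic, table, and DSN are hypothetical; it assumes the kafka-python and psycopg2 packages.

import json

import psycopg2
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "user-events",
    bootstrap_servers="broker:9092",
    group_id="pg-sink-sketch",
    enable_auto_commit=False,
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
conn = psycopg2.connect("dbname=analytics user=etl")

# Idempotent upsert keyed on event_id, so replays do not duplicate rows.
UPSERT = """
    INSERT INTO user_events (event_id, user_id, payload)
    VALUES (%s, %s, %s)
    ON CONFLICT (event_id) DO UPDATE SET payload = EXCLUDED.payload
"""

for msg in consumer:
    e = msg.value
    with conn, conn.cursor() as cur:  # commits per message; batch in real code
        cur.execute(UPSERT, (e["event_id"], e["user_id"], json.dumps(e)))
    consumer.commit()  # advance the Kafka offset only after the DB write succeeds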
This position is open to all candidates.
 
Job ID: 8499899
Posted 4 days ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Our data engineering team is looking for an experienced professional with expertise in SQL, Python, and strong data modeling skills. In this role, you will be at the heart of our data ecosystem, designing and maintaining cross-engineering initiatives and projects, as well as developing high-quality data pipelines and models that drive decision-making across the organization.

You will play a key role in ensuring data quality, building scalable systems, and supporting cross-functional teams with clean, accurate, and actionable data.



What you will do:

Design, develop, and optimize data services and solutions required to support various company products (like FeatureStore or synthetic data management).
Work closely with data analysts, data scientists, engineers, and cross-functional teams to understand data requirements and deliver high-quality solutions.
Design and integrate LLM- and agent-based capabilities into data platforms and services, enabling smarter data operations and AI-driven data products.
Design, develop, and optimize scalable data pipelines to ensure data is clean, accurate, and ready for analysis.
Build and maintain robust data models that support clinical, business intelligence, and operational needs.
Implement and enforce data quality standards, monitoring, and best practices across systems and pipelines (a minimal sketch follows this list).
Manage and optimize large-scale data storage and processing systems to ensure reliability and performance.
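For illustration only, not part of the original listing: a minimal data-quality gate of the kind the standards bullet above describes. The rules, thresholds, and column names are invented; it assumes pandas.

import pandas as pd

def check_quality(df: pd.DataFrame) -> list[str]:
    """Return human-readable violations for a hypothetical events table."""
    problems = []
    if df["patient_id"].isna().any():
        problems.append("patient_id contains nulls")
    if df["event_id"].duplicated().any():
        problems.append("event_id is not unique")
    if (df["measurement"] < 0).any():
        problems.append("measurement has negative values")
    return problems

if __name__ == "__main__":
    sample = pd.DataFrame({
        "patient_id": [1, 2, None],
        "event_id": [10, 11, 11],
        "measurement": [0.5, -3.0, 2.1],
    })
    for p in check_quality(sample):
        print("FAILED:", p)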
Requirements:
5+ years of experience as a Data Engineer / Backend Engineer (with strong emphasis on data processing)
Python Proficiency: Proven ability to build services, solutions, data pipelines, automations, and integration tools using Python.
SQL Expertise: Deep experience in crafting complex queries, optimizing performance, and working with large datasets.
Strong knowledge of data modeling principles and best practices for relational and dimensional data structures.
A passion for maintaining data quality and ensuring trust in business-critical data.
This position is open to all candidates.
 
Job ID: 8523783
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Data Engineer
About us:
A pioneering health-tech startup on a mission to revolutionize weight loss and well-being. Our innovative metabolic measurement device provides users with a comprehensive understanding of their metabolism, empowering them with personalized, data-driven insights to make informed lifestyle choices.
Data is at the core of everything we do. We collect and analyze vast amounts of user data from our device and app to provide personalized recommendations, enhance our product, and drive advancements in metabolic health research. As we continue to scale, our data infrastructure is crucial to our success and our ability to empower our users.
About the Role:
As a Senior Data Engineer, you'll be more than just a coder: you'll be the architect of our data ecosystem. We're looking for someone who can design scalable, future-proof data pipelines and connect the dots between DevOps, backend engineers, data scientists, and analysts.
You'll lead the design, build, and optimization of our data infrastructure, from real-time ingestion to supporting machine learning operations. Every choice you make will be data-driven and cost-conscious, ensuring efficiency and impact across the company.
Beyond engineering, you'll be a strategic partner and problem-solver, sometimes diving into advanced analysis or data science tasks. Your work will directly shape how we deliver innovative solutions and support our growth at scale.
Responsibilities:
Design and Build Data Pipelines: Architect, build, and maintain our end-to-end data pipeline infrastructure to ensure it is scalable, reliable, and efficient.
Optimize Data Infrastructure: Manage and improve the performance and cost-effectiveness of our data systems, with a specific focus on optimizing pipelines and usage within our Snowflake data warehouse. This includes implementing FinOps best practices to monitor, analyze, and control our data-related cloud costs (see the sketch after this list).
Enable Machine Learning Operations (MLOps): Develop the foundational infrastructure to streamline the deployment, management, and monitoring of our machine learning models.
Support Data Quality: Optimize ETL processes to handle large volumes of data while ensuring data quality and integrity across all our data sources.
Collaborate and Support: Work closely with data analysts and data scientists to support complex analysis, build robust data models, and contribute to the development of data governance policies.
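For illustration only, not part of the original listing: a minimal FinOps sketch against Snowflake's ACCOUNT_USAGE share, as referenced in the optimization bullet above. Credentials are placeholders; reading ACCOUNT_USAGE requires suitable privileges and the snowflake-connector-python package.

import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
)

# Credits burned per warehouse over the last 7 days, most expensive first.
COST_SQL = """
    SELECT warehouse_name, SUM(credits_used) AS credits_7d
    FROM snowflake.account_usage.warehouse_metering_history
    WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
    GROUP BY warehouse_name
    ORDER BY credits_7d DESC
"""

cur = conn.cursor()
cur.execute(COST_SQL)
for warehouse, credits in cur.fetchall():
    print(f"{warehouse}: {credits:.1f} credits in the last 7 days")
conn.close()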
Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field.
Experience: 5+ years of hands-on experience as a Data Engineer or in a similar role.
Data Expertise: Strong understanding of data warehousing concepts, including a deep familiarity with Snowflake.
Technical Skills:
Proficiency in Python and SQL.
Hands-on experience with workflow orchestration tools like Airflow.
Experience with real-time data streaming technologies like Kafka.
Familiarity with container orchestration using Kubernetes (K8s) and dependency management with Poetry.
Cloud Infrastructure: Proven experience with AWS cloud services (e.g., EC2, S3, RDS).
This position is open to all candidates.
 
Job ID: 8510072
Confidential company
Job Type: Full Time
We use cutting-edge innovations in financial technology to bring leading data and features that allow individuals to be qualified instantly, making purchases at the point-of-sale fast, fair and easy for consumers from all walks of life.
As part of our Data Engineering team, you will not only build scalable data platforms but also directly enable portfolio growth by supporting new funding capabilities, loan sales and securitization, and improving cost efficiency through automated and trusted data flows that evolve our accounting processes.
Responsibilities
Design and build data solutions that support our company's core business goals, from enabling capital market transactions (loan sales and securitization) to providing reliable insights for reducing the cost of capital.
Develop advanced data pipelines and analytics to support finance, accounting, and product growth initiatives.
Create ELT processes and SQL queries to bring data into the data warehouse and other data stores (see the MERGE sketch after this list).
Develop data-driven finance products that accelerate funding capabilities and automate accounting reconciliations.
Own and evolve data lake pipelines, maintenance, schema management, and improvements.
Create new features from scratch, enhance existing features, and optimize existing functionality.
Collaborate with stakeholders across Finance, Product, Backend Engineering, and Data Science to align technical work with business outcomes.
Implement new tools and modern development approaches that improve both scalability and business agility.
Ensure adherence to coding best practices and development of reusable code.
Constantly monitor the data platform and make recommendations to enhance architecture, performance, and cost efficiency.
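For illustration only, not part of the original listing: a minimal idempotent ELT upsert in Snowflake-style SQL, the kind of warehouse load the responsibilities above describe. Schema and column names are invented, and Redshift's MERGE syntax differs slightly.

# Re-running this statement after a partial failure cannot duplicate loans.
MERGE_LOANS = """
MERGE INTO finance.loans AS tgt
USING staging.loans_batch AS src
   ON tgt.loan_id = src.loan_id
WHEN MATCHED AND src.updated_at > tgt.updated_at THEN UPDATE SET
     status     = src.status,
     balance    = src.balance,
     updated_at = src.updated_at
WHEN NOT MATCHED THEN INSERT (loan_id, status, balance, updated_at)
     VALUES (src.loan_id, src.status, src.balance, src.updated_at)
"""

if __name__ == "__main__":
    print(MERGE_LOANS.strip())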
Requirements:
4+ years of experience as a Data Engineer.
4+ years of Python and SQL experience.
4+ years of direct experience with SQL (Redshift/Snowflake), data modeling, data warehousing, and building ELT/ETL pipelines (DBT & Airflow preferred).
3+ years of experience in scalable data architecture, fault-tolerant ETL, and data quality monitoring in the cloud.
Hands-on experience with cloud environments (AWS preferred) and big data technologies (EMR, EC2, S3, Snowflake, Spark Streaming, Kafka, DBT).
Strong troubleshooting and debugging skills in large-scale systems.
Deep understanding of distributed data processing and tools such as Kafka, Spark, and Airflow.
Experience with design patterns, coding best practices, and data modeling.
Proficiency with Git and modern source control.
Basic Linux/Unix system administration skills.
Nice to Have
Familiarity with fintech business processes (funding, securitization, loan servicing, accounting) - a huge advantage.
BS/MS in Computer Science or related field.
Experience with NoSQL or large-scale DBs.
DevOps experience in AWS.
Microservices experience.
2+ years of experience in Spark and the broader Data Engineering ecosystem.
This position is open to all candidates.
 
Job ID: 8481603