
Say hello to your next role

For the first time in Israel: AI-based recommendations that improve your chances of finding a job

Data Engineer

23/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a Senior Backend Engineer - Data Platform to join our expanding team and play a crucial role in designing, building, and maintaining robust and scalable data pipelines and infrastructure. In this role, you will directly enable data-driven decision-making and support the development and deployment of AI/ML products that power Health.

You'll collaborate closely with engineering, product, and data science teams to ensure our data systems are high-quality, resilient, and scalable as we grow. As a Senior Backend Engineer on our Data Platform team, you will drive efforts to deliver reliable, efficient, and consistent data services across the organization. You will also help enable the rapid development and deployment of advanced features, insights, and AI-driven capabilities that improve outcomes for clinicians and clients.

Who are you?
You are a seasoned backend or data engineer with experience working on production-grade ML/AI-powered products. You thrive in fast-paced, high-ownership environments and are passionate about building scalable and reliable systems. You understand the unique requirements of delivering AI/ML features in production, and you are comfortable working with modern technologies in the LLM/RAG ecosystem.
You pride yourself on delivering high-quality solutions quickly, without sacrificing design or reliability. You're known for your responsiveness, collaborative spirit, and service-oriented mindset, especially when you're on call and the stakes are high.

How will you contribute?
Design, implement, and maintain scalable and reliable data pipelines and backend systems supporting both operational and analytical needs, with a focus on ML/AI product enablement.
Ensure data processing is optimized for speed, efficiency, and fault tolerance, enabling seamless integration with AI/ML workflows and reliable performance across all our Health products.
Monitor and improve uptime, reliability, and observability of our data infrastructure and pipelines.
Build and maintain systems to ensure data quality, consistency, and usability across the organization, enabling advanced analytics and AI solutions.
Work closely with product and engineering teams to deliver new features rapidly and with a high standard of technical excellence.
Drive innovation in how we build, measure, and optimize data features, backend services, and AI product integrations.
Participate in on-call rotations with a service-oriented approach and fast responsiveness.
Lead scalability efforts to support increasing data volumes, expanding AI/ML initiatives, and new product launches.
Requirements:
What qualifications and skills will help you to be successful?
At least 5 years of experience with Python in backend or data engineering roles, designing and operating large-scale data pipelines, backend services, and data infrastructure in production environments.
Hands-on experience working on ML/AI-powered products in production, with strong understanding of requirements for integrating data platforms with AI features.
Familiarity with modern LLM (Large Language Model) and RAG (Retrieval-Augmented Generation) technologies, and experience supporting their deployment or integration.
Familiar with or have worked with these technologies (or alternatives):
Data Processing & Streaming: Apache Spark, DBT, Airflow, Airbyte, Kafka
API Development: FastAPI, micro-service architecture, SFTP
Data Storage: Data Lakehouse architectures, Apache Iceberg, Vector Databases, RDS
ML/AI: ML/LLM libraries and frameworks (such as Gemini, Hugging Face, etc.)
Cloud Infrastructure: AWS stack (S3, Firehose, Lambda, Athena, etc.), Kubernetes (K8s)
Demonstrated ability to optimize performance and ensure high availability, scalability, and reliability of backend/data systems.
Strong foundation in best practices for data quality, governance, security, and observability.
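The retrieval side of the LLM/RAG stack named above can be illustrated with a minimal, library-free sketch. This is a toy stand-in, not the posting's actual system: the fixed-size chunking, the bag-of-words "embedding", and every name here are hypothetical placeholders for a real embedding model and vector database.

```python
from collections import Counter
from math import sqrt

def chunk_text(text: str, size: int = 8) -> list[str]:
    """Split text into fixed-size word chunks (a common first RAG step)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real pipeline uses a model instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

corpus = "data pipelines move records between systems . vector databases store embeddings for retrieval"
chunks = chunk_text(corpus, size=6)
print(retrieve("vector databases embeddings", chunks))
```

A production pipeline would swap `embed` for model-generated embeddings and `retrieve` for a vector-database query; the chunk / embed / score / top-k flow stays the same.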
This position is open to all candidates.
 
Job ID: 8588707
21/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Staff Software Data Engineer to join our Engineering team and lead the evolution of our next-generation data platform. In this high-impact role, you will operate as a player-coach: you will be the technical visionary responsible for designing the ecosystem, while remaining deeply hands-on to implement scalable, secure, and intelligent solutions that power everything from operational reporting to advanced GenAI applications.

You will bridge the gap between complex business requirements and technical execution, advocating for a data-first culture. This role offers a clear growth path: while it currently starts as an individual contributor position, it has the potential to evolve into a leadership role.

Why join us?

We are the AI-powered platform for finance automation, elevating how finance teams operate in the global economy. We empower our customers to scale faster and smarter by removing the complexities of doing global business and accelerating their finance operations efficiency. Our platform provides a comprehensive suite of finance automation solutions designed for mid-market businesses across accounts payable, global payouts, procurement, employee expenses, corporate cards, supplier management, tax compliance, and treasury. We partner with leading financial institutions such as Citi, Wells Fargo, J.P. Morgan, and Visa, enabling over 5,000 global companies to efficiently and securely pay millions of suppliers and payees across 200+ countries and territories, in 120 currencies.

At our company, we pride ourselves on our collaborative culture, the quality of our product, and the capabilities of our people. Our people are passionate about the work they do and keen to get the job done. We offer competitive benefits, a flexible workplace, career coaching, and an environment where diverse individuals can thrive and make an impact. Our culture ensures everyone checks their egos at the door and stands ready to reach for success together.

Founded in Israel in 2010, our company is a global business headquartered in the San Francisco Bay Area (Foster City) with offices in Tel Aviv, Plano, Toronto, Vancouver, London, Amsterdam, Tbilisi and Medellin.
About the Role

Architecture & Hands-on Execution: Design and actively build a comprehensive data platform. You will not just oversee infrastructure; you will write the core code and build tools that support diverse workloads, from operational reporting to complex analytical queries.
Strategic & Technical Delivery: Partner with product managers to translate business objectives into technical strategies, then lead the engineering effort to deliver them.
Technology Evaluation: Continuously evaluate, prototype, and select best-in-class technologies to future-proof our data stack.
Technical Leadership & Mentorship: Act as a primary advocate for platform adoption. You will foster a community of practice around data engineering, mentoring senior and mid-level engineers to elevate the team's technical bar.
Governance & Quality: Implement and automate robust frameworks for Data Discovery, Quality, and Governance, ensuring solutions are trustworthy and compliant with financial regulations.
Requirements:
We are looking for a highly motivated Staff Engineer with a strong sense of ownership, eager to tackle technical challenges in a high-throughput data processing environment.

Experience: 8+ years of hands-on experience in Data Engineering and Architecture, with a track record of building and shipping platforms at scale.
Experience with modern big data platforms such as Snowflake, Databricks, or similar technologies.
Hands-on data infrastructure experience (orchestration, scalability, reliability, and cloud architecture).
Data Movement & Integration: Deep understanding of data movement strategies, including high-frequency batching, CDC, and real-time event streaming.
Job ID: 8585918
19/03/2026
Location: More than one
Job Type: Full Time
We are looking for an expert Data Engineer to build and evolve the data backbone for our R&D telemetry and performance analytics ecosystem. Responsibilities include processing large quantities of raw data from live systems at the cluster level: hardware, communication units, software, and efficiency indicators. You'll be part of a fast-paced R&D organization where system behavior, schemas, and requirements evolve constantly. Your mission is to develop flexible, reliable, and scalable data-handling pipelines that can adapt to rapid change and deliver clean, trusted data for engineers and researchers.

What you'll be doing:

Build flexible data ingestion and transformation frameworks that can easily handle evolving schemas and changing data contracts.

Develop and maintain ETL/ELT workflows for refining, enriching, and classifying raw data into analytics-ready form.

Collaborate with R&D, hardware, DevOps, ML engineers, data scientists and performance analysts to ensure accurate data collection from embedded systems, firmware, and performance tools.

Automate schema detection, versioning, and validation to ensure smooth evolution of data structures over time.

Maintain data quality and reliability standards, including tagging, metadata management, and lineage tracking.

Enable self-service analytics by providing curated datasets, APIs, and Databricks notebooks.
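The schema-evolution and validation work described above can be sketched, under simplified assumptions, as a versioned schema check: keep known fields of the right type, and flag anything missing, mistyped, or unexpected. The field names and versions here are illustrative, not taken from the posting.

```python
# Hypothetical versioned schemas for evolving telemetry records.
SCHEMAS = {
    1: {"device_id": str, "latency_ms": float},
    2: {"device_id": str, "latency_ms": float, "firmware": str},
}

def validate(record: dict, version: int) -> tuple[dict, list[str]]:
    """Keep known fields with correct types; report anything unexpected."""
    schema, clean, issues = SCHEMAS[version], {}, []
    for field, ftype in schema.items():
        if field not in record:
            issues.append(f"missing:{field}")
        elif not isinstance(record[field], ftype):
            issues.append(f"type:{field}")
        else:
            clean[field] = record[field]
    # Fields present in the record but absent from this schema version.
    issues += [f"unknown:{f}" for f in record.keys() - schema.keys()]
    return clean, issues

rec = {"device_id": "n42", "latency_ms": 3.2, "temp_c": 55}
print(validate(rec, 1))
```

In practice frameworks like Great Expectations or Delta Lake's schema enforcement play this role; the point is that validation is data-driven, so a new schema version is a config change rather than new code.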
Requirements:
What we need to see:

B.Sc. or M.Sc. in Computer Science, Computer Engineering, or a related field.

5+ years of experience in data engineering, ideally in telemetry, streaming, or performance analytics domains.

Confirmed experience with Databricks and Apache Spark (PySpark or Scala).

Understanding of streaming processes and their applications (e.g., Apache Kafka for ingestion, schema registry, event processing).

Proficiency in Python and SQL for data transformation and automation.

Demonstrated knowledge of schema evolution, data versioning, and data validation frameworks (e.g., Delta Lake, Great Expectations, Iceberg, or similar).

Experience working with cloud platforms (AWS, GCP, or Azure) - AWS preferred.

Familiarity with data orchestration tools (Airflow, Prefect, or Dagster).

Experience handling time-series, telemetry, or real-time data from distributed systems.

Ways to stand out from the crowd:

Exposure to hardware, firmware, or embedded telemetry environments.

Knowledge of real-time analytics frameworks (Spark Structured Streaming, Flink, Kafka Streams).

Understanding of system performance metrics (latency, throughput, resource utilization).

Experience with data cataloging or governance tools (DataHub, Collibra, Alation).

Familiarity with CI/CD for data pipelines and infrastructure-as-code practices.
Job ID: 8585234
17/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are a data-driven company, and the Data Engineering team is the bridge between raw data and impactful business insights. You will lead a team of talented engineers to architect, build, and maintain the data infrastructure of our company.
Your mission is to ensure our data is reliable, secure, and ready for everything from standard BI reporting to cutting-edge Generative AI applications. You'll act as a key partner to our business teams, translating high-level needs into technical reality and driving a massive impact on our bottom line. In addition, you'll spend time mentoring your team as well as tackling high-level architecture decisions, pushing technical strategy, and contributing to the codebase.
What am I going to do?
Lead a team that designs and scales real-time data pipelines processing millions of events hourly.
Utilize cutting-edge technologies to build scalable, high-performance data infrastructure.
Influence and drive architecture and data strategy decisions.
Work closely with other stakeholders such as Data Developers, Analysts, Data Science and R&D.
Hands-on development of data infrastructure tools (~50% of the time).
Lead and mentor the team talent.
FinOps and governance of data infrastructure domains.
Requirements:
5+ years of hands-on experience in data engineering, with at least 2 years in a team leadership role.
Strong programming skills in Python.
Experience with relevant technologies such as BigQuery, Airflow/Prefect, DBT, Kafka, Athena, BI Tools.
Proven track record of design and implementation of highly scalable distributed data pipelines.
Deep understanding of data modeling, ETL and real-time analytics.
Strong 360° collaboration with excellent communication skills.
MSc./BSc. in Computer Sciences from a top university.
Job ID: 8582750
17/03/2026
Location: Merkaz
Job Type: Full Time
We are looking for an experienced Data Engineer with strong analytical abilities to join us and lead the build-out of the data infrastructure that enables developing, training, and running inference for intelligent models at scale.
Requirements:
The role requires a deep understanding of the business and system picture, and the ability to lead data processes end to end in an advanced technology environment. The role carries full responsibility for building data layers, creating panels for training and inference, developing ETL/ELT processes, and connecting to various organizational data sources, with a deep understanding of the model and business needs.

Responsibilities - what does the role include?

1. Building ML data panels and infrastructure (ML Data Panels and Pipelines)

• Developing high-quality, reliable data layers for training and inference processes.

• Creating dynamic panels that respond to system requests in real time.

• Performing feature enrichment and creating model-ready feature vectors per request.

2. Working with organizational data sources

• Connecting to various systems and data sources, with the ability to collect relevant tables and fields efficiently and economically.

• Deep understanding of the business logic and model needs, in order to design the data layers and processing flows correctly.

3. Strong analytical and problem-solving abilities

• Performing high-level data analysis and exploration.

• Solving complex problems independently and with a system-wide view.

• Good understanding of ML needs and the production environment.

4. Data engineering development and routines

• Developing and improving ETL/ELT processes in a Snowflake environment.

• Use of / familiarity with modern methodologies and tools:

o DBT

o Feature Store - a significant advantage

o AWS Glue - an advantage

• Implementing quality controls, monitoring the data, and improving data reliability and continuity.

5. Working across cross-functional interfaces

• Direct, day-to-day work with Data Scientists, analysts, MLOps, and architects.

• Leading organization-wide data processes and adapting solutions to technological and business needs.

This position is open to both women and men.
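The per-request feature-enrichment flow described above can be sketched roughly like this: look up stored features for an entity, merge in request-time values, and assemble the model's input vector in a fixed order. The feature store contents, feature names, and ordering are all hypothetical.

```python
# Hypothetical offline feature store keyed by entity id.
FEATURE_STORE = {
    "user_7": {"age": 34, "purchases_30d": 5},
}
# The model expects its inputs in this fixed order.
FEATURE_ORDER = ("age", "purchases_30d", "amount")

def enrich(request: dict) -> list[float]:
    """Merge stored features with request-time features into one vector."""
    stored = FEATURE_STORE.get(request["entity_id"], {})
    merged = {**stored, **request.get("features", {})}
    # Missing features default to 0.0 so the vector shape is stable.
    return [float(merged.get(name, 0.0)) for name in FEATURE_ORDER]

print(enrich({"entity_id": "user_7", "features": {"amount": 19.9}}))
```

A real system would back `FEATURE_STORE` with a low-latency store and version `FEATURE_ORDER` alongside the model, but the merge-then-order pattern is the core of serving a panel per request.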
 
Job ID: 8582080
15/03/2026
Location: Merkaz
Job Type: Full Time
abra professional services is seeking a Senior Data Engineer to lead the technical development of our Data Fabric within a military program. This role involves technical leadership in building contextual data pipelines, working with big data lakehouse architectures, and driving data pipeline automation.
Requirements:
Must-have requirements:
* 5+ years of hands-on experience as a Big Data Engineer
* Proven ability to design and deploy Big Data & AI solutions using frameworks and tools such as:
* Apache NiFi
* Kafka
* Spark
* Graph DB
* NoSQL databases
* Strong coding proficiency in Python and Bash
* Experience building large-scale data pipelines for data processing, transformation, and analytics
* Ability to define and manage metadata schemas
* Experience developing data workflows (e.g., metadata generation, auto-extraction, validation, format conversion)
* Experience developing tools for data observability and compliance analytics
* Ability to analyze and report on complex datasets and results
* Experience working with big data lakehouse architectures
* Ability to explore, enhance, and manage data fusion from multiple data sources
* Experience developing analytics tools, algorithms, and programs
* Technical leadership of data structure design for contextual pipelines
* Collaboration experience with Data Scientists and Architects
* Strong ability to work independently, learn new technologies, and apply innovative solutions
* Excellent interpersonal skills and ability to work as part of a team
* MSc in Computer Science or Data Science
Advantages:
* Experience with managing metadata and taxonomy concepts alongside big data objects
* Experience working with public cloud Big Data & AI services
* Familiarity with machine learning concepts
* Strong technical skills
* Independent, hands-on developer
* Curious, open-minded, fast learner, and a strong team player
Job ID: 8503268
Location: Petah Tikva
Job Type: Full Time
We are looking for a highly skilled Data Engineer to build and maintain robust, scalable data pipelines and data marts, acting as the connective tissue for intelligence-insight generation that serves executive stakeholders, internal customers, and third parties.
The Fintech AI & Data group is looking for a staff Data Engineer to work closely with analysts, data scientists, and software developers and strengthen Fintech by building data capabilities and AI transformation.
Responsibilities:
Gather data needs from internal customers like product and analysts, and translate those requirements into a working database and analytic software.
Design, build, and maintain scalable, reliable batch and real time data pipelines, data marts and warehouse supporting executive dashboards, operational analytics, and internal customer use cases
Ensure high data quality, observability, reliability, and governance across all data assets
Optimize data models for performance, cost-efficiency, and scalability
Develop data-centric software using leading-edge big data technologies.
Build data capabilities that enable automated agentic insights and decision intelligence
Develop reusable data services and APIs that power AI-driven workflows
Evolve our data architecture into an AI-native data layer designed to power LLMs, AI agents, and intelligent applications
Collaborate with analytics, product, and AI teams to translate business needs into scalable data solutions
Influence the software architecture and working procedures for building data and analytics
Be the go-to person for anything and everything regarding understanding the data - exploration, pipelines, analytics, etc. - and work both independently and as part of a team
How you'll succeed
Have an impact on satisfying customers and reducing financial fraud
Help build the team by hiring the best talent
Contribute to experiments and research on how to enhance our capabilities
Learn new technologies and methodologies
Collaborate with other data engineers, analysts, data scientists and developers
Be proactive with a self-starter attitude
Be a good listener, while also having strong opinions on what is right
Be fun to be around :)
Requirements:
Bachelor's degree in Information Systems, Computer Science, or similar
Extensive experience dealing directly with internal customers regarding their data needs
Excellent knowledge of SQL in a large-scale data warehouse or data lakehouse environment such as Spark, Databricks, Presto/Athena/Trino
Experience in designing, building and maintaining highly scalable, robust & fault-tolerant complex data processing pipelines from the ground up (ETL, DB schemas)
Experience with stream processing or near real-time data ingestion
Experience working in a cloud environment, preferably AWS (EC2, S3, EMR)
Excellent knowledge of database / dimensional modeling / data integration tools
Experience writing scripts with languages like Python, and shell scripts in a Linux environment
Can-do attitude, hands-on approach, passionate about data
Preferred :
Some knowledge of Data Science/Machine Learning
Knowledge/Experience with Scala, Java
Knowledge of data visualization tools like Tableau or Qlik Sense
Some knowledge of graph databases
Some experience in Fintech industry, Cyber Security
Working with AI tools and leveraging AI into product development.
Job ID: 8574787
10/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a talented Data Engineer to join our Data team and spread the power. The Data team develops and maintains the infrastructure for internal data and product analytics. We are looking for data enthusiasts: independent, logical thinkers with a can-do approach and a passion for problem-solving. We run a SaaS-based stack using BigQuery, Snowflake, and dbt.
WHAT YOU'LL DO
Design, build, and maintain data pipelines, datasets and catalogs for fast-growing products and business groups.
Develop self-service data analytics solutions and infrastructure.
Support ad hoc needs and requests of internal stakeholders.
Collaborate with analysts, engineers, and internal customers from Product, Finance, Revenue, and Marketing.
Requirements:
Bachelor's or Master's degree in a relevant technical field, or equivalent hands-on experience in software development or DevOps.
3+ years of experience working as a Data Engineer, including end-to-end designing, orchestrating, and building cloud-based data pipelines (e.g., Airflow, Prefect, Dagster).
3+ years of experience with dimensional data modeling and data warehouse implementation, specifically MPP databases like BigQuery, Snowflake, and Redshift.
Strong knowledge of Python and Python-based data analysis tools such as Jupyter Notebooks and pandas.
Strong SQL writing skills. Ability to write highly performant queries.
Strong track record of executing projects independently in dynamic environments.
Fast understanding of data and business needs and ability to translate them into data models.
Team player with excellent communication skills.
Containerization (Docker): Essential for reproducible environments.
Knowledge of software engineering best practices: CI/CD concepts, code reviews, and unit testing.
ADVANTAGE
Production-level experience with dbt, including project design, transformation, testing, and documentation.
Infrastructure-as-Code (Terraform): Managing cloud resources (S3 buckets, IAM roles) via code.
CI/CD pipelines (GitHub Actions/Jenkins): Automating the testing and deployment of data models.
Job ID: 8573992
Location: Tel Aviv-Yafo
Job Type: Full Time
This is a great opportunity to be part of one of the fastest-growing infrastructure companies in history, an organization that is in the center of the hurricane being created by the revolution in artificial intelligence.
We are seeking an experienced Solutions Data Engineer who possesses both technical depth and strong interpersonal skills, to partner with internal and external teams to develop scalable, flexible, and cutting-edge solutions. Solutions Engineers collaborate with operations and business development to help craft solutions to customer business problems.
A Solutions Engineer works to balance various aspects of the project, from safety to design, researches advanced technology regarding best practices in the field, and seeks cost-effective solutions.
Job Description:
We're looking for a Solutions Engineer with deep experience in Big Data technologies, real-time data pipelines, and scalable infrastructure; someone who has been delivering critical systems under pressure and knows what it takes to bring complex data architectures to life. This isn't just about checking boxes on tech stacks; it's about solving real-world data problems, collaborating with smart people, and building robust, future-proof solutions.
In this role, you'll partner closely with engineering, product, and customers to design and deliver high-impact systems that move, transform, and serve data at scale. You'll help customers architect pipelines that are not only performant and cost-efficient but also easy to operate and evolve.
We want someone who's comfortable switching hats between low-level debugging, high-level architecture, and communicating clearly with stakeholders of all technical levels.
Key Responsibilities:
Build distributed data pipelines using technologies like Kafka, Spark (batch & streaming), Python, Trino, Airflow, and S3-compatible data lakes, designed for scale, modularity, and seamless integration across real-time and batch workloads.
Design, deploy, and troubleshoot hybrid cloud/on-prem environments using Terraform, Docker, Kubernetes, and CI/CD automation tools.
Implement event-driven and serverless workflows with precise control over latency, throughput, and fault tolerance trade-offs.
Create technical guides, architecture docs, and demo pipelines to support onboarding, evangelize best practices, and accelerate adoption across engineering, product, and customer-facing teams.
Integrate data validation, observability tools, and governance directly into the pipeline lifecycle.
Own end-to-end platform lifecycle: ingestion → transformation → storage (Parquet/ORC on S3) → compute layer (Trino/Spark).
Benchmark and tune storage backends (S3/NFS/SMB) and compute layers for throughput, latency, and scalability using production datasets.
Work cross-functionally with R&D to push performance limits across interactive, streaming, and ML-ready analytics workloads.
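The ingestion → transformation → storage lifecycle owned in the responsibilities above can be sketched as three small, composable stages. This is a toy stand-in under invented field names, not the actual Kafka/Spark/Trino implementation; it only shows why keeping stages separate makes each one testable and replaceable.

```python
def ingest(raw_lines: list[str]) -> list[dict]:
    """Parse raw CSV-style events into records."""
    return [dict(zip(("user", "amount"), line.split(","))) for line in raw_lines]

def transform(records: list[dict]) -> list[dict]:
    """Cast types and drop malformed rows."""
    out = []
    for r in records:
        try:
            out.append({"user": r["user"], "amount": float(r["amount"])})
        except (KeyError, ValueError):
            continue  # a real pipeline would route these to a dead-letter store
    return out

def store(records: list[dict]) -> dict[str, float]:
    """Aggregate into a queryable 'table' (stand-in for Parquet on S3)."""
    table: dict[str, float] = {}
    for r in records:
        table[r["user"]] = table.get(r["user"], 0.0) + r["amount"]
    return table

events = ["alice,10.5", "bob,oops", "alice,4.5"]
print(store(transform(ingest(events))))
```

Swapping any stage (e.g., `ingest` for a Kafka consumer, `store` for a Parquet writer) leaves the others untouched, which is the modularity the role description asks for.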
Requirements:
2-4 years in software / solution or infrastructure engineering, with 2-4 years focused on building / maintaining large-scale data pipelines / storage & database solutions.
Proficiency in Trino, Spark (Structured Streaming & batch) and solid working knowledge of Apache Kafka.
Coding background in Python (must-have); familiarity with Bash and scripting tools is a plus.
Deep understanding of data storage architectures including SQL, NoSQL, and HDFS.
Solid grasp of DevOps practices, including containerization (Docker), orchestration (Kubernetes), and infrastructure provisioning (Terraform).
Experience with distributed systems, stream processing, and event-driven architecture.
Hands-on familiarity with benchmarking and performance profiling for storage systems, databases, and analytics engines.
Excellent communication skills: you'll be expected to explain your thinking clearly, guide customer conversations, and collaborate across engineering and product teams.
Job ID: 8572794
06/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Staff AI Engineer to join our growing AI and engineering group. This is a highly impactful role for someone who wants to shape how AI is embedded across products, platforms, and business operations, from early experimentation to production-scale systems. As a Staff AI Engineer, you'll operate end-to-end: identifying opportunities, designing solutions, and delivering real business value using modern AI and LLM-based systems.

What you'll be doing
Design, build, and deploy AI-powered solutions across Supersonic's products and internal platforms
Partner closely with product, data, engineering, and business stakeholders to identify and execute high-impact AI use cases
Architect and implement agentic workflows using frameworks such as LangChain, LangGraph, or equivalent
Integrate and operate LLMs via SDKs, including prompt design, structured outputs, tool/function calling, and guardrails
Lead technical decisions around AI evaluation, monitoring, observability, and quality measurement
Own AI initiatives from concept through production, including iteration, scaling, and long-term maintenance
Stay up to date with emerging AI technologies, models, and best practices, and help bring them into production thoughtfully
Requirements:
8+ years of software engineering experience, with at least 2+ years in AI/ML platforms or intelligent automation.
Strong background in distributed systems, APIs, microservices, container orchestration (ECS/EKS), and cloud platforms (AWS/GCP/Azure)
Proven experience building and deploying LLM-based applications using platforms such as OpenAI, Anthropic, AWS Bedrock, or similar
Solid understanding of RAG pipelines, vector databases, embeddings, and retrieval techniques (chunking, indexing, filtering, relevance tuning)
Ability to thrive in ambiguous problem spaces, take ownership, and move quickly in a startup-like environment
Strong problem-solving and analytical skills, with a product- and impact-driven mindset
Experience collaborating across teams and leading technical projects end-to-end
Bachelor's degree in Computer Science, AI, or a related field (or equivalent practical experience)
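The tool/function-calling pattern named in the responsibilities above can be sketched vendor-neutrally: the model returns a structured reply naming a tool and its arguments, and the host application dispatches the call. The model reply here is mocked as a JSON string, and the tool registry is hypothetical; real SDKs (OpenAI, Anthropic, Bedrock) return equivalent structured objects.

```python
import json

# Hypothetical registry of callable tools exposed to the model.
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def dispatch(model_reply: str):
    """Parse a structured tool call and execute the matching function."""
    call = json.loads(model_reply)  # e.g. {"tool": "add", "args": {"a": 2, "b": 3}}
    fn = TOOLS[call["tool"]]
    return fn(**call["args"])

reply = '{"tool": "add", "args": {"a": 2, "b": 3}}'
print(dispatch(reply))
```

Guardrails in a production system sit around `dispatch`: validating the tool name against the registry and the arguments against a schema before anything executes.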
Job ID: 8569987
05/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We're seeking talented data engineers to join our rapidly growing team, which includes senior software and data engineers. Together, we drive our data platform from acquisition and processing to enrichment, delivering valuable business insights. Join us in designing and maintaining robust data pipelines, making an impact in our collaborative and innovative workplace.

Responsibilities
Design, implement, and optimize scalable data pipelines for efficient processing and analysis.
Build and maintain robust data acquisition systems to collect, process, and store data from diverse sources.
Take part in developing agentic capabilities.
Mentor, support, and guide junior team members, sharing expertise and fostering their professional development.
Collaborate with DevOps, Data Science, and Product teams to understand needs and deliver tailored data solutions.
Monitor data pipelines and production environments proactively to detect and resolve issues promptly.
Apply and be responsible for best practices in data security, integrity, and performance across all systems.
Requirements:
6+ years of experience in data or backend engineering, with strong proficiency in Python for data tasks.
Proven track record in designing, developing, and deploying complex data applications.
Hands-on experience with orchestration and processing tools such as Apache Airflow and Apache Spark.
Deep experience with public cloud platforms, and expertise in cloud-based data storage and processing.
Experience working with Docker and Kubernetes.
Hands-on experience with CI tools such as GitHub Actions.
Bachelor's degree in Computer Science, Information Technology, or a related field - or equivalent practical experience.
Ability to perform under pressure and make strategic prioritization decisions in fast-paced environments.
Excellent communication skills and a strong team player, capable of working cross-functionally.
This position is open to all candidates.
 
Job ID: 8569768
05/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are a data-driven company, and the Data Engineering team is the bridge between raw data and impactful business insights. You will lead a team of talented engineers to architect, build, and maintain the company's data infrastructure. Your mission is to ensure our data is reliable, secure, and ready for everything from standard BI reporting to cutting-edge Generative AI applications. You'll act as a key partner to our business teams, translating high-level needs into technical reality and driving a massive impact on our bottom line. In addition, you'll spend time mentoring your team as well as tackling high-level architecture decisions, pushing technical strategy, and contributing to the codebase.
What am I going to do?

* Lead a team that designs and scales Real-Time data pipelines processing millions of events hourly.
* Utilize cutting-edge technologies to build scalable and high-performance data infrastructure.
* Influence and drive architecture and data strategy decisions.
* Work closely with other stakeholders such as Data Developers, Analysts, Data Science, and R&D.
* Hands-on development of data infrastructure tools (~50% of the time).
* Lead and mentor the team's talent.
* FinOps and governance of data infrastructure domains.
Equal opportunities:
At our company, we prioritize diversity. We celebrate difference and embed it into every aspect of our workplace and product, as well as our community. We are proud of and committed to providing equal opportunity employment to all individuals regardless of race, color, religion, sex, sexual orientation, citizenship, national origin, disability, Veteran status, or any other characteristic protected by law. In addition, we will provide accommodation to individuals with disabilities or special needs.
Requirements:
* 5+ years of hands-on experience in data engineering, with at least 2 years in a team leadership role.
* Strong programming skills in Python.
* Experience with relevant technologies such as BigQuery, Airflow/Prefect, DBT, Kafka, Athena, BI Tools.
* Proven track record of design and implementation of highly scalable distributed data pipelines.
* Deep understanding of data modeling, ETL and Real-Time analytics.
* Great 360 collaboration with excellent communication skills.
* MSc./BSc. in Computer Science from a top university.
At our company, we're not about checklists. If you don't meet 100% of the requirements for this role but still feel passionate about the position and think you have the right skills and qualifications to excel at it, we want to hear from you.
This position is open to all candidates.
 
Job ID: 8569534
05/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
Join our global data team, building the infrastructure that powers analytics and AI-driven insights across the organization. As a Data Engineer, you'll design and maintain scalable data pipelines and systems that enable product analytics, business intelligence, and AI applications in our high-growth fintech environment.
What you'll do:
Design, build, and maintain robust batch and streaming pipelines and orchestration across our modern stack (AWS, DBT, Airbyte, Airflow, Snowflake) to support AI-driven data products.
Develop and optimize data models in Snowflake, ensuring data quality, consistency, and performance at scale.
Collaborate with Product Analysts and AI specialists to implement data solutions for customer segmentation, ranking systems, and predictive models.
Partner with cross-functional teams to translate technical requirements into scalable data architecture.
Implement end-to-end observability (data quality checks, monitoring, alerting) and cost/performance optimization in Snowflake and AWS.
Requirements:
6+ years of experience as a Data Engineer or similar role.
Expert proficiency in SQL and hands-on experience with modern data warehouse platforms (Snowflake - advantage).
Strong experience building ETL/ELT pipelines using tools like DBT, Airflow, or similar orchestration frameworks.
Proficiency in Python or another programming language for data processing.
Solid understanding of data modeling techniques (dimensional modeling, Data Vault, etc.).
Experience with cloud platforms, preferably AWS.
Proven ability to design and maintain reliable, scalable data systems with a strong focus on data quality.
Strong communication skills and the ability to work effectively with both technical and business stakeholders.
Experience with fintech, financial services, or cryptocurrency/blockchain - advantage.
Familiarity with real-time data processing or streaming technologies - advantage.
This position is open to all candidates.
 
Job ID: 8569095
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a highly enthusiastic Data Engineer to join our group on the journey of leveraging Big Data to revolutionize our offering, business operations, and decision-making across the company's ecosystem.


Job Responsibilities:

Design, build, and deploy backend data solutions to production, from research and design through development and testing.
Work closely with data engineers, product, architects and other R&D teams to deliver the best solutions to the business.
Monitor the solutions in production to make sure they are fully stable, scalable and performant at all times.
Work closely with data science experts.
Requirements:
Profile and Experience:

At least 3 years of proven hands-on experience with big data solutions and frameworks in production (Spark, Flink) - mandatory
Proven ability of writing complex SQL queries - mandatory
Strong analytical and problem-solving skills with attention to detail - mandatory
Production-grade experience writing Spark applications using Scala or Java - mandatory
Experience with Apache Airflow and AWS tools (EMR, Glue, Athena) - a big advantage
Solid knowledge in Python and Linux operating systems - a big advantage
Experience with ClickHouse - a big advantage
Familiarity with the Ad Tech industry and RTB - an advantage
Extensive experience in functional programming, unit testing/TDD, and continuous deployment - an advantage
Familiarity with Node.js - an advantage
Fluent verbal and written English skills - required
This position is open to all candidates.
 
Job ID: 8569028
05/03/2026
Job Location: Central Israel
Job Type: Full Time
A Pre-Sale specialist is needed in Ra'anana to lead sales processes for data solutions in the areas of data integration, data quality, data governance, and API integration.
The role involves collaborating with account managers and internal stakeholders, and leading the technological side of the sales process with existing and prospective customers.
Responsibilities:
Identify business needs and match them to the company's data and analytics solutions (such as Talend, Qlik Sense).
Present the added value, lead proof-of-concept (POC) processes, and follow up on them.
Accompany the sales process on the technological side throughout, in close collaboration with the sales teams.
Deliver ongoing technology training to the sales teams and to new employees.
Stay professionally up to date with technology and product innovations.
Conduct training and enablement sessions for sales staff.
Salary: 28-32K NIS
Requirements:
2+ years of experience in a data pre-sale role, or as a data engineer interested in moving into pre-sale - required
A relevant bachelor's degree or relevant professional training - required.
A strong understanding of the Israeli business market and business-minded thinking.
Excellent verbal skills, persuasiveness, and the ability to present technological solutions to customers.
Very high level of English (reading and writing).
Good instructional skills.
Advantages:
Experience developing ETL processes and data pipelines.
Proficiency in SQL and at least one programming language: Python / Scala / Java.
Experience working with technologies such as Spark, Kafka, Talend.
Experience with cloud solutions (AWS, Azure, GCP).
Experience with POC processes. This position is open to all candidates.
 
Job ID: 8568741