Jobs » Software » Data Engineer

1 day ago
Confidential company
Location: Herzliya
Job Type: Full Time
As a Data Engineer, you will own the architecture and optimization of large-scale ETL processes that transform raw heavy-duty vehicle telemetry into production-grade intelligence. You will operate at the intersection of Big Data and AI, building scalable pipelines, enforcing data quality standards, and managing cost-efficiency for a system processing billions of time-series records. You will be a technical owner, collaborating directly with Data Scientists to ensure our fleet intelligence models run reliably in production.
What You'll Do
Architect and build robust ETLs and scalable data pipelines on Databricks and AWS.
Optimize high-throughput ingestion workflows for billions of time-series records, ensuring low latency and data integrity.
Engineer data validation frameworks and automated monitoring to proactively detect anomalies before they impact models.
Drive cost-efficiency by tuning Spark jobs and managing compute resources in a high-volume environment.
Transform raw IoT/telemetry signals into structured, enriched Feature Stores ready for Machine Learning production (see the sketch after this list).
Define best practices for data engineering, CI/CD for data, and lakehouse architecture across the organization.
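For illustration only, a minimal PySpark sketch of the kind of telemetry-to-feature step described above; the storage paths, schema, and daily windowing are assumptions, not details from the posting:

# Illustrative sketch only -- paths, column names, and windowing are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("telemetry-features").getOrCreate()

# Raw vehicle telemetry landed as Delta (assumed schema: vehicle_id, ts, engine_temp, speed)
raw = spark.read.format("delta").load("s3://example-bucket/raw/telemetry")

# Aggregate time-series rows into daily per-vehicle features
features = (
    raw.withColumn("day", F.to_date("ts"))
       .groupBy("vehicle_id", "day")
       .agg(
           F.avg("engine_temp").alias("avg_engine_temp"),
           F.max("speed").alias("max_speed"),
           F.count("*").alias("n_records"),
       )
)

# Write a partitioned feature table that downstream ML jobs can read
(features.write.format("delta")
         .mode("overwrite")
         .partitionBy("day")
         .save("s3://example-bucket/features/vehicle_daily"))

A real pipeline would layer validation, incremental processing, and cost-aware cluster sizing on top of a skeleton like this.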
Requirements:
Production Experience: 3+ years in Data Engineering with strong proficiency in Python, SQL, and PySpark.
Big Data Architecture: Proven track record working with distributed processing frameworks (Spark, Delta Lake) and cloud infrastructure (AWS preferred).
Scale: Experience handling high-volume datasets (TB scale or billions of rows); familiarity with time-series or IoT data is a strong advantage.
Engineering Rigor: Deep understanding of data structures, orchestration (Databricks Workflows), and software engineering best practices (Git, CI/CD).
Problem Solving: Ability to diagnose complex performance bottlenecks in distributed systems and implement cost-effective solutions.
Ownership: A self-starter mindset with the ability to take a vague requirement and deliver a deployed, production-ready pipeline.
This position is open to all candidates.
 
23/12/2025
Confidential company
Location: Herzliya
Job Type: Full Time and Hybrid work
We are looking for a Data Engineer.
We work in a flexible, hybrid model, so you can choose the home-office balance that works best for you.
Responsibilities
Design, build, and maintain scalable ETL/ELT pipelines to integrate data from diverse sources, optimizing for performance and cost efficiency.
Leverage Databricks and other modern data platforms to manage, transform, and process data (see the sketch after this list).
Collaborate with software teams to understand data needs and ensure data solutions meet business requirements.
Optimize data processing workflows for performance and scalability.
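As a rough sketch of an incremental ELT step of this kind on Databricks, using a Delta Lake MERGE to upsert newly arrived records; the table names and join key are assumptions:

# Illustrative upsert step -- table names and key column are assumed, not from the posting.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Newly arrived records from an upstream source (assumed staging table)
updates = spark.read.table("staging.orders_increment")

# Upsert into the curated table keyed on order_id
target = DeltaTable.forName(spark, "curated.orders")
(target.alias("t")
       .merge(updates.alias("s"), "t.order_id = s.order_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())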
Requirements:
3+ years of experience in Data Engineering, including cloud-based data solutions.
Proven expertise in implementing large-scale data solutions.
Proficiency in Python and PySpark.
Experience with ETL / ELT processes.
Experience with cloud platforms and technologies such as Databricks (Apache Spark).
Strong analytical and problem-solving skills, with the ability to evaluate and interpret complex data.
Experience leading and designing data solutions end-to-end, integrating with multiple teams, and driving tasks to completion.
Advantages:
Familiarity with either on-premise or cloud storage systems.
Excellent communication and collaboration skills, with the ability to work effectively in a multidisciplinary team.
This position is open to all candidates.
 
03/12/2025
Confidential company
Location: Herzliya
Job Type: Full Time
Required: Senior Data Engineer
Description
Join our core platform engineering team, developing our AI-powered automotive data management platform.
We are developing the next generation of data-driven products for the Automotive industry, focusing on cybersecurity (XDR) and vehicle quality. Our products monitor and secure millions of vehicles worldwide and help automakers leverage connected vehicle data to deliver cyber resilience, safety, and customer satisfaction, and to increase brand loyalty.
Our Data Engineering & Data Science Group leads the development of our Iceberg-based data platform, including data lake, query engine, and ML-Ops tools, serving as a solid AI-ready foundation for all our products.
At the core of our Engineering Team, you will build and operate scalable, production-grade customer-facing data and ML platform components, focusing on reliability and performance.
Technological background and focus: Iceberg, Trino, Prefect, GitHub Actions, Kubernetes, JupyterHub, MLflow, dbt
This role is full-time and based in Herzliya, Israel.
Responsibilities
Design, build, and maintain scalable data pipelines to ingest and transform batch data on our data lake, enabling analytics and ML with strong data quality, governance, observability, and CI/CD (see the sketch after this list).
Build and expand our foundational data infrastructure, including our data lake, analytics engine, and batch processing frameworks.
Create robust infrastructure to enable automated pipelines that will ingest and process data into our analytical platforms, leveraging open-source, cloud-agnostic frameworks and toolsets.
Develop and maintain our data lake layouts and architectures for efficient data access and advanced analytics.
Build our ML platform and automate the ML lifecycle.
Drive business-wide ML projects from an engineering perspective.
Develop and manage orchestration tools, governance tools, data discovery tools, and more.
Work with other team members of the engineering group, including data architects, data analysts, and data scientists, to provide solutions using a use case-based approach that drives the construction of technical data flows.
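A small, hedged sketch of a Prefect 2.x batch flow appending into an Iceberg table via PySpark, in the spirit of the stack listed above; the catalog, table, and source path are hypothetical:

# Sketch of a Prefect batch flow -- catalog/table/source names are assumptions.
from prefect import flow, task
from pyspark.sql import SparkSession


@task
def extract(source_path: str):
    # Read a batch drop of raw events (assumed Parquet layout)
    spark = SparkSession.builder.appName("events-ingest").getOrCreate()
    return spark.read.parquet(source_path)


@task
def load(df):
    # Append into an Iceberg table registered in the configured catalog (assumed name "lake")
    df.writeTo("lake.events.raw_events").append()


@flow(name="daily-events-ingest")
def ingest(source_path: str = "s3://example-bucket/events/2025-12-01/"):
    load(extract(source_path))


if __name__ == "__main__":
    ingest()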
Requirements:
BSc/BA in Computer Science, Engineering or a related field
At least 5 years of experience with designing and building data pipelines, analytical tools and data lakes
At least 7 years of development experience, using a general purpose programming language (Java, Scala, Kotlin, Go, etc.)
Experience with the data engineering tech stack: ETL & orchestration tools (e.g., Airflow, Argo, Prefect) and distributed data processing tools (e.g., Spark, Kafka, Presto)
Experience with Python is a must
Experience working with open-source products - Big advantage
Experience working in a containerized environment (e.g. k8s) - Advantage.
This position is open to all candidates.
 
23/12/2025
Location: Herzliya
Job Type: Full Time
We are seeking an experienced Data Engineer with a strong background in BI and Data Engineering, excellent problem-solving skills, and the ability to work independently and learn quickly.
Role Overview
You will join a dynamic team responsible for designing and implementing end-to-end data solutions on the Microsoft Fabric platform. This role involves:
Building and maintaining robust data processes.
Driving adoption of new technology within the organization.
Ensuring scalability, performance, and reliability of data solutions.
Responsibilities:
Collaborate with leaders, stakeholders, and product managers to understand data requirements and deliver effective BI solutions.
Design, develop, and maintain robust data pipelines for efficient data ingestion, processing, and storage (see the sketch after this list).
Optimize data storage and retrieval processes for optimal performance.
Monitor data pipelines and systems consistently to identify performance issues, ensuring timely maintenance and troubleshooting.
Create and maintain documentation of data processes and systems.
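A minimal sketch of the kind of ingestion-and-storage step described above, written as standard PySpark such as would run in a Microsoft Fabric notebook; table names, columns, and partitioning are assumptions:

# Illustrative lakehouse load step -- table names, partition column, and filters are assumed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Ingest raw sales events from an assumed staging table
raw = spark.read.table("staging_sales_events")

# Light cleanup before persisting: drop malformed rows, derive a partition column
clean = (
    raw.filter(F.col("event_ts").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

# Persist as a partitioned managed table so BI queries can prune by date
(clean.write.mode("overwrite")
      .partitionBy("event_date")
      .saveAsTable("dw_sales_events"))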
Requirements:
Experience & Skills:
Minimum of 3 years of experience as a BI Developer or Data Engineer.
Proficient in SQL.
Practical experience with Spark.
Experience with ETL/ELT tools and cloud data platforms, preferably Microsoft Fabric.
Understanding of data warehouse and lakehouse architectures.
Familiarity with source control systems (e.g., Azure DevOps or equivalent).
Personal Attributes:
Self-reliant, goal-driven, and committed to quality.
Capable of multitasking and efficient time management.
Excellent interpersonal and communication skills.
This position is open to all candidates.
 
21/12/2025
Confidential company
Location: Herzliya
Job Type: Full Time
We are now looking for a Data Engineer to join our team and play a key role in building and optimizing large-scale Big Data systems in production environments.

Key Responsibilities:
Design, implement, and maintain Big Data pipelines in production.
Work extensively with Apache Spark (2.x and above), focusing on complex joins, shuffle optimization, and performance improvements at scale (see the sketch after this list).
Integrate Spark with relational databases, NoSQL systems, cloud storage, and streaming platforms.
Contribute to system architecture and ensure scalability, reliability, and efficiency in data processing workflows.
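To illustrate the join and shuffle tuning mentioned above, a short PySpark sketch using a broadcast join, adaptive query execution, and explicit repartitioning; the tables, keys, and settings are assumptions rather than recommendations:

# Illustrative join-tuning sketch -- tables, keys, and settings are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join-tuning").getOrCreate()

# Let AQE coalesce shuffle partitions and handle skewed joins at runtime
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")

events = spark.read.parquet("s3://example-bucket/events")     # large fact table
devices = spark.read.parquet("s3://example-bucket/devices")   # small dimension table

# Broadcast the small side to avoid shuffling the large table
enriched = events.join(broadcast(devices), "device_id", "left")

# Repartition by the downstream aggregation key before a wide operation
daily = (
    enriched.repartition("device_id")
            .groupBy("device_id")
            .count()
)
daily.write.mode("overwrite").parquet("s3://example-bucket/out/device_counts")

Whether to broadcast depends on the dimension's size relative to spark.sql.autoBroadcastJoinThreshold; salting the join key is another common remedy for skewed shuffles.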
Requirements:
Proven hands-on experience as a Data Engineer in production Big Data environments.
Hands-on experience in Python development is required.
Expertise in Apache Spark, including advanced performance optimization and troubleshooting.
Practical experience with complex joins, shuffle optimization, and large-scale performance improvements.
Familiarity with relational and NoSQL databases, cloud data storage, and streaming platforms.
Strong understanding of distributed computing principles and Big Data architecture patterns.
This position is open to all candidates.
 
Confidential company
Location: Herzliya
Job Type: Full Time
We are looking for a talented and experienced big data engineer to join our cyber defence group to take over a cutting-edge data solution.

The successful candidate should have hands-on experience with big data processing technologies and architectures.

The candidate will join our growing team of analysts, developers, data scientists, and architects who design and develop innovative solutions to the most critical needs of our customers.

Responsibilities:
Designing, architecting, implementing, and supporting our data pipelines and processing flows.
Collaborate with the analytics department to analyze and understand data sources.
Provide insights and guidance on database technology and data modelling best practices.
Ensure the maintenance of data integrity, managing data and analysis flows with attention to detail and high responsibility (see the sketch after this list).
Implementing algorithms with our data scientists.
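A compact, illustrative PySpark sketch of the kind of data-integrity check referred to above; the rules, thresholds, and columns are assumptions:

# Illustrative integrity checks -- thresholds, columns, and source path are assumed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("s3://example-bucket/ingested/batch")

total = df.count()
null_ids = df.filter(F.col("record_id").isNull()).count()
dupes = total - df.dropDuplicates(["record_id"]).count()

# Fail the pipeline run loudly rather than letting bad data flow downstream
if null_ids > 0 or dupes > total * 0.01:
    raise ValueError(
        f"Integrity check failed: {null_ids} null ids, {dupes} duplicates out of {total} rows"
    )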
Requirements:
BSc/BA in Computer Science or similar.
At least 5 years of proven experience as a Big Data Engineer.
At least 3 years of experience with Python.
Experience with both SQL and NoSQL databases, including Elasticsearch, Splunk, and MongoDB.
Experience with processing of large data sets.
Experience with Linux environment.
Experience with software design and development in a test-driven environment.

Advantages:
Previous experience in the cybersecurity industry or in elite IDF technology units.
Experience with OpenShift, S3, AWS, Docker, Kubernetes, Java and TypeScript.
Experience working with CI/CD software (Jenkins, GitLab).
Experience with AirFlow.
Experience with Nifi.

Personal skills:
Good communication skills, excellent organisational and time management skills, accuracy and attention to detail, high responsibility, independent, self-learner, highly motivated, open-minded, problem solver, team player.

Nice to know:
We have a very strong culture of work-life balance.
You will contribute to amazing projects that do the unbelievable daily.
You will join a great team with good vibes and a can-do approach.
This position is open to all candidates.
 
Location: Herzliya
Job Type: Full Time
Our Data Solutions Group is looking for an exceptional Python developer with Data Engineering experience.
Requirements:
Excellent team player.
Excellent Python developer.
BSc Computer Science or similar.
2+ years of experience with big data frameworks such as Spark or Hadoop.
2+ years of experience managing ETLs in Airflow or similar, multi-source data flows, and processing of large data sets (see the sketch after this list).
3+ years of experience with SQL.
Attention to detail, high responsibility, and an open mind.
Ability to take initiative with a self-motivated attitude and reliability.
Critical thinker and problem-solving skills.
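A minimal Airflow 2.x-style DAG in the spirit of the ETL-management requirement above; the DAG id, schedule, sources, and task bodies are placeholders:

# Illustrative Airflow DAG -- sources, schedule, and task bodies are assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_source_a():
    # Pull a daily batch from source A (placeholder)
    print("extracting source A")


def extract_source_b():
    print("extracting source B")


def merge_and_load():
    # Combine both sources and load into the warehouse (placeholder)
    print("merging and loading")


with DAG(
    dag_id="multi_source_daily_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    a = PythonOperator(task_id="extract_a", python_callable=extract_source_a)
    b = PythonOperator(task_id="extract_b", python_callable=extract_source_b)
    load = PythonOperator(task_id="merge_and_load", python_callable=merge_and_load)

    [a, b] >> load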

Advantages:
Proven capabilities in data analysis and data-driven conclusions.
Experience with the ElasticSearch suite.
Java development experience.
Experience with Linux environment.
Background in the cyber domain.
This position is open to all candidates.
 
15/12/2025
Location: Herzliya
Job Type: Full Time
At Infinidat, we help enterprises and service providers empower their data-driven competitive advantage at scale. We are a leading provider of enterprise-class storage solutions. The company’s software-focused architecture delivers sub-millisecond latency, 100% availability, and scalability with a significantly lower total cost of ownership than competing storage technologies.

We’re looking for passionate and bright individuals who wish to take part in the design, development, and all technical aspects of Infinidat’s Big Data team. The selected candidate will be part of a professional team that develops various applications: from massive infrastructure, through complicated distributed systems, to plug-ins for third-party enterprise applications. Most development is in Python, with some other languages as well, using a wide range of operating systems, technologies, and architectures.

Our big data solutions are cloud-based with secure access, allowing predictive analytics (trends, anomaly detection, planning, etc.), early issue detection while analyzing a massive stream of events, and much more. You can be part of it!



Responsibilities:

* Design and implement RESTful services, as part of the data pipeline, using Python in a Big Data environment (see the sketch after this list).
* Integrate solutions with existing systems and technologies.
* Troubleshoot and debug existing solutions.
* Write and maintain documentation for solutions.
* Participate in code reviews, providing feedback and suggestions for improvement.
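One plausible minimal shape for a Python RESTful service in a data pipeline, sketched here with FastAPI; the framework choice, endpoint, and payload model are assumptions, not the team's actual stack:

# Illustrative REST ingest endpoint -- framework choice, model, and storage are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="events-ingest")


class Event(BaseModel):
    source: str
    payload: dict


@app.post("/events")
def ingest_event(event: Event):
    # In a real pipeline this would enqueue to a broker (e.g. RabbitMQ) or write to storage
    return {"status": "accepted", "source": event.source}


@app.get("/healthz")
def health():
    return {"status": "ok"}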
Requirements:
* 2+ years of strong backend programming and design skills in Python
* Database knowledge - PostgreSQL/MySQL (or similar)
* Strong understanding of OOP principles and design patterns
* Experience in Linux and Docker
* Excellent teamwork and interpersonal communication skills
Preferred Qualifications:
* Development of data pipelines and distributed systems (microservices - advantage)
* Web development and experience with JavaScript
* Used RabbitMQ as a communication broker in one of your projects
* Familiar with Kubernetes - advantage
* Cloud environments (AWS - advantage)
* Experience with ClickHouse - advantage
About Infinidat
Award-winning Infinidat is a global provider of enterprise storage platforms and solutions. Installed in Global Fortune 500 companies around the world supporting various applications and use cases, our software-defined storage architecture delivers microsecond latency, autonomous automation, robust cyber storage resilience, 100% guaranteed availability, and dramatically reduces enterprise storage CAPEX and OPEX. The company’s portfolio has been named a Leader for five consecutive years in the Gartner® Magic Quadrant™ for primary storage and has received numerous other awards from the storage analyst and storage press community. Our success continues to be based on the talent, experience, and motivation of employees who leverage their skills and abilities to make a real impact in the industry. If you have the power to be the best of the best, we would love to have you join the Infinidat family and make a mark as an essential player in our company’s growth.
This position is open to all candidates.
 
30/11/2025
Location: Herzliya
Job Type: Full Time
At Infinidat, we help enterprises and service providers empower their data-driven competitive advantage at scale. We are a leading provider of enterprise-class storage solutions. The company’s software-focused architecture delivers sub-millisecond latency, full availability, and scalability with a significantly lower total cost of ownership than competing storage technologies.

We are looking for a Python Developer to join our growing Hardware Infra team, part of the R&D group. You will play a crucial role in architecting and developing the core infrastructure and tools that power our innovative solutions and ensure all our products are tested, shipped, and maintained to the highest quality standards. If you are innovative, passionate, and looking for creative ways to improve product testing architecture and infrastructure, come and join us!

Responsibilities:

* Enhance and maintain appliance tools that monitor our products in a customer environment.
* Design and develop robust and scalable testing infrastructure and complex test scenarios ensuring high-performance and reliable products (see the sketch after this list).
* Full ownership over the design and implementation of constantly improving web applications, API backends, and appliance tools.
* Collaborate with cross-functional teams including DevOps, Software Engineering, Hardware, and Validation to design and implement efficient, reusable, and reliable applications.
* Investigate and solve hard-to-find problems such as race conditions, hardware failures, delicate timing issues, and problems happening only at large scale.
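A small pytest-style sketch of reusable test scaffolding of the sort described above; the appliance client and its API are hypothetical placeholders:

# Illustrative test scaffolding -- ApplianceClient and its methods are hypothetical.
import pytest


class ApplianceClient:
    """Placeholder client standing in for a real appliance/monitoring API."""

    def __init__(self, host: str):
        self.host = host

    def health(self) -> dict:
        return {"status": "ok", "host": self.host}


@pytest.fixture
def appliance():
    # In real infrastructure this would provision or connect to a lab system
    return ApplianceClient(host="lab-appliance-01")


def test_appliance_reports_healthy(appliance):
    assert appliance.health()["status"] == "ok"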
Requirements:
* At least 4 years of working experience with Python.
* Proven experience in Linux/Unix environments.
* A problem solver, with the ability to work independently and as part of a team.
* Organized, pays attention to small details. Able to manage multiple tasks simultaneously.
* Fluent English speaking and writing skills.
* Experience in Git and GitLab pipeline.
* Proven experience with complex large-scale systems - an advantage.
* Experience with Relational Database Management Systems - an advantage.
* Experience in debugging hardware problems (e.g., server, SSD, JBOD, spinning drives, InfiniBand, firmware, PDU, BBU, etc.) - an advantage.
* Experience with storage hardware elements and protocols (e.g., SAS, SATA, NVMe, NAS, SAN, and similar) - an advantage.
* A Bachelor’s degree in Computer Science/Electrical Engineering or equivalent education - an advantage.
About Infinidat
Infinidat’s enterprise storage portfolio provides global Fortune 500 enterprises and service providers with best-in-class solutions for primary storage, next-generation data protection, disaster recovery, business continuity, and cyber resilience. Infinidat’s acclaimed InfuzeOS is one of the most flexible and complete enterprise software-defined storage architectures in the industry. We recently announced powerful enhancements, an extensive expansion, and the dynamic evolution of our award-winning G4 enterprise cyber and AI storage solutions! Not only has Infinidat won 22 awards in the first half of 2025, but it has also been a 7-time winner of the Gartner® Peer Insights™ Voice of the Customer Award for Primary Storage, and we can go on, and on, and on!
This position is open to all candidates.
 
07/12/2025
Location: Herzliya
Job Type: Full Time
We are looking for a Senior Live Streaming Software Engineer.
As a Senior Software Engineer in our Edge AI group, you will design, develop, and maintain media encoding pipelines and live streaming workflows for both cloud and on-premises environments.
You will build infrastructure, tools, and real-time monitoring systems that ensure reliable live video delivery and operational visibility.
Your work will involve writing code that integrates with Azure resources and extensions, leveraging modern technology stacks and methodologies.
You will be expected to break down complex problems, create clear execution plans, and take full ownership of your code from development through production. Collaboration within a multi-disciplinary team will be key, requiring strong communication skills and alignment with values.
In addition, you will automate quality control and alerting mechanisms to rapidly detect and resolve streaming issues, ensuring a seamless experience for customers streaming camera feeds and viewing live video alongside AI-driven insights.
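As one hedged illustration of the packaging side of such a workflow, a Python wrapper around ffmpeg that segments an RTSP ingest into a rolling HLS playlist; the input URL, codecs, and segment settings are assumptions:

# Illustrative HLS packaging wrapper -- the RTSP URL, codecs, and segment settings are assumptions.
import subprocess
from pathlib import Path


def package_rtsp_to_hls(rtsp_url: str, out_dir: str) -> subprocess.Popen:
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cmd = [
        "ffmpeg",
        "-rtsp_transport", "tcp",          # pull the camera feed over TCP
        "-i", rtsp_url,
        "-c:v", "libx264", "-preset", "veryfast",
        "-c:a", "aac",
        "-f", "hls",
        "-hls_time", "4",                  # 4-second segments
        "-hls_list_size", "6",             # rolling live playlist
        "-hls_flags", "delete_segments",
        f"{out_dir}/stream.m3u8",
    ]
    # Run as a long-lived process; a supervisor would monitor and restart it
    return subprocess.Popen(cmd)


if __name__ == "__main__":
    package_rtsp_to_hls("rtsp://camera.example.local/stream1", "out/live")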
Requirements:
5+ years of experience with SW development using C#, Java, Python or similar language.
Practical experience with AI-based or agentic development tools (e.g., GitHub Copilot Agent, Cursor, Claude Code, Cline).
Highly Familiar with distribution formats such as MPEG-TS, HLS, MPEG-DASH, and CMAF, including segmenting and packaging for live and on-demand delivery.
Solid understanding of end-to-end streaming systems design: ingest (e.g., RTSP), processing/analytics pipelines, packaging/origin, CDN delivery, player behavior, and operational observability (metrics, logging, alerting).
Other:
B.Sc. in Computer Science or equivalent
Ability to automate quality control and alerting for streaming workflows to detect and resolve streaming issues rapidly.
Familiarity with video transport protocols such as RTSP, RTP, RTMP, SRT and related streaming technologies.
WebRTC experience for interactive streaming scenarios.
Proven experience with real-time or streaming data processing (e.g. Kafka or similar).
Proven ability to lead complex tasks in unfamiliar domains and deliver them to production.
Experience developing and operating code in cloud environments (Azure / AWS / GCP).
Experience with Docker, Kubernetes, and modern CI/CD practices.
Team player with proven communication skills
A proactive, ownership-driven approach with the ability to lead complex projects end-to-end.
Proven problem-solving and coding skills with a passion for elegant architecture.
Demonstrated ability to think creatively and remain resilient to change.
Hands-on experience with DeepStream (by NVIDIA) for building and operating real-time video analytics and streaming pipelines
This position is open to all candidates.
 
Confidential company
Location: Herzliya
Job Type: Full Time
We're looking for an experienced MLOps Engineer to join our team and help design, implement, and maintain scalable machine learning infrastructure and data processing pipelines.
The ideal candidate is passionate about operational excellence, automation, and building reliable systems that empower data scientists and engineers alike.
This role is responsible for enhancing, automating, monitoring, and optimizing data pipelines that collect, transform, cache, index, and manage large-scale datasets.
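As a tiny, illustrative Python sketch of the monitoring-and-automation flavour of this work, a timing-and-logging wrapper around a pipeline step; the step body and metric sink are placeholders:

# Illustrative pipeline-step monitoring -- the step body and metric sink are placeholders.
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")


def monitored(step):
    """Log duration and success/failure of a pipeline step."""
    @functools.wraps(step)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            result = step(*args, **kwargs)
            log.info("%s succeeded in %.2fs", step.__name__, time.monotonic() - start)
            return result
        except Exception:
            log.exception("%s failed after %.2fs", step.__name__, time.monotonic() - start)
            raise
    return wrapper


@monitored
def transform_batch(n_rows: int) -> int:
    # Placeholder for a real transform over a large dataset
    return n_rows * 2


if __name__ == "__main__":
    transform_batch(1_000_000)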
Requirements:
5+ years of hands-on experience in MLOps, with a focus on Python-based ML workflows
Experience with containerization and orchestration tools (e.g., Docker, Kubernetes)
Solid understanding of data engineering principles, model serving, and monitoring
Familiarity with cloud-based AI/ML solutions, especially AWS - a strong advantage
Familiarity with Rust - an advantage.
Strong interpersonal skills
Technologically versatile and a quick learner
Strong drive to build robust, sustainable solutions
This position is open to all candidates.
 