Jobs » Data » Data Engineer

Confidential company
Location: Merkaz
Job Type: Full Time
abra R&D is looking for a Data Engineer to join the team and contribute to AI-related projects. The role involves handling large volumes of incoming data, performing deep analysis, and collaborating closely with Data Scientists. You will be responsible for designing and developing critical, diverse, and large-scale data pipelines in both cloud and on-premise environments.
Requirements:
* Minimum 5 years of experience as a Data Engineer – mandatory
* 5 years of experience working with Object-Oriented Programming (OOP) languages – mandatory
* 5 years of hands-on experience with Python – mandatory
* Hands-on experience with Spark for large-scale data processing – mandatory
* At least 2 years of practical experience with AWS, including services such as Athena, Glue, Step Functions, EMR, Redshift, and RDS – strong advantage
* Deep understanding of design, development, and optimization of complex solutions handling or processing large-scale data
* Familiarity with optimization techniques and working with data partitioning and formats such as Parquet, Avro, HDF5, Delta Lake
* Experience working with Docker, Linux, CI/CD tools, and Kubernetes
* Experience with data pipeline orchestration tools like Airflow or Kubeflow
* Bachelor’s degree in Computer Science, Engineering, Mathematics, or Statistics – mandatory
* Understanding of machine learning concepts and workflows
* Familiarity with GenAI solutions or prompt engineering – advantage
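As a rough illustration of the partitioning-and-formats requirement above, here is a minimal, dependency-free Python sketch (column names are hypothetical) of the hive-style `key=value` directory layout that Spark produces when writing partitioned Parquet:

```python
from collections import defaultdict

def partition_records(records, keys):
    """Group records by hive-style partition path (e.g. dt=2025-07-01/region=eu),
    the directory layout Spark writes for partitioned Parquet tables."""
    parts = defaultdict(list)
    for rec in records:
        path = "/".join(f"{k}={rec[k]}" for k in keys)
        parts[path].append(rec)
    return dict(parts)

rows = [
    {"dt": "2025-07-01", "region": "eu", "clicks": 10},
    {"dt": "2025-07-01", "region": "us", "clicks": 7},
    {"dt": "2025-07-02", "region": "eu", "clicks": 3},
]
layout = partition_records(rows, ["dt", "region"])
# three partition directories, one per distinct (dt, region) pair
```

In a real pipeline this layout is what `df.write.partitionBy("dt", "region").parquet(...)` produces, and it is what lets engines such as Athena or Spark prune partitions instead of scanning the full dataset.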
This position is open to all candidates.
 
Job ID: 8260993
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Data Engineer for the Insight Team, joining Lusha's Data Group and a new team responsible for developing innovative features based on multiple layers of data. These features will power recommendation systems, insights, and more. This role involves close collaboration with the core teams within the Data Group, working on diverse data pipelines that tackle challenges related to scale and algorithmic optimization, all aimed at enhancing the data experience for Lusha's customers.
What will you be responsible for?
Develop and implement robust, scalable data pipelines and integration solutions within Lusha's Databricks-based environment.
Develop models and implement algorithms, with a strong emphasis on delivering high-quality results.
Leverage technologies like Spark, Kafka, and Airflow to tackle complex data challenges and enhance business operations.
Design innovative data solutions that support millions of data points, ensuring high performance and reliability.
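The streaming-aggregation challenges above can be sketched, under simplifying assumptions (in-memory events, integer timestamps, no late data), with the tumbling-window pattern that Spark Structured Streaming and Kafka Streams implement at scale:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_sec):
    """Assign each (timestamp, key) event to a fixed-size tumbling window
    and count events per (window_start, key) -- the basic aggregation
    behind stream-processing jobs."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_sec)
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(3, "a"), (7, "a"), (12, "b"), (14, "a")]
result = tumbling_window_counts(events, 10)
# {(0, 'a'): 2, (10, 'b'): 1, (10, 'a'): 1}
```

Production systems add what this sketch omits: watermarks for late events, state checkpointing, and exactly-once delivery.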
Requirements:
3+ years of hands-on experience in data engineering, including building and optimizing scalable data pipelines
5+ years of experience as a software developer, preferably in Python
Strong algorithmic background, including development and optimization of machine learning models and implementation of advanced data algorithms
Experience working with cloud ecosystems, preferably AWS (S3, Glue, EMR, Redshift, Athena) or equivalent platforms (Azure, GCP)
Expertise in extracting, ingesting, and transforming large-scale datasets in an efficient and reliable manner
Deep knowledge of big data platforms such as Apache Spark, Databricks, Elasticsearch, and Kafka, particularly for real-time data streaming and processing
(Nice-to-have) Hands-on experience working with Vector Databases and embedding techniques, with a focus on search, recommendations, and personalization.
AI-savvy: comfortable working with AI tools and staying ahead of emerging trends.
This position is open to all candidates.
 
Job ID: 8280793
Posted: 01/07/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are making the future of Mobility come to life starting today.
At our company we support the world's largest vehicle fleet operators and transportation providers to optimize existing operations and seamlessly launch new, dynamic business models - driving efficient operations and maximizing utilization.
At the heart of our platform lies the data infrastructure, driving advanced machine learning models and optimization algorithms. As the owner of data pipelines, you'll tackle diverse challenges spanning optimization, prediction, modeling, inference, transportation, and mapping.
As a Senior Data Engineer, you will play a key role in owning and scaling the backend data infrastructure that powers our platform, supporting real-time optimization, advanced analytics, and machine learning applications.
What You'll Do
Design, implement, and maintain robust, scalable data pipelines for batch and real-time processing using Spark and other modern tools.
Own the backend data infrastructure, including ingestion, transformation, validation, and orchestration of large-scale datasets.
Leverage Google Cloud Platform (GCP) services to architect and operate scalable, secure, and cost-effective data solutions across the pipeline lifecycle.
Develop and optimize ETL/ELT workflows across multiple environments to support internal applications, analytics, and machine learning workflows.
Build and maintain data marts and data models with a focus on performance, data quality, and long-term maintainability.
Collaborate with cross-functional teams including development teams, product managers, and external stakeholders to understand and translate data requirements into scalable solutions.
Help drive architectural decisions around distributed data processing, pipeline reliability, and scalability.
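The validation stage named above can be illustrated with a small, self-contained sketch (the field names are invented for the example) that splits an ingested batch into valid rows and rejects:

```python
def validate_batch(rows, schema):
    """Split an ingested batch into valid rows and rejects, based on a
    simple column->type schema (the validation stage of a pipeline)."""
    valid, rejects = [], []
    for row in rows:
        ok = all(isinstance(row.get(col), typ) for col, typ in schema.items())
        (valid if ok else rejects).append(row)
    return valid, rejects

# Hypothetical ride-event schema; real pipelines use richer contracts
# (nullability, value ranges, referential checks).
schema = {"ride_id": str, "fare": float}
rows = [{"ride_id": "r1", "fare": 12.5}, {"ride_id": "r2", "fare": "N/A"}]
valid, rejects = validate_batch(rows, schema)
```

Routing rejects to a quarantine table instead of failing the whole batch is a common design choice, since it keeps the pipeline flowing while bad records are investigated.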
Requirements:
4+ years in backend data engineering or infrastructure-focused software development.
Proficient in Python, with experience building production-grade data services.
Solid understanding of SQL
Proven track record designing and operating scalable, low-latency data pipelines (batch and streaming).
Experience building and maintaining data platforms, including lakes, pipelines, and developer tooling.
Familiar with orchestration tools like Airflow, and modern CI/CD practices.
Comfortable working in cloud-native environments (AWS, GCP), including containerization (e.g., Docker, Kubernetes).
Bonus: Experience working with GCP
Bonus: Experience with data quality monitoring and alerting
Bonus: Strong hands-on experience with Spark for distributed data processing at scale.
Degree in Computer Science, Engineering, or related field.
This position is open to all candidates.
 
Job ID: 8238970
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Data Engineer to design and implement high-scale, data-intensive platforms, research and develop algorithmic solutions, and collaborate on key company initiatives. You will play a critical role within core data teams, which are responsible for managing and optimizing fundamental data assets.
What will you be responsible for?
Solve Complex Business Problems with Scalable Data Solutions
Develop and implement robust, high-scale data pipelines to power core assets.
Leverage cutting-edge technologies to tackle complex data challenges and enhance business operations.
Collaborate with Business Stakeholders to Drive Impact
Work closely with Product, Data Science, and Analytics teams to define priorities and develop solutions that directly enhance core products and user experience.
Build and Maintain a Scalable Data Infrastructure
Design and implement scalable, high-performance data infrastructure to support machine learning, analytics, and real-time data processing.
Continuously monitor and optimize data pipelines to ensure reliability, accuracy, and efficiency.
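One concrete form of the pipeline monitoring described above is a freshness check; a minimal sketch (the timestamps and SLA threshold are illustrative):

```python
import time

def check_freshness(last_update_ts, max_lag_sec, now=None):
    """Freshness monitor: report whether the newest data is within the
    allowed lag (SLA), and how far behind it is."""
    now = time.time() if now is None else now
    lag = now - last_update_ts
    return lag <= max_lag_sec, lag

# Fixed timestamps so the example is deterministic.
fresh, lag = check_freshness(last_update_ts=1_000, max_lag_sec=3_600, now=2_000)
# lag of 1000 seconds is inside the one-hour SLA
```

In practice such a check runs on a schedule against each table's latest partition timestamp and pages the on-call engineer when the SLA is breached.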
Requirements:
3+ years of hands-on experience designing and implementing large-scale, server-side data solutions
4+ years of programming experience, preferably in Python and SQL, with a strong understanding of data structures and algorithms
Proven experience in building algorithmic solutions, data mining, and applying analytical methodologies to optimize data processing and insights
Proficiency with orchestration tools such as Airflow, Kubernetes, and Docker Swarm, ensuring seamless workflow automation
Experience working with Data Lakes and Apache Spark for processing large-scale datasets – strong advantage
Familiarity with AWS services (S3, Glue, EMR, Redshift) – nice to have
Knowledge of tools such as Kafka, Databricks, and Jenkins – a plus
Strong command of a variety of storage engines, including Relational (PostgreSQL, MySQL), Document-based (MongoDB), Time-series / Search (ClickHouse, Elasticsearch), Key-value (Redis)
Comfortable working with AI tools and staying ahead of emerging technologies and trends
This position is open to all candidates.
 
Job ID: 8280797
Posted: 5 days ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Data Engineer to join our team and help advance our Apps solution. Our product is designed to provide detailed and accurate insights into Apps Analytics, such as traffic estimation, revenue analysis, and app characterization. The role involves constructing and maintaining scalable data pipelines, developing and integrating machine learning models, and ensuring data integrity and efficiency. You will work closely with a diverse team of scientists, engineers, analysts, and collaborate with business and product stakeholders.
Key Responsibilities:
Develop and implement complex, innovative big data ML algorithms for new features, working in collaboration with data scientists and analysts.
Optimize and maintain end-to-end data pipelines using big data technologies to ensure efficiency and performance.
Monitor data pipelines to ensure data integrity and promptly troubleshoot any issues that arise.
Requirements:
Bachelor's degree in Computer Science or equivalent practical experience.
At least 3 years of experience in data engineering or related roles.
Experience with big data Machine Learning - a must!
Proficiency in Python - a must; Scala is a plus.
Experience with Big Data technologies including Spark, EMR and Airflow.
Experience with containerization/orchestration platforms such as Docker and Kubernetes.
Familiarity with distributed computing on the cloud (such as AWS or GCP).
Strong problem-solving skills and ability to learn new technologies quickly.
Being goal-driven and efficient.
Excellent communication skills and ability to work independently and in a team.
This position is open to all candidates.
 
Job ID: 8276114
Posted: 14/07/2025
Location: Tel Aviv-Yafo and Netanya
Job Type: Full Time
At our company, we're reinventing DevOps and MLOps to help the world's greatest companies innovate -- and we want you along for the ride. This is a special place with a unique combination of brilliance, spirit and just all-around great people. Here, if you're willing to do more, your career can take off. And since software plays a central role in everyone's lives, you'll be part of an important mission. Thousands of customers, including the majority of the Fortune 100, trust our company to manage, accelerate, and secure their software delivery from code to production - a concept we call liquid software. Wouldn't it be amazing if you could join us in our journey?
About the Team
We are seeking a highly skilled Senior Data Engineer to join our company's ML Data Group and help drive the development and optimization of our cutting-edge data infrastructure. As a key member of the company's ML Platform team, you will play an instrumental role in building and evolving our feature store data pipeline, enabling machine learning teams to efficiently access and work with high-quality, real-time data at scale.
In this dynamic, fast-paced environment, you will collaborate with other data professionals to create robust, scalable data solutions. You will be responsible for architecting, designing, and implementing data pipelines that ensure reliable data ingestion, transformation, and storage, ultimately supporting the production of high-performance ML models.
We are looking for data-driven problem-solvers who thrive in ambiguous, fast-moving environments and are passionate about building data systems that empower teams to innovate and scale. We value independent thinkers with a strong sense of ownership, who can take challenges from concept to production while continuously improving our data infrastructure.
As a Data Engineer in our company's ML group, you will...
Design and implement large-scale batch & streaming data pipelines infrastructure
Build and optimize data workflows for maximum reliability and performance
Develop solutions for real-time data processing and analytics
Implement data consistency checks and quality assurance processes
Design and maintain state management systems for distributed data processing
Take a crucial role in building the group's engineering culture, tools, and methodologies
Define abstractions, methodologies, and coding standards for the entire Data Engineering pipeline.
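As a toy illustration of the feature-store read path and state management mentioned above (the class, names, and TTL policy are assumptions for the example, not the company's design):

```python
import time

class FeatureStore:
    """Minimal in-memory sketch of a feature store's online read path:
    features are written with a timestamp, and reads enforce a max age
    so models never serve on stale values."""

    def __init__(self, max_age_sec):
        self.max_age_sec = max_age_sec
        self._data = {}

    def put(self, entity_id, features, ts=None):
        self._data[entity_id] = (time.time() if ts is None else ts, features)

    def get(self, entity_id, now=None):
        now = time.time() if now is None else now
        ts, features = self._data.get(entity_id, (None, None))
        if ts is None or now - ts > self.max_age_sec:
            return None  # missing or stale
        return features

store = FeatureStore(max_age_sec=60)
store.put("user:42", {"clicks_7d": 19}, ts=100)
assert store.get("user:42", now=130) == {"clicks_7d": 19}
assert store.get("user:42", now=500) is None  # stale after 60s
```

A production feature store adds what this omits: a durable backing store, point-in-time-correct offline retrieval for training, and streaming materialization of fresh values.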
Requirements:
5+ years of experience as a Software Engineer with focus on data engineering
Expert knowledge in building and maintaining data pipelines at scale
Strong experience with stream/batch processing frameworks (e.g. Apache Spark, Flink)
Profound understanding of message brokers (e.g. Kafka, RabbitMQ)
Experience with data warehousing and lake technologies
Strong Python programming skills and experience building data engineering tools
Experience with designing and maintaining Python SDKs
Proficiency in Java for data processing applications
Understanding of data modeling and optimization techniques
Bonus Points
Experience with ML model deployment and maintenance in production
Knowledge of data governance and compliance requirements
Experience with real-time analytics and processing
Understanding of distributed systems and cloud architectures
Experience with data visualization and lineage tools/frameworks and techniques.
This position is open to all candidates.
 
Job ID: 8257535
Confidential company
Location: Tel Aviv-Yafo
Job Type: More than one
We're seeking an outstanding and passionate Data Platform Engineer to join our growing R&D team.
You will work in an energetic startup environment following Agile concepts and methodologies. Joining the company at this unique and exciting stage in our growth journey creates an exceptional opportunity to take part in shaping Finaloop's data infrastructure at the forefront of Fintech and AI.
What you'll do:
Design, build, and maintain scalable data pipelines and ETL processes for our financial data platform.
Develop and optimize data infrastructure to support real-time analytics and reporting.
Implement data governance, security, and privacy controls to ensure data quality and compliance.
Create and maintain documentation for data platforms and processes
Collaborate with data scientists and analysts to deliver actionable insights to our customers.
Troubleshoot and resolve data infrastructure issues efficiently
Monitor system performance and implement optimizations
Stay current with emerging technologies and implement innovative solutions
Tech stack: AWS Serverless, Python, Airflow, Airbyte, Temporal, PostgreSQL, Snowflake, Kubernetes, Terraform, Docker.
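A recurring concern for pipelines in a stack like the one above is idempotent loading: re-running a batch must not duplicate rows. A sketch with stdlib sqlite3 and a hypothetical table; the same upsert pattern maps to PostgreSQL's `INSERT ... ON CONFLICT` or Snowflake's `MERGE`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tx (tx_id TEXT PRIMARY KEY, amount REAL)")

def load_batch(conn, batch):
    # Upsert on the natural key: replaying the same batch is a no-op
    # apart from refreshing the amount.
    conn.executemany(
        "INSERT INTO tx (tx_id, amount) VALUES (?, ?) "
        "ON CONFLICT(tx_id) DO UPDATE SET amount = excluded.amount",
        batch,
    )

batch = [("t1", 10.0), ("t2", 20.0)]
load_batch(conn, batch)
load_batch(conn, batch)  # replay the same batch
count = conn.execute("SELECT COUNT(*) FROM tx").fetchone()[0]
# still two rows: the load is idempotent
```

This matters because orchestrators like Airflow and Temporal retry failed tasks; idempotent writes make those retries safe.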
Requirements:
3+ years experience in data engineering or platform engineering roles
Strong programming skills in Python and SQL
Experience with orchestration platforms like Airflow/Dagster/Temporal
Experience with MPPs like Snowflake/Redshift/Databricks
Hands-on experience with cloud platforms (AWS) and their data services
Understanding of data modeling, data warehousing, and data lake concepts
Ability to optimize data infrastructure for performance and reliability
Experience working with containerization (Docker) in Kubernetes environments.
Familiarity with CI/CD concepts
Fluent in English, both written and verbal
And it would be great if you have (optional):
Experience with big data processing frameworks (Apache Spark, Hadoop)
Experience with stream processing technologies (Flink, Kafka, Kinesis)
Knowledge of infrastructure as code (Terraform)
Experience building analytics platforms
Experience building clickstream pipelines
Familiarity with machine learning workflows and MLOps
Experience working in a startup environment or fintech industry
This position is open to all candidates.
 
Job ID: 8232260
Location: Tel Aviv-Yafo
Job Type: Full Time
Now we're looking for an experienced Software Engineer to join our Data team. In this key role, you will develop our data platform, working on cloud-based microservices and data pipelines. As we build the company's data platform, your work has a direct impact on our customers and also enables many other groups in the organization. You will also be responsible for building microservices that work at big data scale (about 250K records/sec) with low latency, as part of a dynamic team.
Key Responsibilities:
End-to-end development of the company's massive data infrastructures and services.
Researching new technologies and adapting them for use in the company's product.
Working closely with the product, DevOps, and security teams.
Requirements:
6+ years of experience with large-scale data systems and platforms (Storm, Spark, Kafka, SQS...) and design principles (Data Modeling, Streaming vs. Batch processing, Distributed Messaging...)
Expertise in one or more of the following languages: Java, Scala, Go
Hands-on experience with the design and development of production large-scale distributed systems, with an emphasis on performance
Familiarity with NoSQL and relational DBs. We're using technologies such as Elasticsearch, MySQL, ClickHouse, and Redis
Deep understanding of Object-Oriented Programming and software engineering principles
Experience with microservices, k8s - Advantage
Familiar with AWS platform- Advantage
Motivated fast independent learner and great at problem-solving
A team player with excellent collaboration and communication skills.
BSc in Computer Science from a known university.
This position is open to all candidates.
 
Job ID: 8276872
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Main responsibilities:
Provide the direction of our data architecture. Determine the right tools for the right jobs. We collaborate on the requirements and then you call the shots on what gets built.
Manage end-to-end execution of high-performance, large-scale data-driven projects, including design, implementation, and ongoing maintenance.
Optimize and monitor the team-related cloud costs.
Design and construct monitoring tools to ensure the efficiency and reliability of data processes.
Implement CI/CD for Data Workflows.
Requirements:
5+ Years of Experience in data engineering and big data at large scales. - Must
Extensive experience with modern data stack - Must:
Snowflake, Delta Lake, Iceberg, BigQuery, Redshift
Kafka, RabbitMQ, or similar for real-time data processing.
Pyspark, Databricks
Strong software development background with Python/OOP and hands-on experience in building large-scale data pipelines. - Must
Hands-on experience with Docker and Kubernetes. - Must
Expertise in ETL development, data modeling, and data warehousing best practices.
Knowledge of monitoring & observability (Datadog, Prometheus, ELK, etc)
Experience with infrastructure as code, deployment automation, and CI/CD.
Practices using tools such as Helm, ArgoCD, Terraform, GitHub Actions, and Jenkins.
This position is open to all candidates.
 
Job ID: 8255612
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
As a Lead Data Engineer, you'll join a super-talented and motivated engineering team and take a key role in our Data Engineering team at our TLV site. You will lead development and work with complex transportation data, designing streamlined systems that process data from ride bookings, payments, and more, ensuring it's clean, fast, and serves our business needs.

What You'll Do:
You will be responsible for scalable data, pipeline, and backend services with challenging data algorithms.
Participate in the end-to-end development cycle of architecture, design, development, QA, deployment, and monitoring.
Work with talented engineers, and a broad forum of experts and stakeholders to create a stable and scalable product.
Build highly scalable, efficient, and available data pipelines/solutions, serving our data across the organization and to stakeholders.
Monitor data quality, proactively correcting discrepancies ensuring data reliability & accuracy.
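The data-quality monitoring described above can be sketched as a per-partition row-count reconciliation between source and target (the partition names and tolerance are illustrative):

```python
def reconcile_counts(source_counts, target_counts, tolerance=0.0):
    """Compare row counts per partition between a source system and the
    loaded target, flagging partitions whose relative difference exceeds
    the tolerance -- a basic discrepancy check for a bookings/payments
    pipeline."""
    discrepancies = {}
    for part, src in source_counts.items():
        tgt = target_counts.get(part, 0)
        diff = abs(src - tgt) / src if src else float(tgt > 0)
        if diff > tolerance:
            discrepancies[part] = (src, tgt)
    return discrepancies

src = {"2025-07-01": 1000, "2025-07-02": 500}
tgt = {"2025-07-01": 1000, "2025-07-02": 480}
bad = reconcile_counts(src, tgt, tolerance=0.01)
# only 2025-07-02 exceeds the 1% tolerance
```

Count reconciliation is deliberately cheap; teams typically layer checksum or sampling comparisons on top for partitions this check flags.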
Requirements:
Passionate about data, with the ability to translate business needs into effective data models.
A self-driven learner, comfortable working independently in a complex technological environment.
Technologically curious, capable of identifying needs, designing POCs, and implementing scalable solutions.
Minimum of 4+ years of experience as a Data Engineer or BI developer - must.
Minimum of 4+ years of experience with Python / SQL - must.
Experience with dbt / Airflow - a big advantage.
Experience processing data over Iceberg storage - a big advantage.
Experience working with BI tools such as Looker / Tableau - an advantage.
Experience executing Big Data solutions and processing relational DBs - an advantage.
Experience working with AWS big data tools such as Glue, EMR, Athena, etc. - an advantage.
BSc/MSc in Computer Science or equivalent.
This position is open to all candidates.
 
Job ID: 8259932
Posted: 20/07/2025
Confidential company
Location: Herzliya
Job Type: Full Time
As a Senior Data Engineer within our development team, you will play an important role in the design, implementation, and ongoing performance optimization of everything data-oriented in the R&D department, including ETL/ELT processes, working closely with architects and development teams.

Your duties and responsibilities include:
Lead the design and implementation of scalable data architecture solutions.
Develop, construct, test, and maintain data architectures (e.g., databases, large-scale processing systems).
Identify ways to improve data reliability, efficiency, and quality.
Collaborate with data scientists, data analysts, and other stakeholders to achieve optimal data performance.
Optimize complex SQL queries and perform performance tuning.
Participate in Scrum and Agile development processes.
Implement and manage relational and non-relational databases.
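The SQL performance-tuning duty above often starts with reading query plans. A minimal stdlib sqlite3 sketch (hypothetical table) showing how an index turns a full scan into an index search; the same workflow applies with EXPLAIN in PostgreSQL or MSSQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)"
)

query = "SELECT * FROM orders WHERE customer_id = 7"

# Without an index the planner falls back to a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]
# plan_before reports a SCAN; plan_after searches via idx_orders_customer
```

Reading the plan before and after a change is the core loop of tuning: confirm the planner actually uses the index rather than assuming it does.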
Requirements:
5+ years of experience leading, designing, and implementing data architecture at an enterprise level.
5+ years of experience working with relational databases (for example, PostgreSQL, MSSQL or MySQL).
2+ years of experience working with Elasticsearch.
2+ years of experience working with non-relational databases (for example, MongoDB or Cassandra).
Exceptionally strong SQL development skills - complex queries and performance tuning.
Experience implementing and managing ETL/ELT processes for data integration and transformation.
Proficiency in Azure and Azure Data Factory for data integration and transformation.
Experience with Databricks or similar data processing platforms.
Experience with Scrum and Agile methodology.
Fluent in spoken and written English.
Advantages
Experience with big data technologies such as Hadoop, Spark, or Kafka.
Familiarity with data warehousing solutions.
Knowledge of data governance and data security practices.
Certification in Azure or other relevant technologies.
University or college degree in Computer Science or a related discipline.
This position is open to all candidates.
 
Job ID: 8266639