Data Engineer

14/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an experienced Data Engineering Team Leader.
In this role, you will lead and strengthen our Data Team, drive innovation, and ensure the robustness of our data and analytics platforms.
A day in the life and how you'll make an impact:
Drive the technical strategy and roadmap for the data engineering function, ensuring alignment with overall business objectives.
Own the design, development, and evolution of scalable, high-performance data pipelines to enable diverse and growing business needs.
Establish and enforce a strong data governance framework, including comprehensive data quality standards, monitoring, and security protocols, taking full accountability for data integrity and reliability.
Lead the continuous enhancement and optimization of the data analytics platform and infrastructure, focusing on performance, scalability, and cost efficiency.
Champion the complete data lifecycle, from robust infrastructure and data ingestion to detailed analysis and automated reporting, to maximize the strategic value of data and drive business growth.
Requirements:
5+ years of Data Engineering experience (preferably in a startup), with a focus on designing and implementing scalable, analytics-ready data models and cloud data warehouses (e.g., BigQuery, Snowflake).
Minimum 3 years in a leadership role, with a proven history of guiding teams to success.
Expertise in modern data orchestration and transformation frameworks (e.g., Airflow, DBT).
Deep knowledge of databases (schema design, query optimization) and familiarity with NoSQL use cases.
Solid understanding of cloud data services (e.g., AWS, GCP) and streaming platforms (e.g., Kafka, Pub/Sub).
Fluent in Python and SQL, with a backend development focus (services, APIs, CI/CD).
Excellent communication skills, capable of simplifying complex technical concepts.
Experience with, or strong interest in, leveraging AI and automation for efficiency gains.
Passionate about technology, proactively identifying and implementing tools to enhance development velocity and maintain high standards.
Adaptable and resilient in dynamic, fast-paced environments, consistently delivering results with a strong can-do attitude.
B.Sc. in Computer Science / Engineering or equivalent.
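The "schema design, query optimization" requirement above can be illustrated with a minimal, self-contained sketch: adding an index changes a query's execution plan from a full table scan to an index lookup. The table and data here are made up for illustration; SQLite's stdlib driver stands in for the warehouse engines the posting names.

```python
import sqlite3

# Hypothetical example: observe SQLite's query plan before and after indexing.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, ts TEXT, payload TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(i % 100, f"2026-01-{i % 28 + 1:02d}", "x") for i in range(1000)],
)

def plan(query):
    # EXPLAIN QUERY PLAN rows end with a human-readable "detail" column
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + query))

q = "SELECT * FROM events WHERE user_id = 42"
before = plan(q)  # e.g. "SCAN events" (full table scan)
conn.execute("CREATE INDEX idx_events_user ON events(user_id)")
after = plan(q)   # now an index search via idx_events_user
print(before)
print(after)
```

The same before/after discipline applies to BigQuery or Snowflake, just with their own plan viewers.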
This position is open to all candidates.
 
Job ID: 8610119
13/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
Own accuracy and coverage of detection capabilities such as: technology fingerprinting, SSO/login identification, vulnerability-to-technology mapping
Build AI-powered workflows (LLM classifiers, automated rule generation) to scale detection beyond manual rule-writing
Design eval frameworks and feedback loops - golden datasets, precision/recall tracking, regression testing - to keep quality high as automation grows
Tune IP/domain discovery logic: decision rules, blacklists, thresholds, unstructured data parsing
Extend BI pipelines and schemas to enrich asset data
Investigate customer-reported detection gaps; root-cause and fix systematically
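The "eval frameworks" bullet above can be sketched in a few lines: score automated detections against a hand-labeled golden dataset and track precision/recall as automation grows. The site names and labels below are invented illustrative data, not part of the posting.

```python
# Hypothetical golden dataset: site -> true SSO technology (None = no SSO).
golden = {"site-a": "okta", "site-b": "auth0", "site-c": None, "site-d": "okta"}
# Hypothetical output of an automated classifier for the same sites.
predicted = {"site-a": "okta", "site-b": "okta", "site-c": None, "site-d": "okta"}

# A prediction counts as a true positive only if it matches the golden label.
tp = sum(1 for k, v in predicted.items() if v is not None and golden[k] == v)
fp = sum(1 for k, v in predicted.items() if v is not None and golden[k] != v)
fn = sum(1 for k, v in golden.items() if v is not None and predicted[k] != v)

precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"precision={precision:.2f} recall={recall:.2f}")
```

Tracking these two numbers per release is the simplest form of the regression testing the posting describes.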
Requirements:
Python (scripting, automation, data analysis)
Hands-on AI/LLM engineering - building multi-step pipelines with LLM APIs, not just prompting
Evaluation mindset - experience measuring and maintaining accuracy of automated systems
Regex, SQL, ETL concepts
Web fundamentals (HTML, HTTP, JS) and basic networking (DNS, WHOIS, CIDR)
Familiarity with CVE/CPE vulnerability ecosystem
Git workflow
This position is open to all candidates.
 
Job ID: 8608745
13/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Engineer.
As a Senior Data Engineer, you will help us design and build a flexible, scalable system that allows our business to move fast and innovate. You will be expected to show ownership of and responsibility for the code you write, but it doesn't stop there: you are encouraged to think big and help out in other areas as well.
Responsibilities:
Designing and writing code that is critical for business growth
Mastering scalability and enterprise-grade SaaS product implementation
Sense of ownership - leading design for new products and initiatives as well as integrating with currently implemented best-practices
Building and owning production-grade ETL/ELT pipelines that power analytics, ML training, and real-time AI systems
Designing data architectures that support agentic systems, including:
Embeddings and vector-based retrieval
RAG pipelines
Feedback loops and continuous improvement
Review your peers' designs and code
Work closely with product managers, peer engineers, and business stakeholders
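The "embeddings and vector-based retrieval" bullet above reduces, at its core, to ranking documents by similarity to a query vector. A minimal sketch, with toy 3-dimensional vectors standing in for real embeddings and a dict standing in for a vector database:

```python
import math

# Hypothetical document embeddings (real systems use an embedding model).
docs = {
    "invoice_guide": [0.9, 0.1, 0.0],
    "refund_policy": [0.2, 0.8, 0.1],
    "api_reference": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    # cosine similarity: dot product normalized by vector magnitudes
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve(query_vec, k=2):
    # top-k document ids, most similar first
    return sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)[:k]

print(retrieve([0.85, 0.15, 0.05]))
```

In a RAG pipeline the retrieved documents would then be injected into the LLM prompt as context.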
Requirements:
5+ years of hands-on experience as a Software Engineer with strong Python skills (TypeScript / Node.js is a plus)
Hands-on experience managing infrastructure on major cloud vendors (AWS, GCP, Azure)
Proficiency with SQL, modeling and working with relational and non-relational databases, and pushing them past their limits
Hands-on experience designing and implementing ML-aware data pipelines (Spark, Airflow), distributed systems, and RESTful APIs
Experience or strong interest in LLMs and agentic systems, including:
Agentic workflows (LangChain, LangGraph)
RAG patterns, Vector databases and embeddings
Evaluating and monitoring AI-driven systems (Langfuse, LangSmith)
Familiarity with ML & AI tooling, such as:
Feature stores, training pipelines, or model-serving data flows
ML platforms (MLflow, SageMaker, Vertex, etc.)
Experience with CI/CD, Docker, and Kubernetes
The ability to lead new features from design to implementation, taking into consideration topics such as performance, scalability, and impact on the greater system
Comfortable operating in a fast-moving startup with high ownership and low process
Enjoy communicating and collaborating, sharing your ideas and being open to honest feedback
This position is open to all candidates.
 
Job ID: 8608554
13/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Data Engineer
Tel Aviv, Israel
About us:
We are international Multi-Cloud experts, utilizing the power of the cloud for smart digital transformation. With 5 sites across 4 continents, 450+ experts, 1,000+ customers, and 30+ years of proven experience, our mission is to deliver the best Multi-Cloud service to our customers, accelerate their business, and help them grow. As tech-savvies, our teams are constantly developing new strategies and tools that help our customers stay on top of their game and improve cloud performance, spending, visibility, control, and automation. Our cloud experts will make any digital transformation a quick, smart, and easy process.
What You'll Do:
Design, build, and maintain data pipelines and infrastructure
Develop and implement data quality checks and monitoring processes
Work with engineers to integrate data into our systems and applications
Collaborate with scientists and analysts to understand their data needs.
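The "data quality checks" responsibility above can be sketched as rule-based row validation before data is loaded downstream. The field names and rules here are hypothetical, chosen only to show the pattern:

```python
# Hypothetical validation rules: field name -> predicate the value must satisfy.
RULES = {
    "user_id": lambda v: isinstance(v, int) and v > 0,
    "country": lambda v: isinstance(v, str) and len(v) == 2,  # ISO-2 code
    "amount": lambda v: isinstance(v, (int, float)) and v >= 0,
}

def check_row(row):
    # return the names of the rules this record fails (empty list = clean)
    return [field for field, ok in RULES.items() if not ok(row.get(field))]

rows = [
    {"user_id": 1, "country": "IL", "amount": 9.5},
    {"user_id": -3, "country": "Israel", "amount": 2.0},
]
failures = {i: check_row(r) for i, r in enumerate(rows) if check_row(r)}
print(failures)
```

A monitoring process would run such checks on every batch and alert when the failure rate crosses a threshold.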
Requirements:
3 years of experience as a Data Engineer or a related role
Experience with big data technologies such as Hadoop, Spark, or Elasticsearch.
Proven experience in designing, building, and maintaining data pipelines and infrastructure
Service in Unit 8200 or another technology unit - an advantage.
This position is open to all candidates.
 
Job ID: 8608128
10/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data & Machine Learning Engineer to operate at the intersection of data platform engineering and machine learning enablement. This role is responsible for building scalable, efficient, and reliable data systems while enabling Data Science and Analytics teams to develop and deploy ML-driven features.

You will take ownership of the data and ML infrastructure layer, ensuring that pipelines, storage models, and compute usage are optimized, while also shaping how data workflows and ML solutions are designed across the organization.


Responsibilities
Data Platform & Infrastructure

Design, build, and maintain scalable data pipelines and storage systems supporting analytics and ML use cases
Ensure compute and cost efficiency across pipelines, storage models, and processing workflows
Own and improve data orchestration, transformation, and serving layers (e.g., Spark, DBT, streaming/batch systems)
Build and maintain shared infrastructure components, including:
IO managers and data access abstractions
Integrations with DBT, Spark, and other data frameworks
Internal tooling to improve developer productivity and reliability
ML Enablement & Collaboration

Partner closely with Data Science to design and productionize ML solutions for new features and research initiatives
Translate experimental models into robust, scalable production systems
Support feature engineering, training pipelines, and inference workflows
Help define best practices for ML lifecycle management (training, validation, deployment, monitoring)
Data Quality, Governance & Best Practices

Enforce best practices for building and maintaining data processes across Data Analyst and Data Science teams
Define standards for:
Data modeling and transformations
Pipeline reliability and observability
Testing, versioning, and documentation
Improve data quality, consistency, and discoverability across the organization
Performance & Reliability

Optimize systems for performance, scalability, and cost efficiency
Monitor and troubleshoot data pipelines and ML systems in production
Implement observability (logging, metrics, alerting) across data workflows
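The "observability (logging, metrics, alerting)" bullet above can be sketched with a tiny timing context manager that logs step durations and collects them into a metrics dict; a real system would ship those metrics to a monitoring backend. Step names are illustrative, not from the posting.

```python
import logging
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")
metrics = {}  # step name -> last observed duration in seconds

@contextmanager
def timed(step):
    # measure one pipeline step, record the duration, and log it
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        metrics[step] = elapsed
        log.info("step=%s duration=%.4fs", step, elapsed)

with timed("extract"):
    rows = list(range(1000))
with timed("transform"):
    total = sum(rows)
print(total, sorted(metrics))
```

Alerting is then a rule over the collected metrics (e.g. page when a step's duration or failure count exceeds a bound).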
Requirements:
Strong programming skills in Python (or similar language)
Proven experience building and maintaining production-grade data pipelines
Hands-on experience with data processing frameworks (e.g., Spark or similar)
Familiarity with DBT or modern data transformation workflows
Experience working with cloud environments (AWS, GCP, or Azure)
Solid understanding of data modeling, distributed systems, and ETL/ELT patterns
This position is open to all candidates.
 
Job ID: 8604541
09/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Research Infra Engineer
The Dream Job
As a Research Infra Engineer, you will build and operate the shared platforms that power our cyber research: data ingestion, connectivity to internal/external systems, scalable analysis environments, and self-serve tools that allow the team to move faster.
You'll partner closely with CyberAI researchers to translate research needs into reliable, secure, cloud-deployed capabilities used across the group. Your goals are to reduce research toil, improve reproducibility and code quality, and accelerate the path from prototype to shared capability.
The Dream-Maker Responsibilities
Design, implement, and iterate on internal platforms that support research workflows (e.g., data ingestion, enrichment, indexing, search, labeling, evaluation harnesses, experiment tooling).
Develop durable pipelines and connectors to bring in and normalize research data sources.
Create reusable libraries, templates, CLIs, and services that enable researchers to run analyses and experiments safely and repeatably.
Own deployments, reliability, observability, access control, and cost/performance of the research stack so it's usable by all researchers.
Work closely with CyberAI researchers on the development of next-generation artificial cyber researchers and AI-driven analysis capabilities.
Requirements:
5+ years of experience building and operating production systems (platform engineering, data engineering, infra, or backend engineering).
Strong software engineering fundamentals (clean architecture, testing, CI/CD, code review, documentation).
Hands-on experience with cloud infrastructure and modern deployment patterns (containers, orchestration, serverless and/or Kubernetes; infrastructure-as-code such as Terraform is a plus).
Experience designing data pipelines and service integrations
Ability to work closely with researchers: turn ambiguous needs into clear requirements, make pragmatic tradeoffs, ship incrementally, and support adoption.
Familiarity with cybersecurity research workflows such as Threat Hunting, Malware Research, CTI, and more.
This position is open to all candidates.
 
Job ID: 8603720
09/04/2026
Location: Tel Aviv-Yafo
We are looking for a Senior Solutions Architect, Public Sector.
As a Solutions Architect at AWS, you'll build technical relationships with customers of all sizes and operate as their trusted advisor, ensuring they get the most out of the cloud at every stage of their journey.
You'll manage the overall technical relationship between AWS and our customers, making recommendations on security, cost, performance, reliability, and operational efficiency to accelerate their challenging projects.
Internally, you will be the voice of the customer, sharing their needs and wants to inform the roadmap of AWS features.
Key job responsibilities:
In this role, your creativity will link technology to tangible solutions, with the opportunity to define or invent cloud-native reference architectures for a variety of use cases.
You will participate in the creation and sharing of best practices, technical content and new reference architectures (e.g. white papers, code samples, blog posts) and evangelize and educate about AWS technology (e.g. through workshops, user groups, meetups, public speaking, online videos or conferences).
If you can educate AWS customers about the art of the possible, while challenging the impossible, come build the future with us.
This role is within the Israel organization and you would be working with Public Sector customers.
Requirements:
- 10+ years of demonstrated experience in one or more of the following areas: Cloud Architecture, Systems Design, Software Development, Infrastructure Architecture, Data Engineering, DevOps, Generative AI.
- Technical Degree (Computer Science, Maths, Engineering or equivalent) and/or relevant tech experience.
- Fluent written and verbal communication skills in English & Hebrew.
- A passion for technology and for learning.
Preferred Qualifications:
- Experience working in a customer-facing role or a role which involved public speaking
- Experience designing, building, refactoring or operating large scale and impactful IT systems - either on premises or in the cloud
- Knowledge of a modern programming language (Python, JavaScript, Go, .Net, Java, etc.) and/or scripting, Infrastructure as Code etc.
- In-depth working knowledge in a technology domain such as distributed internet-scale web or mobile applications, DevOps, Serverless, Big Data, Analytics, Machine Learning, Generative AI, enterprise workloads (SAP, VMware, Windows etc.), high-performance databases (SQL and/or NoSQL), complex networking implementations, highly secured workloads etc.
- Knowledge of presentations and whiteboarding skills with a high degree of comfort speaking with internal and external executives, IT management, and developers.
This position is open to all candidates.
 
Job ID: 8603305
Location: Tel Aviv-Yafo
Job Type: Full Time
Realize your potential by joining the leading performance-driven advertising company!
As a Senior Algo Data Engineer in the Infra group, you'll play a vital role in developing, enhancing, and maintaining highly scalable Machine-Learning infrastructures and tools.
How you'll make an impact:
As a Senior Algo Data Engineer, you'll bring value by:
Develop, enhance and maintain highly scalable Machine-Learning infrastructures and tools, including CI/CD, monitoring and alerting and more
Have end to end ownership: Design, develop, deploy, measure and maintain our machine learning platform, ensuring high availability, high scalability and efficient resource utilization
Identify and evaluate new technologies to improve performance, maintainability, and reliability of our machine learning systems
Work in tandem with the engineering-focused and algorithm-focused teams in order to improve our platform and optimize performance
Optimize machine learning systems to scale and utilize modern compute environments (e.g. distributed clusters, CPU and GPU) and continuously seek potential optimization opportunities.
Build and maintain tools for automation, deployment, monitoring, and operations.
Troubleshoot issues in our development, production and test environments
Influence directly on the way billions of people discover the internet.
Requirements:
To thrive in this role, youll need:
Experience developing large scale systems. Experience with filesystems, server architectures, distributed systems, SQL and No-SQL. Experience with Spark and Airflow / other orchestration platforms is a big plus.
Highly skilled in software engineering methods. 5+ years experience.
Passion for ML engineering and for creating and improving platforms
Experience designing and supporting ML pipelines and models in production environments
Excellent coding skills - in Java & Python
Experience with TensorFlow - a big plus
Possess strong problem solving and critical thinking skills
BSc in Computer Science or related field.
Proven ability to work effectively and independently across multiple teams and beyond organizational boundaries
Deep understanding of Computer Science fundamentals: object-oriented design, data structures, systems and application programming, and multithreaded programming
Strong communication skills to be able to present insights and ideas, and excellent English, required to communicate with our global teams.
Bonus points if you have:
Experience in leading Algorithms projects or teams.
Experience in developing models using deep learning techniques and tools
Experience in developing software within a distributed computation framework.
This position is open to all candidates.
 
Job ID: 8603230
Location: Tel Aviv-Yafo
Job Type: Full Time
Realize your potential by joining the leading performance-driven advertising company!
As a Staff Algo Data Engineer in the Infra group, you'll play a vital role in developing, enhancing, and maintaining highly scalable Machine-Learning infrastructures and tools.
How you'll make an impact:
As a Staff Algo Data Engineer, you'll bring value by:
Develop, enhance and maintain highly scalable Machine-Learning infrastructures and tools, including CI/CD, monitoring and alerting and more
Have end to end ownership: Design, develop, deploy, measure and maintain our machine learning platform, ensuring high availability, high scalability and efficient resource utilization
Identify and evaluate new technologies to improve performance, maintainability, and reliability of our machine learning systems
Work in tandem with the engineering-focused and algorithm-focused teams in order to improve our platform and optimize performance
Optimize machine learning systems to scale and utilize modern compute environments (e.g. distributed clusters, CPU and GPU) and continuously seek potential optimization opportunities.
Build and maintain tools for automation, deployment, monitoring, and operations.
Troubleshoot issues in our development, production and test environments
Influence directly on the way billions of people discover the internet.
Requirements:
To thrive in this role, youll need:
Experience developing large scale systems. Experience with filesystems, server architectures, distributed systems, SQL and No-SQL. Experience with Spark and Airflow / other orchestration platforms is a big plus.
Highly skilled in software engineering methods. 5+ years experience.
Passion for ML engineering and for creating and improving platforms
Experience designing and supporting ML pipelines and models in production environments
Excellent coding skills - in Java & Python
Experience with TensorFlow - a big plus
Possess strong problem solving and critical thinking skills
BSc in Computer Science or related field.
Proven ability to work effectively and independently across multiple teams and beyond organizational boundaries
Deep understanding of Computer Science fundamentals: object-oriented design, data structures, systems and application programming, and multithreaded programming
Strong communication skills to be able to present insights and ideas, and excellent English, required to communicate with our global teams.
Bonus points if you have:
Experience in leading Algorithms projects or teams.
Experience in developing models using deep learning techniques and tools
Experience in developing software within a distributed computation framework.
This position is open to all candidates.
 
Job ID: 8603133
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Senior Data Engineer to own high-impact data products from architecture through production deployment, monitoring, and continuous improvement. This isn't a pure infrastructure role - you'll combine strong engineering with product thinking, operational excellence, and awareness of data quality, cost, and business impact.
You will design, implement, test, deploy, and maintain production-grade data products - pipelines, transformation layers, data quality and reliability systems - using tools like DBT (on Spark) and Databricks. You'll apply best practices in Python and SQL to build scalable and maintainable data transformations, and leverage technologies like LLMs and GenAI to create innovative solutions for real business problems.
This role is ideal for someone who wants technical leadership responsibilities in an AI-first engineering culture - we use LLMs, GenAI, and AI-native development tools as core parts of our daily workflow.
Key Responsibilities:
Act as a technical leader within the team - raise engineering standards, drive strong architectural choices, and improve how we build
Own data products end-to-end: design, development, deployment, monitoring, and iteration
Work closely with senior leadership to translate strategic goals into scalable data solutions
Develop and maintain production ETL/ELT pipelines using DBT (on Spark) and orchestrated workflows in Databricks
Build monitoring, alerting, and testing pipelines to ensure reliability and performance in production
Evaluate and introduce new technologies - including AI-native development tools - and integrate the ones that create real impact
Collaborate with customers and external data providers - gathering requirements and making product decisions.
Mentor team members through code reviews, pairing, and knowledge sharing
Requirements:
4+ years of experience in production-level data engineering or similar roles
Deep proficiency in SQL and Python
Proven track record of owning and scaling production-grade data pipelines, including versioning, testing, and monitoring
Strong understanding of data modeling, normalization/denormalization trade-offs, and data quality management
Experience with the modern data stack: DBT, Databricks, Spark, Delta Lake
Strong analytical skills - ability to design and evaluate data-driven hypotheses and KPIs
Product and business awareness - you think about the impact of what you build, not just the implementation
Preferred Qualifications:
Experience with GenAI and LLM applications - particularly extracting structure from unstructured data at scale
Experience working with external data sources and vendors
Familiarity with Unity Catalog and data governance at scale
Familiarity with Terraform or similar infrastructure-as-code tools
Experience with cost optimization on Databricks (DBU analysis, cluster policies)
Familiarity with cloud-native platforms (AWS preferred)
BSc/BA in Computer Science, Engineering, or a related technical field - or graduation from a top-tier IDF tech unit
This position is open to all candidates.
 
Job ID: 8602225
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We are looking for a Senior Data Engineer to join our Platform group in the Data Infrastructure team.
You'll work hands-on to design and deliver data pipelines, distributed storage, and streaming services that keep our data platform performant and reliable. As a senior individual contributor you will lead complex projects within the team, raise the bar on engineering best practices, and mentor mid-level engineers - while collaborating closely with product, DevOps, and analytics stakeholders.
About the Platform group
The Platform Group accelerates our productivity by providing developers with tools, frameworks, and infrastructure services. We design, build, and maintain critical production systems, ensuring our platform can scale reliably. We also introduce new engineering capabilities to enhance our development process. As part of this group, you'll help shape the technical foundation that supports our entire engineering team.
Job responsibilities:
Code & ship production-grade services, pipelines and data models that meet performance, reliability and security goals
Lead design and delivery of team-level projects - from RFC through rollout and operational hand-off
Improve system observability, testing and incident response processes for the data stack
Partner with Staff Engineers and Tech Leads on architecture reviews and platform-wide standards
Mentor junior and mid-level engineers, fostering a culture of quality, ownership and continuous improvement
Stay current with evolving data-engineering tools and bring pragmatic innovations into the team.
Requirements:
5+ years of hands-on experience in backend or data engineering, including 2+ years at a senior level delivering production systems
Strong coding skills in Python, Kotlin, Java or Scala with emphasis on clean, testable, production-ready code
Proven track record designing, building and operating distributed data pipelines and storage (batch or streaming)
Deep experience with relational databases (PostgreSQL preferred) and working knowledge of at least one NoSQL or columnar/analytical store (e.g. SingleStore, ClickHouse, Redshift, BigQuery)
Solid hands-on experience with event-streaming platforms such as Apache Kafka
Familiarity with data-orchestration frameworks such as Airflow
Comfortable with modern CI/CD, observability and infrastructure-as-code practices in a cloud environment (AWS, GCP or Azure)
Ability to break down complex problems, communicate trade-offs clearly, and collaborate effectively with engineers and product partners
Bonus Skills:
Experience building data governance or security/compliance-aware data platforms
Familiarity with Kubernetes, Docker, and infrastructure-as-code tools
Experience with data quality frameworks, lineage, or metadata tooling
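The data-orchestration requirement above (Airflow and similar frameworks) rests on one core idea: resolving task dependencies into a valid execution order. A conceptual sketch using the stdlib, with hypothetical task names; this is not the Airflow API, just the scheduling principle underneath it:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline DAG: task -> set of tasks it depends on.
deps = {
    "load_warehouse": {"transform"},
    "transform": {"extract_orders", "extract_users"},
    "extract_orders": set(),
    "extract_users": set(),
}

# static_order() yields tasks only after all of their dependencies.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

An orchestrator adds scheduling, retries, and parallelism on top, but any run it produces must respect exactly this ordering.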
This position is open to all candidates.
 
Job ID: 8602206
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a Senior Data Infra Engineer. You will be responsible for designing and building all data and ML pipelines, data tools, and cloud infrastructure required to transform massive, fragmented data into a format that supports the company's processes and standards. Your work directly empowers business stakeholders to gain comprehensive visibility, automate key processes, and drive strategic impact across the company.

Responsibilities

Design and Build Data Infrastructure: Design, plan, and build all aspects of the platform's data, ML pipelines, and supporting infrastructure.
Optimize Cloud Data Lake: Build and optimize an AWS-based Data Lake using cloud architecture best practices for partitioning, metadata management, and security to support enterprise-scale operations.
Lead Project Delivery: Lead end-to-end data projects from initial infrastructure design through to production monitoring and optimization.
Solve Integration Challenges: Implement optimal ETL/ELT patterns and query techniques to solve challenging data integration problems sourced from structured and unstructured data.
Requirements:
Experience: 5+ years of hands-on experience designing and maintaining big data pipelines in on-premises or hybrid cloud SaaS environments.
Programming & Databases: Proficiency in one or more programming languages (Python, Scala, Java, or Go) and expertise in both SQL and NoSQL databases.
Engineering Practice: Proven experience with software engineering best practices, including testing, code reviews, design documentation, and CI/CD.
AWS Experience: Experience developing data pipelines and maintaining data lakes, specifically on AWS.
Streaming & Orchestration: Familiarity with Kafka and workflow orchestration tools like Airflow.
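The streaming side of the requirement above can be illustrated independently of any broker: a sliding-window aggregation over an ordered event stream. The timestamps and keys below are toy data; a real consumer would read the events from Kafka.

```python
from collections import deque

WINDOW = 60  # window length in seconds (illustrative choice)

def windowed_counts(events):
    # events: iterable of (timestamp_seconds, key), assumed time-ordered.
    # After each event, yield how many events fall in the last WINDOW seconds.
    window = deque()
    for ts, _key in events:
        window.append(ts)
        # evict events that have fallen out of the window
        while window and window[0] <= ts - WINDOW:
            window.popleft()
        yield ts, len(window)

stream = [(0, "a"), (30, "b"), (45, "a"), (90, "c"), (200, "a")]
print(list(windowed_counts(stream)))
```

Stream processors like Kafka Streams or Flink apply the same eviction logic, plus handling for out-of-order and late events.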
This position is open to all candidates.
 
Job ID: 8601803
05/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a Senior Data Engineer to join our R&D organization as part of a backend-oriented team, responsible for building and scaling the core data infrastructure.
In this role, you will design and develop data pipelines that stream and process data directly from production systems. You will play a key role in shaping our data platform, building robust, scalable infrastructure and pipelines using modern technologies, and working hands-on with both new components and existing systems.
Responsibilities
Collaborate as a strong team player within a dynamic, cross-functional environment
Design, develop, and maintain scalable data models, Lakehouse architectures, pipelines, and ETL processes
Enhance data workflows to support efficient real-time and batch processing
Work closely with cross-functional teams to understand data requirements and deliver impactful solutions
Stay up to date with the latest data engineering technologies and best practices, continuously improving our data platform.
Requirements:
6+ years of development experience, including at least 3 years as a Data Engineer
Experience with distributed computing frameworks (e.g., Spark, Flink, EMR) - Must
Experience with Iceberg / Delta Lake / Databricks or similar technologies
Experience designing scalable data storage solutions over object storage (structured and semi-structured data)
Hands-on experience building data pipelines and ingestion systems (batch and/or streaming)
Strong communication skills and ability to work with multiple stakeholders across teams
Proficiency in Python and PySpark - Advantage
Experience in streaming systems and real-time data processing - Advantage
Background in backend engineering or experience working closely with backend teams - Advantage
Experience optimizing data processing performance for cost and efficiency - Advantage.
This position is open to all candidates.
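The date-partitioned storage layouts this listing refers to can be sketched in a few lines of plain Python: records are bucketed under key prefixes like `events/dt=2026-04-05/`, mirroring the partitioning schemes used by lakehouse table formats over object storage. Paths and field names here are illustrative, not a real bucket layout.

```python
# Toy sketch of a date-partitioned object-storage layout: group records
# into key prefixes by event date, as lakehouse table formats do.
from collections import defaultdict

def partition_by_date(records):
    """Group records into object-store-style key prefixes by event date."""
    partitions = defaultdict(list)
    for rec in records:
        dt = rec["ts"][:10]  # "YYYY-MM-DD" prefix of an ISO timestamp
        partitions["events/dt={}/".format(dt)].append(rec)
    return dict(partitions)

records = [
    {"ts": "2026-04-05T10:00:00", "v": 1},
    {"ts": "2026-04-05T11:30:00", "v": 2},
    {"ts": "2026-04-06T09:15:00", "v": 3},
]
layout = partition_by_date(records)
```

Partitioning by a low-cardinality key such as date lets query engines prune whole prefixes instead of scanning every object, which is the main cost lever in object-storage-backed analytics.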
 
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an ML Engineer / MLOps Tech Lead to promote machine learning engineering excellence: someone who is passionate about building scalable, high-quality data products and processes, while ensuring production systems maintain strong real-time performance observability.
You will focus on designing and maintaining the core infrastructure that empowers the Machine Learning Engineers working within Data Science product teams. You'll collaborate closely with stakeholders across data science, product, and engineering, playing a pivotal role in driving the business by architecting and enabling the infrastructure for machine learning model development, serving, and lifecycle management, the foundation of our product.
Responsibilities:
Collaborate with product, data science, and engineering teams to solve complex problems, identify trends, and create opportunities through robust ML infrastructure.
End-to-end ML delivery - enabling model performance development, training, validation, testing, and version control.
Build and support monitoring and observability tools - dashboards, alerts, and performance tracking of models in production.
Lead architecture projects such as: Feature Store, Vector / Graph Databases.
Data wrangling - supporting and enabling data requirements for research, training, validation, and testing.
Drive engineering best practices including code and model versioning, CI/CD pipelines, rollout strategies, and disaster recovery procedures.
Requirements:
3+ years of experience as an ML Engineer / MLOps
5+ years of experience as a software engineer or data engineer
2+ years of experience in a technical leadership role (leading engineers or data scientists)
Strong programming skills in Python and SQL
Hands-on experience with MPP frameworks such as Spark, Flink, Ray, Dask or equivalent
Strong analytical and critical thinking skills
This position is open to all candidates.
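The model-versioning practice this listing names can be illustrated with a minimal in-memory registry sketch. A real setup would use a registry such as MLflow; this toy, with hypothetical model names and metrics, only shows the register/promote flow.

```python
# Minimal in-memory sketch of model version control: register new model
# versions with their metrics, then promote a chosen version to production.

class ModelRegistry:
    def __init__(self):
        self._versions = {}    # model name -> list of version dicts
        self._production = {}  # model name -> version number in production

    def register(self, name, metrics):
        """Record a new version of a model and return its version number."""
        version = len(self._versions.setdefault(name, [])) + 1
        self._versions[name].append({"version": version, "metrics": metrics})
        return version

    def promote(self, name, version):
        """Mark a previously registered version as the production one."""
        assert 1 <= version <= len(self._versions.get(name, []))
        self._production[name] = version

    def production_version(self, name):
        return self._production.get(name)

registry = ModelRegistry()
v1 = registry.register("churn", {"auc": 0.81})
v2 = registry.register("churn", {"auc": 0.84})
registry.promote("churn", v2)
```

Separating "registered" from "promoted" is the key idea: new versions can be validated and A/B tested before any traffic is pointed at them, and rollback is just promoting an older version.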
 
05/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
At our company, Israel's first fully digital bank, we're building the technology that powers a new era of intuitive, transparent, people-first banking. As our platform grows and our services expand, we're looking for a team leader to lead a data engineering team.
The team is responsible for ingesting, processing, and serving bronze-level data to other analytics teams in the organization.
In this role, you will define the data architecture of our unified Data Lakehouse, own all ETL and streaming operations, define governance processes and implement tooling, and ensure our data is always fresh, accurate, and complete.
Your Day-to-Day
Mentor and lead a team of 4-6 data engineers.
Own sensitive data operations, including monitoring, on-call and production operations.
Build and manage all ETL and streaming operations related to the Data Lakehouse.
Develop, maintain, and optimize robust data pipelines and integrations across multiple systems.
Build a platform for other analytic teams to build data products on top of the Data Lakehouse.
Define and implement quality and governance processes to ensure data is fresh, accurate, and complete.
Collaborate with engineering, BI, and business teams to translate requirements into scalable data solutions
Work hands-on with data orchestration, transformation, and cloud infrastructure (OCI, Snowflake, AWS)
Support implementation of best practices in data management and observability.
Requirements:
4+ years of direct management of a team of 4-6 data engineers.
8+ years in data engineering, data architecture, or similar roles.
Experience in financial systems or fintech (big advantage)
Deep hands-on experience with PostgreSQL, Snowflake, Oracle, etc.
Strong experience with ETL/ELT, data integration, Kafka, and other streaming solutions (must).
Proven SQL and Python skills (must).
Experience with cloud environments
Strong ownership, problem-solving ability, and communication skills
Comfort working in a fast-paced, multi-system environment.
This position is open to all candidates.
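The data-freshness guarantee this listing emphasizes is typically enforced by automated quality checks. Here is a toy sketch of one such check in plain Python, verifying that a table's latest date partition is within an allowed lag; the threshold and partition format are hypothetical.

```python
# Toy data-quality check of the kind a governance process might run:
# verify the newest date partition of a table is within the allowed lag.
from datetime import datetime, timedelta

def freshness_check(latest_partition: str, now: datetime,
                    max_lag: timedelta = timedelta(days=1)) -> bool:
    """Return True if the newest partition is within the allowed lag."""
    partition_date = datetime.strptime(latest_partition, "%Y-%m-%d")
    return now - partition_date <= max_lag

now = datetime(2026, 4, 5, 18, 0)
ok = freshness_check("2026-04-05", now)     # 18 hours old: within one day
stale = freshness_check("2026-04-01", now)  # several days behind
```

In practice a check like this would run on a schedule (e.g. from an orchestrator) and page the on-call engineer when it fails, which is how "fresh, accurate, and complete" becomes an enforced property rather than an aspiration.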
 