Confidential Company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We're growing and looking to hire a Data Platform Engineer who embodies our core values: People First, Customer Obsession, Strive for Excellence, and Integrity.
As a Data Platform Engineer, Your Impact Will Be:
Platform Modernization: Shape and develop the core technology of our Data Platform, moving from design to implementation to support AI research and business intelligence.
Scalable Pipeline Engineering: Design and build complex, distributed data pipelines that ingest, manipulate, and move data from a wide variety of sources at massive scale.
Software Excellence: Build and maintain high-performance microservices and APIs that interface with our data ecosystem, ensuring low-latency and high reliability.
Cross-Functional Collaboration: Partner closely with Software Engineering, AI Research, and Product Management to transform abstract data needs into real-world customer value.
Data Reliability: Implement best practices for data management, quality assurance, and security within our ecosystem.
ML & AI Enablement: Integrate and manage robust data flows that directly power our ML and LLM pipelines in production environments.
Requirements:
3+ years as a Data or Software Engineer in cloud-native, high-scale environments
Strong proficiency in Python and microservices architecture
Expertise in Spark and distributed data processing at scale
Deep knowledge of the AWS ecosystem (S3, Glue, RDS, EMR)
Strong SQL skills and experience optimizing complex query executions
Proven experience with data modeling for high-scale, large-volume data sets
Experience with modern orchestration tools like Airflow
Experience with Real-time streaming (Kafka, Kinesis, or Flink)
Experience supporting ML/LLM pipelines and Vector Databases (Advantage)
Experience with Databricks or SingleStore (Advantage)
This position is open to all candidates.
 
Job ID: 8602071
31/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
The Falcon Cloud Security team is looking for a hands-on Engineering Manager / Team Lead to lead the development of Agentic Workflows - a transformative initiative aimed at automating complex security operations using AI-native agents. You will lead a team of talented engineers while remaining deeply technical, helping architect and build autonomous systems that don't just alert but actively reason, investigate, and remediate security risks across multi-cloud environments.
As a player-coach, you will help shape the "brain" of our cloud security platform, guiding both the people and the technology that leverages large-scale data and AI-driven logic to help customers discover misconfigurations, prioritize risks, and automate defensive actions at scale.

What You'll Do:

Lead & Grow a Team: Manage, mentor, and develop a team of backend engineers, fostering a high-trust, high-performance culture. Conduct regular 1:1s, support career growth, and drive hiring to scale the team.

Stay Hands-On: Remain an active technical contributor - designing, reviewing, and writing production-quality code alongside your team. Lead by example and maintain a strong engineering presence.

Design & Architect: Drive backend engineering efforts to build autonomous agentic frameworks, guiding the team from rapid prototypes to large-scale production applications.

Develop Core Logic: Contribute to and oversee the development of decision-making engines and workflows that allow security agents to interact with cloud APIs (AWS, Azure, GCP) and internal data streams.

Data Integration: Guide the development of high-performance data integrations and streaming services (Kafka) to feed real-time security data into agentic models for continuous reasoning.

Scale Systems: Architect and oversee distributed systems capable of processing billions of security events to provide actionable posture intelligence and automated remediation.

Drive Cross-Functional Collaboration: Partner with Product, Design, and peer engineering teams in a "startup-like" environment to define and deliver new platform capabilities with speed and quality.

Raise the Bar: Champion engineering excellence, new technologies, and best practices across the team and broader engineering organization.
Requirements:
Experience: 8+ years of backend engineering experience, with at least 2 years in an engineering leadership role (Tech Lead, Staff Engineer, or Engineering Manager). Strong proficiency in Go and Python.

People Leadership: Demonstrated ability to hire, mentor, and develop engineers at varying levels. Comfortable balancing technical contribution with team management responsibilities.

AI/LLM Experience: Prior experience building workflows powered by LLMs, RAG, or autonomous agents. Strong understanding of agent frameworks and key components including model integration, tool calling patterns, and Model Context Protocol (MCP).

Cloud Expertise: Deep knowledge of at least two major cloud providers (AWS, Azure, or GCP).

Systems Engineering: Strong understanding of distributed systems, scalability, concurrency, and resilient architecture.

Data Proficiency: Solid experience with data modeling, RDBMS (SQL), and distributed caching solutions like Redis.

Education: BS/MS in Computer Science or equivalent professional experience in data structures and algorithms.
This position is open to all candidates.
 
Job ID: 8598636
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Software Engineer (Data Platforms) to join the Users & Integrations team within our company's Intelligence Group. This role is built for an experienced engineer who thrives on solving complex backend challenges and scaling data pipelines.
In this role, you will take ownership of crucial user data integrations and architect the sophisticated matching logic that powers our platform, from data ingestion and transformation to delivery. You will work extensively with large-scale data pipelines, translate complex algorithms into high-performance production code, and tackle massive scalability challenges to enhance the data experience for our company's customers.
Where does this role fit in our vision?
Every role at our company is designed with a clear purpose. Data is at the heart of everything we do. The Intelligence Group is responsible for shaping the experience of hundreds of thousands of users who rely on our data daily.
The Users Team is the engine behind our company's data connectivity, handling massive-scale user data integrations and engineering complex entity-matching logic. By translating millions of data signals and advanced algorithms into high-performance pipelines, we ensure users receive highly accurate, tailored data - optimizing their overall experience while driving the core KPIs of our Intelligence Group.
What will you be responsible for?
Designing, building, and maintaining robust, scalable ETL/ELT data pipelines and integration solutions within our company's Databricks-based environment.
Implementing and optimizing algorithms for data processing and entity resolution with a strong emphasis on delivering high-quality, high-throughput data.
Deploying data infrastructure leveraging technologies like Spark, Kafka, and Airflow to tackle complex data challenges and enhance business operations.
Designing innovative data solutions that support millions of data points, at high performance and massive scale.
Requirements:
What we look for:
3+ years of software engineering experience building scalable backend systems.
Experience scaling big data pipelines, complex data integrations, and robust data infrastructure.
Expertise in big data technologies, including Spark (or Databricks), Kafka (or other real-time streaming tools), and workflow orchestrators like Airflow.
Experience using GenAI tools for software development (such as Cursor, Claude Code, Codex, etc.).
A strong builder mindset, with experience turning ideas into working solutions.
Algorithmic experience, including developing and optimizing machine learning models and implementing advanced data algorithms.
Experience working with cloud ecosystems, preferably AWS (S3, Glue, EMR, Redshift, Athena) or comparable cloud environments (Azure/GCP).
Expertise in extracting, ingesting, and transforming large datasets efficiently.
A passion for sharing knowledge, fostering a supportive engineering culture, and engaging in collaborative problem-solving with your peers.
Bonus Points:
Hands-on experience working with Vector Databases and embedding techniques, with a focus on search, recommendations, and personalization.
This position is open to all candidates.
 
Job ID: 8595416
23/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
As a Senior/Principal/Senior Principal Software Engineer at Cortex Cloud, you will serve as a primary technical architect and visionary for our core communication infrastructure. This role focuses on the critical server-side backbone that facilitates high-scale bidirectional communication between our cloud services and client-side applications.
You will be responsible for the architectural integrity of systems that receive massive data inflows from the field and reliably broadcast intelligence back to millions of endpoints. This is a high-impact leadership role requiring a blend of deep technical mastery in distributed systems and the ability to influence technical strategy across the organization.
Key Responsibilities
Architectural Strategy & Vision: Define and drive the multi-year technical roadmap for our server-side communication infrastructure, ensuring the platform remains resilient and performant under extreme load.
High-Scale Communication Infrastructure: Lead the design and implementation of backend systems optimized for receiving high-scale data from client-side apps and distributing data back to a vast ecosystem of endpoints.
Technical Leadership & Influence: Act as a force multiplier by providing technical guidance to multiple engineering teams, aligning them on shared protocols, architectural standards, and communication patterns.
Drive Engineering Excellence: Champion a culture of high engineering rigor, focusing on deep observability, low-latency data distribution, and runtime stability for mission-critical production environments.
Cross-Functional Collaboration: Partner with Product Management, Infrastructure, and Client-Side Engineering teams to evaluate technical trade-offs, mitigate risks, and ensure seamless end-to-end data flow.
Innovation & Prototyping: Spearhead the evaluation of emerging technologies and lead "proof of concept" initiatives for next-generation transport layers and messaging paradigms.
Technical Mentorship: Invest in the growth of Senior and Staff engineers through deep-dive design reviews, code audits, and hands-on pair programming on the most critical paths.
Strategic Customer Engagement: Support the business by leading technical deep dives with strategic customers, translating complex architectural concepts into actionable confidence for our partners.
Requirements:
5+/8+/10+ years of software engineering experience with a proven track record of delivering robust, high-scale distributed systems.
Server-Side Mastery: Deep expertise in systems-level programming and modern backend languages (e.g., Go, Python) with a focus on building scalable server-side infrastructure.
Cloud Native Foundations: Extensive experience designing, deploying, and operating large-scale architectures on GCP, AWS, or Azure, including strong knowledge of Kubernetes, Docker and Helm.
Bidirectional Data Flow: Proven ability to architect systems that handle high-concurrency data ingestion and wide-scale data distribution/broadcasting.
Systemic Problem Solving: Demonstrated experience in profiling, debugging, and optimizing complex distributed systems to eliminate performance bottlenecks.
Influence & Communication: Exceptional ability to communicate complex technical concepts to both highly technical peers and non-technical stakeholders.
Preferred Qualifications
Data Platform Expertise: Familiarity with architecting solutions using large-scale data platforms such as BigQuery, MongoDB, and MySQL.
High-Performance Caching: Hands-on experience with in-memory data stores and acceleration technologies like Redis, Dragonfly, or similar high-throughput caching layers.
Event-Driven Architecture: Deep understanding of Event-Driven systems and asynchronous messaging patterns to ensure decoupled and scalable service interactions.
Modern Tooling: Experience leveraging AI-assisted development tools (Gemini, Claude) to optimize the SDLC and automate complex testing/generation tasks.
Advanced Degree: B.
This position is open to all candidates.
 
Job ID: 8588269
06/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
Technical Leadership & Architecture: Drive data infrastructure strategy and establish standardized patterns for AI/ML workloads, with direct influence on architectural decisions across data and engineering teams
DataOps Excellence: Create seamless developer experience through self-service capabilities while significantly improving data engineer productivity and pipeline reliability metrics
Cross-Functional Innovation: Lead collaboration between DevOps, Data Engineering, and ML Operations teams to unify our approach to infrastructure as code and orchestration platforms
Technology Breadth & Growth: Work across the full DataOps spectrum from pipeline orchestration to AI/ML infrastructure, with clear advancement opportunities as a senior infrastructure engineer
Strategic Business Impact: Build scalable analytics capabilities that provide direct line of sight between your infrastructure work and business outcomes through reliable, cutting-edge data solutions
What you'll be doing
Design Data-Native Cloud Solutions - Design and implement scalable data infrastructure across multiple environments using Kubernetes, orchestration platforms, and IaC to power our AI, ML, and analytics ecosystem
Define DataOps Technical Strategy - Shape the technical vision and roadmap for our data infrastructure capabilities, aligning DevOps, Data Engineering, and ML teams around common patterns and practices
Accelerate Data Engineer Experience - Spearhead improvements to data pipeline deployment, monitoring tools, and self-service capabilities that empower data teams to deliver insights faster with higher reliability
Engineer Robust Data Platforms - Build and optimize infrastructure that supports diverse data workloads from real-time streaming to batch processing, ensuring performance and cost-effectiveness for critical analytics systems
Drive DataOps Excellence - Collaborate with engineering leaders across data teams, champion modern infrastructure practices, and mentor team members to elevate how we build, deploy, and operate data systems at scale
Requirements:
3+ years of hands-on DevOps experience building, shipping, and operating production systems.
Coding proficiency in at least one language (e.g., Python or TypeScript); able to build production-grade automation and tools.
Cloud platforms: deep experience with AWS, GCP, or Azure (core services, networking, IAM).
Kubernetes: strong end-to-end understanding of Kubernetes as a system (routing/networking, scaling, security, observability, upgrades), with proven experience integrating data-centric components (e.g., Kafka, RDS, BigQuery, Aerospike).
Infrastructure as Code: design and implement infrastructure automation using tools such as Terraform, Pulumi, or CloudFormation (modular code, reusable patterns, pipeline integration).
GitOps & CI/CD: practical experience implementing pipelines and advanced delivery using tools such as Argo CD / Argo Rollouts, GitHub Actions, or similar.
Observability: metrics, logs, and traces; actionable alerting and SLOs using tools such as Prometheus, Grafana, ELK/EFK, OpenTelemetry, or similar.
This position is open to all candidates.
 
Job ID: 8569980
Confidential Company
Location: Tel Aviv-Yafo
Job Type: Full Time
As our Senior Data Engineer, you will handle Big Data in real time and become the owner and gatekeeper of all data and data flows within the company. You will bring data ingenuity and technological excellence while gaining a deep business understanding.

This is an amazing opportunity to join a multi-disciplinary A-team while working in a fast-paced, modern cloud, data-oriented environment.

What You'll Do:

Implement robust, reliable and scalable data pipelines and data architecture.
Own and develop DWH (Petabytes of data!).
Own the entire data development process, including business knowledge, methodology, quality assurance, and monitoring.
Collaborate with cross-functional teams to define, design, and ship new features.
Continuously discover, evaluate, and implement new technologies to maximize development efficiency.
Develop tailor-made solutions as part of our data pipelines.
Lead complex Big Data projects and build data platforms from scratch.
Work on high-scale, real-time, business-critical data stores and data endpoints.
Implement data profiling to identify anomalies and maintain data integrity.
Work in a results-driven, high-paced, rewarding environment.
Requirements:
5+ years of experience as a Data Engineer.
Good working knowledge of Google Cloud Platform (GCP).
Experience using AI-assisted tools or automation to improve data development, monitoring, or debugging workflows.
Strong experience in SQL and Python.
Experience with high-volume ETL/ELT tools and methodologies - both batch and real-time processing.
Understanding of how to build robust and reliable solutions.
The ability to understand the business impact of the data engineering tasks.
Hands-on experience in writing complex queries and optimizing them for performance.
Able to understand complex data and data flows.
Have a strong analytical mind with proven problem-solving abilities.
Ability to manage multiple tasks and drive them to completion.
Independent and proactive.
This position is open to all candidates.
 
Job ID: 8576179
05/03/2026
Confidential Company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a hands-on Architect to design, build, and evolve the core of a next-generation AI cybersecurity platform. This is not an ivory-tower role; it is for a builder at heart.
You will write code, build functional prototypes, and own critical services from inception to production.
This role demands a unique blend of deep, code-level execution with a broad, system-wide architectural vision that spans our entire data, cloud, and AI stack.
Responsibilities:
Rapidly code and build functional proofs of concept (PoCs) and prototypes to explore and validate new architectures, data platforms, data processing frameworks, and GenAI capabilities.
Validate technical feasibility, scalability, and business impact through working software, not just diagrams or documents.
Partner closely with engineering, product and data science teams to translate emerging technologies into production-ready, scalable systems.
Lead architecture design reviews, proactively identify scalability and performance bottlenecks, and embed security principles into the design process from day one.
Serve as a technical leader and mentor, elevating the team's skills through pair programming, in-depth code reviews, and deep-dive sessions on system design and software craftsmanship.
Requirements:
Seniority: 7+ years in a senior technical leadership role (e.g., Principal Architect, Staff/Lead Engineer). Your track record as a 'builder' and system designer is more important than your title.
System-Wide View: Proven experience architecting, building, and operating large-scale, distributed, and data-intensive systems (cybersecurity preferred).
Hands-on Cloud & Platform: Deep, hands-on expertise with at least one major cloud (AWS, GCP, or Azure) and mastery of Kubernetes and container orchestration.
Data Systems Master: Strong background in modern data platforms, real-time event processing, data pipelines and Complex Event Processing.
Pragmatic AI/GenAI: Proven experience integrating GenAI systems (e.g., RAG, agentic frameworks, LLM orchestration) into a production environment.
Programming & Tools:
Strong programming proficiency in Python, Go, or Java.
Hands-on experience with Kafka, Flink, OpenSearch, Apache Iceberg, and related ecosystem tools.
Familiarity with modern CI/CD workflows, Kubernetes, observability stacks, and infrastructure-as-code practices.
This position is open to all candidates.
 
Job ID: 8568876
29/03/2026
Confidential Company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Data Engineer to help build and scale our analytics data infrastructure. In this role, you will work closely with analysts and business stakeholders to design reliable data models and support the development of a centralized semantic layer used across the company.

You will play a key role in improving the structure, reliability, and usability of our data stack. This includes building and maintaining dbt models, supporting data pipelines, and ensuring analysts have access to clean, well-documented, and consistent data.

This role is ideal for someone who enjoys working at the intersection of data engineering and analytics - translating business needs into scalable data models and enabling teams to move faster with trusted data.

Responsibilities

Design and implement data models that support analytics across key business domains such as GTM, CX, and Finance
Build and maintain transformation workflows using dbt
Work closely with analysts to translate business questions into scalable and reusable data models
Help define and implement a structured semantic layer that enables consistent metrics across the company
Improve the reliability and clarity of the analytics data stack by centralizing logic into well-designed data models
Support the ingestion and transformation of data from various sources using tools such as Fivetran and Airbyte
Contribute to improving data quality, monitoring, and documentation practices
Help establish best practices for analytics modeling and data usage across teams
Actively leverage AI tools (e.g. Cursor, LLM-based assistants) to improve development speed, data modeling, and data workflows
Requirements:
2-4 years of experience in BI/data engineering, analytics engineering, or a similar role.
Strong SQL skills and experience working with modern data warehouses.
Experience building and maintaining data models for analytics.
Familiarity with modern data stack tools such as dbt, Snowflake/BigQuery, Fivetran/Rivery, or similar.
Experience collaborating with analysts or BI teams.
Familiarity with Python for data-related tasks (scripting, automation, or tooling).
Hands-on experience using AI tools (e.g. Cursor, LLMs) as part of day-to-day development workflows.
Strong problem-solving skills and the ability to work in evolving data environments.
Clear communicator who can work effectively with both technical and non-technical stakeholders.
This position is open to all candidates.
 
Job ID: 8595374
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a Senior Software Engineer (Data Engineering). You will be responsible for designing and building all data and ML pipelines, data tools, and cloud infrastructure required to transform massive, fragmented data into a format that supports processes and standards. Your work directly empowers business stakeholders to gain comprehensive visibility, automate key processes, and drive strategic impact across the company.

Responsibilities

Design and Build Data Infrastructure: Design, plan, and build all aspects of the platform's data, ML pipelines, and supporting infrastructure.
Optimize Cloud Data Lake: Build and optimize an AWS-based Data Lake using cloud architecture best practices for partitioning, metadata management, and security to support enterprise-scale operations.
Lead Project Delivery: Lead end-to-end data projects from initial infrastructure design through to production monitoring and optimization.
Solve Integration Challenges: Implement optimal ETL/ELT patterns and query techniques to solve challenging data integration problems sourced from structured and unstructured data.
Requirements:
Experience: 5+ years of hands-on experience designing and maintaining big data pipelines in on-premises or hybrid cloud SaaS environments.
Programming & Databases: Proficiency in one or more programming languages (Python, Scala, Java, or Go) and expertise in both SQL and NoSQL databases.
Engineering Practice: Proven experience with software engineering best practices, including testing, code reviews, design documentation, and CI/CD.
AWS Experience: Experience developing data pipelines and maintaining data lakes, specifically on AWS.
Streaming & Orchestration: Familiarity with Kafka and workflow orchestration tools like Airflow.
This position is open to all candidates.
 
Job ID: 8601800
05/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We're seeking talented data engineers to join our rapidly growing team, which includes senior software and data engineers. Together, we drive our data platform from acquisition and processing to enrichment, delivering valuable business insights. Join us in designing and maintaining robust data pipelines, making an impact in our collaborative and innovative workplace.

Responsibilities
Design, implement, and optimize scalable data pipelines for efficient processing and analysis.
Build and maintain robust data acquisition systems to collect, process, and store data from diverse sources.
Take part in developing agentic capabilities.
Mentor, support, and guide junior team members, sharing expertise and fostering their professional development.
Collaborate with DevOps, Data Science, and Product teams to understand needs and deliver tailored data solutions.
Monitor data pipelines and production environments proactively to detect and resolve issues promptly.
Apply and be responsible for best practices in data security, integrity, and performance across all systems.
Requirements:
6+ years of experience in data or backend engineering, with strong proficiency in Python for data tasks.
Proven track record in designing, developing, and deploying complex data applications.
Hands-on experience with orchestration and processing tools such as Apache Airflow and Apache Spark.
Deep experience with public cloud platforms, and expertise in cloud-based data storage and processing.
Experience working with Docker and Kubernetes.
Hands-on experience with CI tools such as GitHub Actions.
Bachelor's degree in Computer Science, Information Technology, or a related field - or equivalent practical experience.
Ability to perform under pressure and make strategic prioritization decisions in fast-paced environments.
Excellent communication skills and a strong team player, capable of working cross-functionally.
This position is open to all candidates.
 
Job ID: 8569768