MLOps Engineer job listings

Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Backend Engineer to join our MLOps team and help build the infrastructure that powers cutting-edge AI models.
In this role, you'll manage the end-to-end MLOps lifecycle, designing event-driven systems that handle massive video data and moving compute-intensive generative models from research to production.
You'll collaborate closely with AI researchers and video-processing teams to ensure our AI services are scalable, reliable, and performant.
Requirements:
6+ years of production-grade Python development experience.
Strong background in distributed systems: you've built and debugged complex, event-driven architectures (e.g., Kafka, microservices).
Expertise in Data Engineering at scale: Experience building massive data pipelines and architecting Data Lakes (S3) with compute layers like Athena for large-scale analysis.
Deep understanding of the MLOps lifecycle: Experience taking models from training to deployment, including versioning and performance monitoring.
Experience with containerized environments, microservices, and Kubernetes.
Experience with workflow management frameworks (Temporal, Airflow) and asynchronous programming.
Experience with cloud platforms (AWS preferred) and model-serving frameworks (Triton, VLLM/SGLang, Ray Serve).
A love for exploring new tech and the drive to implement modern frameworks that move the needle.
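The event-driven, asynchronous style this listing asks for can be sketched in a few lines. The toy producer/consumer pipeline below uses asyncio's in-process queue as a stand-in for a Kafka topic; the function names and event shape are hypothetical, not part of any listed stack:

```python
import asyncio

async def producer(queue, events):
    # Publish raw events (a stand-in for messages arriving on a Kafka topic).
    for event in events:
        await queue.put(event)
    await queue.put(None)  # sentinel: no more events

async def consumer(queue, results):
    # Consume and process events as they arrive, like a serving worker would.
    while True:
        event = await queue.get()
        if event is None:
            break
        results.append({"id": event["id"], "status": "processed"})

async def run_pipeline(events):
    # Wire producer and consumer together over a bounded queue.
    queue = asyncio.Queue(maxsize=100)
    results = []
    await asyncio.gather(producer(queue, events), consumer(queue, results))
    return results
```

In a real deployment the queue would be replaced by a Kafka consumer group, and backpressure would come from the broker rather than the queue's maxsize.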
This position is open to all candidates.
 
22/03/2026
Job Type: Full Time
We're looking for a Senior AI/MLOps Engineer to join a group that specializes in Security and Networking, and specifically in ML, AI, and agent development. As a Senior AI/MLOps Engineer, you'll build and maintain the infrastructure, tools, and processes necessary to support the AI lifecycle in a production environment. You will collaborate closely with data scientists, software engineers, security architects, and DevOps teams to ensure smooth deployment, modeling, and optimization of AI models. This role involves creative problem solving alongside engineering teams and is pivotal to the continued success of AI networking security.

What you'll be doing:

Developing, improving and optimizing scalable infrastructure for handling and deploying security and networking AI models and agents in production, ensuring high availability, scalability, reproducibility, and performance.

Optimizing AI models and agents for performance, scalability, and resource utilization, considering factors such as latency, efficiency, and cost.

Monitoring and deploying agentic systems, LLMs, and ML models in production.

Designing and implementing frameworks/pipelines for AI training, inference, and experimentation.

Collaborating closely with data scientists, security architects, and software engineers to operationalize and deploy AI models and agents, including packaging and integration with existing systems. Participating in developing and reviewing code, design documents, use case reviews, and test plan reviews.

Collaborating with DevOps teams to integrate pipelines and workflows into the CI/CD process, ensuring flawless deployments and rollbacks.

Building and maintaining monitoring and alerting systems to proactively identify and resolve issues relating to quality, performance and infrastructure.

Implementing access controls, authentication mechanisms, and encryption standards for AI models and data.

Documenting guidelines and standard operating procedures for MLOps/AI processes, and sharing knowledge with the wider team.

Developing proofs of concept for new features.
Requirements:
What we need to see:

BSc/MSc in CS/CE or related field (or equivalent experience).

Strong background in AI, with experience deploying and monitoring AI/ML models, LLMs, and agents to production systems at scale, including distributed and multi-node environments (at least 5 years of experience).

Proficiency in programming languages such as Python, Java, or Scala, along with experience in using ML/AI frameworks and libraries (e.g. TensorFlow, PyTorch).

Proficiency in microservices architecture, container orchestration, cloud platforms, and scalable infrastructure for training and inference workloads.

Knowledge of inference optimization techniques.

Understanding of build infrastructure and CI/CD tools and practices (e.g. GitLab, GitHub Actions, Jenkins).

You are detail-oriented and care deeply about robust, well-tested, high-performance code in production environments.

You are proactive, take full ownership of your deliverables, have a can-do attitude, and bring excellent communication and collaboration skills, working effectively in cross-functional teams.

Ways to stand out from the crowd:

Knowledge of network protocols and Linux internals.

Security and networking background, with knowledge of security protocols, network architectures, firewalls, intrusion detection systems, and other relevant security and networking concepts.

Experience deploying and optimizing generative models and agents.

Knowledge of network security principles and practices.
This position is open to all candidates.
 
6 days ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a Senior Backend Engineer - Data Platform to join our expanding team and play a crucial role in designing, building, and maintaining robust and scalable data pipelines and infrastructure. In this role, you will directly enable data-driven decision-making and support the development and deployment of AI/ML products that power Health.

You'll collaborate closely with engineering, product, and data science teams to ensure our data systems are high-quality, resilient, and scalable as we grow. As a Senior Backend Engineer on our Data Platform team, you will drive efforts to deliver reliable, efficient, and consistent data services across the organization. You will also help enable the rapid development and deployment of advanced features, insights, and AI-driven capabilities that improve outcomes for clinicians and clients.

Who are you?
You are a seasoned backend or data engineer with experience working on production-grade ML/AI-powered products. You thrive in fast-paced, high-ownership environments and are passionate about building scalable and reliable systems. You understand the unique requirements of delivering AI/ML features in production, and you are comfortable working with modern technologies in the LLM/RAG ecosystem.
You pride yourself on delivering high-quality solutions quickly, without sacrificing design or reliability. You're known for your responsiveness, collaborative spirit, and service-oriented mindset, especially when you're on-call and the stakes are high.
How will you contribute?
Design, implement, and maintain scalable and reliable data pipelines and backend systems supporting both operational and analytical needs, with a focus on ML/AI product enablement.
Ensure data processing is optimized for speed, efficiency, and fault tolerance, enabling seamless integration with AI/ML workflows and reliable performance across all our Health products.
Monitor and improve uptime, reliability, and observability of our data infrastructure and pipelines.
Build and maintain systems to ensure data quality, consistency, and usability across the organization, enabling advanced analytics and AI solutions.
Work closely with product and engineering teams to deliver new features rapidly and with a high standard of technical excellence.
Drive innovation in how we build, measure, and optimize data features, backend services, and AI product integrations.
Participate in on-call rotations with a service-oriented approach and fast responsiveness.
Lead scalability efforts to support increasing data volumes, expanding AI/ML initiatives, and new product launches.
Requirements:
What qualifications and skills will help you to be successful?
At least 5 years of experience with Python in backend or data engineering roles, designing and operating large-scale data pipelines, backend services, and data infrastructure in production environments.
Hands-on experience working on ML/AI-powered products in production, with strong understanding of requirements for integrating data platforms with AI features.
Familiarity with modern LLM (Large Language Model) and RAG (Retrieval-Augmented Generation) technologies, and experience supporting their deployment or integration.
Familiar with or have worked with these technologies (or alternatives):
Data Processing & Streaming: Apache Spark, DBT, Airflow, Airbyte, Kafka
API Development: FastAPI, micro-service architecture, SFTP
Data Storage: Data Lakehouse architectures, Apache Iceberg, Vector Databases, RDS
ML/AI: ML/LLM libraries and frameworks (such as Gemini, Hugging Face, etc.)
Cloud Infrastructure: AWS stack (S3, Firehose, Lambda, Athena, etc.), Kubernetes (K8s)
Demonstrated ability to optimize performance and ensure high availability, scalability, and reliability of backend/data systems.
Strong foundation in best practices for data quality, governance, security, and observability.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an experienced and passionate Data Group Tech Lead, Staff Engineer to join our Data Platform group. As the Group's Tech Lead, you'll shape and implement the technical vision and architecture while staying hands-on across three specialized teams: Data Engineering Infra, Machine Learning Platform, and Data Warehouse Engineering, forming the backbone of our data ecosystem.
The group's mission is to build a state-of-the-art Data Platform that drives us toward becoming the most precise and efficient insurance company on the planet. By embracing Data Mesh principles, we create tools that empower teams to own their data while leveraging a robust, self-serve data infrastructure. This approach enables Data Scientists, Analysts, Backend Engineers, and other stakeholders to seamlessly access, analyze, and innovate with reliable, well-modeled, and queryable data, at scale.
We believe three things matter for every role: drive to push through challenges, efficiency that keeps standards high while moving fast, and adaptability that lets you pivot with data and AI insights. These aren't buzzwords, they're how we actually work.
Our AI-first approach isn't just a tagline either. We're building the future of insurance with AI at the center, and we need people who are genuinely excited to learn and grow alongside these tools.
In this role you'll:
Technically lead the group by shaping the architecture, guiding design decisions, and ensuring the technical excellence of the Data Platforms three teams
Design and implement data solutions that address both applicative needs and data analysis requirements, creating scalable and efficient access to actionable insights
Drive initiatives in Data Engineering Infra, including building robust ingestion layers, managing streaming ETLs, and guaranteeing data quality, compliance, and platform performance
Develop and maintain the Data Warehouse, integrating data from various sources for optimized querying, analysis, and persistence, supporting informed decision-making
Leverage data modeling and transformations to structure, cleanse, and integrate data, enabling efficient retrieval and strategic insights
Build and enhance the Machine Learning Platform, delivering infrastructure and tools that streamline the work of Data Scientists, enabling them to focus on developing models while benefiting from automation for production deployment, maintenance, and improvements. Support cutting-edge use cases like feature stores, real-time models, point-in-time (PIT) data retrieval, and telematics-based solutions
Collaborate closely with other Staff Engineers across the organization to align on cross-organizational initiatives and technical strategies
Work seamlessly with Data Engineers, Data Scientists, Analysts, Backend Engineers, and Product Managers to deliver impactful solutions
Share knowledge, mentor team members, and champion engineering standards and technical excellence across the organization.
Requirements:
8+ years of experience in data-related roles such as Data Engineer, Data Infrastructure Engineer, BI Engineer, or Machine Learning Platform Engineer, with significant experience in at least two of these areas
A B.Sc. in Computer Science or a related technical field (or equivalent experience)
Extensive expertise in designing and implementing Data Lakes and Data Warehouses, including strong skills in data modeling and building scalable storage solutions
Proven experience in building large-scale data infrastructures, including both batch processing and streaming pipelines
A deep understanding of Machine Learning infrastructure, including tools and frameworks that enable Data Scientists to efficiently develop, deploy, and maintain models in production, an advantage
Proficiency in Python, Pulumi/Terraform, Apache Spark, AWS, Kubernetes (K8s), and Kafka for building scalable, reliable, and high-performing data solutions
Strong knowledge of databases, including SQL (schema design, query optimization) and NoSQL.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
This is a great opportunity to be part of one of the fastest-growing infrastructure companies in history, an organization at the center of the hurricane created by the revolution in artificial intelligence.
We are seeking an experienced Solutions Data Engineer who possesses both technical depth and strong interpersonal skills to partner with internal and external teams to develop scalable, flexible, and cutting-edge solutions. Solutions Engineers collaborate with operations and business development to help craft solutions to customer business problems.
A Solutions Engineer works to balance various aspects of a project, from safety to design, researches advanced technology and best practices in the field, and seeks cost-effective solutions.
Job Description:
We're looking for a Solutions Engineer with deep experience in Big Data technologies, real-time data pipelines, and scalable infrastructure; someone who's been delivering critical systems under pressure and knows what it takes to bring complex data architectures to life. This isn't just about checking boxes on tech stacks; it's about solving real-world data problems, collaborating with smart people, and building robust, future-proof solutions.
In this role, you'll partner closely with engineering, product, and customers to design and deliver high-impact systems that move, transform, and serve data at scale. You'll help customers architect pipelines that are not only performant and cost-efficient but also easy to operate and evolve.
We want someone who's comfortable switching hats between low-level debugging, high-level architecture, and communicating clearly with stakeholders of all technical levels.
Key Responsibilities:
Build distributed data pipelines using technologies like Kafka, Spark (batch & streaming), Python, Trino, Airflow, and S3-compatible data lakes, designed for scale, modularity, and seamless integration across real-time and batch workloads.
Design, deploy, and troubleshoot hybrid cloud/on-prem environments using Terraform, Docker, Kubernetes, and CI/CD automation tools.
Implement event-driven and serverless workflows with precise control over latency, throughput, and fault tolerance trade-offs.
Create technical guides, architecture docs, and demo pipelines to support onboarding, evangelize best practices, and accelerate adoption across engineering, product, and customer-facing teams.
Integrate data validation, observability tools, and governance directly into the pipeline lifecycle.
Own end-to-end platform lifecycle: ingestion → transformation → storage (Parquet/ORC on S3) → compute layer (Trino/Spark).
Benchmark and tune storage backends (S3/NFS/SMB) and compute layers for throughput, latency, and scalability using production datasets.
Work cross-functionally with R&D to push performance limits across interactive, streaming, and ML-ready analytics workloads.
Requirements:
2-4 years in software / solution or infrastructure engineering, with 2-4 years focused on building / maintaining large-scale data pipelines / storage & database solutions.
Proficiency in Trino, Spark (Structured Streaming & batch) and solid working knowledge of Apache Kafka.
Coding background in Python (must-have); familiarity with Bash and scripting tools is a plus.
Deep understanding of data storage architectures including SQL, NoSQL, and HDFS.
Solid grasp of DevOps practices, including containerization (Docker), orchestration (Kubernetes), and infrastructure provisioning (Terraform).
Experience with distributed systems, stream processing, and event-driven architecture.
Hands-on familiarity with benchmarking and performance profiling for storage systems, databases, and analytics engines.
Excellent communication skills; you'll be expected to explain your thinking clearly, guide customer conversations, and collaborate across engineering and product teams.
This position is open to all candidates.
 
4 days ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are redefining cyber risk for SMBs with a cutting-edge Security SaaS product. We're looking for a Software Engineer to join our growing team and help us scale to the next level.

This is a hands-on role where you'll collaborate with product managers, data teams, architects, and fellow engineers to deliver innovative, high-impact solutions in a modern, cloud-native environment.

Responsibilities
Design and implement robust, high-throughput event-driven services and complex features in a cloud-based, microservices environment.
Own features end-to-end, from architectural design and estimation through deployment and continuous improvement.
Collaborate closely with cross-functional teams (Product, Data, Core Architecture) to ship high-quality solutions quickly.
Partner with the Product team to convert ambiguous requirements into concrete, well-defined technical specifications.
Maintain high standards of code quality, proactively troubleshoot issues, and ensure smooth operation of new and existing systems.
Provide technical leadership and mentorship, driving code quality standards and contributing to the team's roadmap.
Implement advanced monitoring, distributed tracing, and alerting for system health and incident response.
Requirements:
5+ years of hands-on backend development experience in Python (preferred), Java, or Go
Proficiency with backend frameworks (Flask/FastAPI, Spring Boot, etc.).
Strong experience with RESTful API design and development.
Deep expertise in event-driven architecture and infrastructures such as SNS/SQS, Kafka, Dapr.
Experience with microservices architecture.
Strong expertise in Relational Databases (PostgreSQL, MySQL, etc.)
Experience with NoSQL technologies (Redis, Elasticsearch).
Strong experience with cloud production environments (AWS preferred).
Experience with Docker and Kubernetes for deployment in production.
Excellent communication skills in Hebrew and English.
Knowledge of workflow orchestration (Orkes, Temporal) and of cybersecurity and networking is an advantage.
This position is open to all candidates.
 
7 days ago
Location: Tel Aviv-Yafo
Job Type: Full Time
As a Senior / Principal / Senior Principal Software Engineer at Cortex Cloud, you will serve as a primary technical architect and visionary for our core communication infrastructure. This role is focused on the critical server-side backbone that facilitates high-scale bidirectional communication between our cloud services and client-side applications.
You will be responsible for the architectural integrity of systems that receive massive data inflows from the field and reliably broadcast intelligence back to millions of endpoints. This is a high-impact leadership role requiring a blend of deep technical mastery in distributed systems and the ability to influence technical strategy across the organization.
Key Responsibilities
Architectural Strategy & Vision: Define and drive the multi-year technical roadmap for our server-side communication infrastructure, ensuring the platform remains resilient and performant under extreme load.
High-Scale Communication Infrastructure: Lead the design and implementation of backend systems optimized for receiving high-scale data from client-side apps and distributing data back to a vast ecosystem of endpoints.
Technical Leadership & Influence: Act as a force multiplier by providing technical guidance to multiple engineering teams, aligning them on shared protocols, architectural standards, and communication patterns.
Drive Engineering Excellence: Champion a culture of high engineering rigor, focusing on deep observability, low-latency data distribution, and runtime stability for mission-critical production environments.
Cross-Functional Collaboration: Partner with Product Management, Infrastructure, and Client-Side Engineering teams to evaluate technical trade-offs, mitigate risks, and ensure seamless end-to-end data flow.
Innovation & Prototyping: Spearhead the evaluation of emerging technologies and lead "proof of concept" initiatives for next-generation transport layers and messaging paradigms.
Technical Mentorship: Invest in the growth of Senior and Staff engineers through deep-dive design reviews, code audits, and hands-on pair programming on the most critical paths.
Strategic Customer Engagement: Support the business by leading technical deep dives with strategic customers, translating complex architectural concepts into actionable confidence for our partners.
Requirements:
5+ / 8+ / 10+ years of software engineering experience with a proven track record of delivering robust, high-scale distributed systems.
Server-Side Mastery: Deep expertise in systems-level programming and modern backend languages (e.g., Go, Python) with a focus on building scalable server-side infrastructure.
Cloud Native Foundations: Extensive experience designing, deploying, and operating large-scale architectures on GCP, AWS, or Azure, including strong knowledge of Kubernetes, Docker and Helm.
Bidirectional Data Flow: Proven ability to architect systems that handle high-concurrency data ingestion and wide-scale data distribution/broadcasting.
Systemic Problem Solving: Demonstrated experience in profiling, debugging, and optimizing complex distributed systems to eliminate performance bottlenecks.
Influence & Communication: Exceptional ability to communicate complex technical concepts to both highly technical peers and non-technical stakeholders.
Preferred Qualifications
Data Platform Expertise: Familiarity with architecting solutions using large-scale data platforms such as BigQuery, MongoDB, and MySQL.
High-Performance Caching: Hands-on experience with in-memory data stores and acceleration technologies like Redis, Dragonfly, or similar high-throughput caching layers.
Event-Driven Architecture: Deep understanding of Event-Driven systems and asynchronous messaging patterns to ensure decoupled and scalable service interactions.
Modern Tooling: Experience leveraging AI-assisted development tools (Gemini, Claude) to optimize the SDLC and automate complex testing/generation tasks.
Advanced Degree: B.
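The in-memory caching mentioned under preferred qualifications boils down to key/value storage with expiry. Here is a minimal TTL-cache sketch, a toy stand-in for Redis-style caching; the class, its API, and the injectable clock (used so expiry is testable) are all hypothetical:

```python
import time

class TTLCache:
    """Minimal in-memory cache with per-entry expiry (illustrative only)."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock      # injectable clock makes expiry deterministic in tests
        self._store = {}        # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._store[key] = (value, self.clock() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if self.clock() >= expires_at:
            del self._store[key]    # lazy eviction on read
            return default
        return value
```

A production cache layer would add size bounds, eviction policy, and shared access across processes, which is exactly what Redis or Dragonfly provide.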
This position is open to all candidates.
 
6 days ago
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Software Engineer to join our Decision Engineering team. The group is responsible for the real-time, low-latency infrastructure that powers our fraud decisions and external APIs.
Our systems process billions of requests every day, ensuring high availability, security, and performance at global scale.
In this role, you'll work on core backend components such as our decision engine, ingestion and enrichment pipelines, schema management systems, and self-serve API platform. The software you build will power critical business decisions and directly serve some of the world's largest merchants.
This is a high-impact, high-ownership position for an engineer who thrives on solving complex distributed systems challenges, cares deeply about production-grade quality, and wants to shape the foundation of our decisioning platform.
What you'll be doing:
Design, build, and scale backend systems that power our real-time decisioning and APIs.
Own projects end-to-end - from design and implementation to production rollout and monitoring.
Ensure systems are low-latency, fault-tolerant, and high-throughput across distributed environments.
Enhance observability, reliability, and developer experience through strong operational and tooling practices.
Collaborate with Product, analysts, data scientists, and infrastructure teams to drive innovation across our decision ecosystem.
Participate in technical discussions and customer interactions, providing expertise and clear communication when supporting enterprise integrations.
Requirements:
5+ years of experience building backend systems in large-scale production environments
Strong programming skills in Python, Java, Kotlin, or Node.js
Hands-on experience with cloud-native technologies (AWS, Kubernetes, Docker)
Proven ability to design and maintain high-scale distributed systems
Strong sense of ownership, autonomy, and accountability
Excellent communication skills, with the ability to explain complex systems clearly to both technical and non-technical audiences - including direct collaboration with customers worldwide
It'd be cool if you also have:
Experience with API Gateway architectures, schema/versioning strategies, or platformization efforts
Familiarity with real-time data processing frameworks (e.g., Flink, Storm) and resilience patterns
Background working alongside data science or machine learning teams
Contributions to developer platforms, infrastructure services, or internal tools improving engineering velocity.
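The resilience patterns mentioned above often start with retry plus exponential backoff. Below is a minimal sketch assuming a simple transient-failure model where any exception is retryable; the function names, delays, and injectable sleep are hypothetical choices for illustration:

```python
import time

def retry_with_backoff(fn, max_attempts=4, base_delay=0.1, sleep=time.sleep):
    """Call fn, retrying on exception with delays of 0.1s, 0.2s, 0.4s, ... (sketch)."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise               # out of attempts: surface the last failure
            sleep(base_delay * (2 ** attempt))  # exponential backoff between tries
```

Real-world variants add jitter to avoid thundering herds and retry only on errors known to be transient.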
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an experienced Data Engineer to join our marketing team and take end-to-end ownership of our data platform and production data pipelines. In this role, you will be responsible for building robust, scalable, and observable data systems that power analytics, reporting, and downstream business use cases. You will work deeply hands-on with data infrastructure, modeling, and orchestration, and act as a key technical partner to the Marketing, Sales, Product, Business, and Finance teams.
This role suits someone who enjoys working close to the metal, designing systems that scale, and solving ambiguous data problems in a dynamic startup environment. You will play a critical role in shaping how data flows through the company, setting engineering standards, and ensuring data is trustworthy, performant, and ready for growth.
What You'll Do:
Design, build, and maintain scalable, reliable data pipelines and data warehouse architectures to support analytics and business intelligence needs.
Own the end-to-end ETL/ELT processes - ingesting data from internal and external sources, transforming it, and making it analytics-ready.
Model and optimize data structures (fact tables, dimensions, semantic layers) to support performant querying and reporting.
Ensure high standards of data quality, integrity, observability, and reliability across all data assets.
Partner closely with Analytics, Product, Marketing, and Finance teams to understand data requirements and deliver robust data solutions.
Implement monitoring, alerting, and testing frameworks to proactively identify data issues.
Optimize warehouse performance and cost efficiency (query optimization, partitioning, clustering, etc.).
Identify gaps in data collection and work with engineering teams to improve instrumentation and data availability.
Support experimentation and analytics use cases by enabling clean, trustworthy datasets for A/B testing and analysis.
Document data models, pipelines, and best practices to support scale and knowledge sharing.
Requirements:
Bachelor's or Master's degree in Computer Science, Data Engineering, Software Engineering, or a related technical field.
3-5 years of hands-on experience as a Data Engineer, preferably in a SaaS or technology-driven environment.
Strong experience designing and maintaining data warehouses (e.g., Snowflake, BigQuery, Redshift).
Proven expertise with ETL/ELT tools and frameworks (e.g., Airflow, dbt, Talend, SSIS, Informatica, or similar).
Advanced SQL skills and solid proficiency in Python (or similar languages) for data processing and orchestration.
Strong understanding of data modeling, warehousing best practices, and analytics engineering concepts.
Experience integrating data from business systems such as Salesforce, HubSpot, or other SaaS platforms.
Familiarity with SaaS metrics and business concepts (ARR, churn, LTV, CAC) - from a data modeling perspective.
Experience supporting BI tools and analytics consumers (Tableau, Looker, Power BI, etc.).
Strong problem-solving skills, attention to detail, and a passion for building reliable data foundations.
Excellent communication skills and the ability to collaborate across technical and non-technical teams.
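The SaaS metrics named in the requirements have simple first-order definitions that a data model usually encodes. The sketch below uses common simplifications (churn = customers lost / customers at month start; LTV = monthly fee / churn) and is illustrative only; the function name and formulas are assumptions, not a prescribed model:

```python
def saas_metrics(monthly_fee, customers_start, customers_lost):
    # Simplified, single-period definitions of core SaaS metrics.
    churn = customers_lost / customers_start                      # monthly churn rate
    arr = monthly_fee * (customers_start - customers_lost) * 12   # annual recurring revenue
    ltv = monthly_fee / churn if churn else float("inf")          # lifetime value per customer
    return {"churn": churn, "arr": arr, "ltv": ltv}
```

In a warehouse, each input would come from a fact table (subscription events) joined to a customer dimension, and the formulas would live in the semantic layer rather than application code.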
This position is open to all candidates.
 
חברה חסויה
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Data Engineer II - GenAI
20718
Leadership/Team Quote:
This opening is for the Content Intelligence team within the Marketplace AI department.
The Content Intelligence team is at the forefront of Generative AI innovation, driving solutions for travel-related chatbots, text generation and summarization applications, Q&A systems, and free-text search. Beyond this, the team is building a cutting-edge platform that processes millions of images and textual inputs daily, enriching them with ML capabilities. These enriched datasets power downstream applications that help personalize the customer experience; for example, selecting and displaying the most relevant images and reviews as customers plan and book their next vacation.
Role Description:
As a Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects: ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.
You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.
Key Job Responsibilities and Duties:
Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
Dealing with massive textual sources to train GenAI foundation models.
Solving issues with data and data pipelines, prioritizing based on customer impact.
End-to-end ownership of data quality in our core datasets and data pipelines.
Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
Providing tools that improve Data Quality company-wide, specifically for ML scientists.
Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
Acting as an intermediary for problems, with both technical and non-technical audiences.
Promote and drive impactful, innovative engineering solutions.
Advance technical, behavioral, and interpersonal competence via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation.
Collaborate with multidisciplinary teams: work with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions. Provide technical guidance and mentorship to junior team members.
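The responsibility of preparing massive textual sources for GenAI training can be sketched as a tiny corpus-cleaning step; the function name, thresholds, and filters here are illustrative assumptions, not the team's actual pipeline.

```python
import hashlib

# Minimal sketch of a text-corpus cleaning step of the kind used when
# preparing fine-tuning data: exact-duplicate removal plus a simple
# length filter. Real pipelines (e.g. in PySpark) add language
# detection, near-duplicate detection, and PII scrubbing on top.
def clean_corpus(docs, min_chars=20, max_chars=100_000):
    seen = set()
    kept = []
    for text in docs:
        text = text.strip()
        if not (min_chars <= len(text) <= max_chars):
            continue  # drop documents that are too short or too long
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen:
            continue  # exact duplicate of a document already kept
        seen.add(digest)
        kept.append(text)
    return kept
```

Hashing instead of storing full texts keeps the dedup set small enough to scale to large corpora.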
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 3 years of experience as a Data Engineer or in a similar role, with a consistent record of successfully delivering ML/Data solutions.
You have built production data pipelines in the cloud, setting up data lake and serverless solutions; you have hands-on experience with schema design and data modeling, and with working alongside ML scientists and ML engineers to deliver production-level ML solutions.
You have experience designing systems end-to-end and knowledge of core concepts (load balancing, databases, caching, NoSQL, etc.).
Strong programming skills in languages such as Python and Java.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.
Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems.
Experience with Data Warehousing and ETL/ELT pipelines.
Experience in data processing for large-scale language models like GPT, BERT, or similar architectures - an advantage.
This position is open to all candidates.
 
8560110

05/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Full Stack Engineer for the Subscriptions team.
As a Senior Full Stack Engineer on this team, you'll work across the stack: designing robust backend services and building polished, data-driven UIs that help customers and internal stakeholders get value quickly.
What You'll Do:
Lead technical design and implementation of backend projects, owning end-to-end features from architecture to deployment
Design, implement, and maintain scalable microservices and event-driven systems that power our subscription and billing engine
Design and maintain asynchronous flows using queues to handle high-concurrency usage metering and analytics
Contribute to the frontend domain by building and maintaining UIs that expose subscription data to customers
Collaborate with Product, Data, and Customer Success teams to support product-led growth (PLG) and business goals
Take ownership of production systems, including monitoring, troubleshooting, and reliability improvements
Write clean, testable, and maintainable code, and participate in thoughtful code reviews
Simplify integrations and workflows to reduce Mean Time to Value (MTTV) for internal stakeholders and customers
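The queue-based usage-metering flow described above can be reduced to a producer/consumer pattern. The posting's stack is Node.js/TypeScript; this sketch shows the pattern with an in-process asyncio queue for brevity, and all names are illustrative. A production system would use a durable broker.

```python
import asyncio
from collections import Counter

# Pattern sketch only: meter usage events per customer through a
# queue, decoupling ingestion from aggregation. In production the
# queue would be a durable broker (e.g. SQS or Kafka), not in-process.
async def meter_usage(events):
    queue: asyncio.Queue = asyncio.Queue()
    totals: Counter = Counter()

    async def producer():
        for customer_id, units in events:
            await queue.put((customer_id, units))
        await queue.put(None)  # sentinel: no more events

    async def consumer():
        while True:
            item = await queue.get()
            if item is None:
                break
            customer_id, units = item
            totals[customer_id] += units  # aggregate per customer

    await asyncio.gather(producer(), consumer())
    return dict(totals)
```

Decoupling via a queue is what lets metering absorb high-concurrency bursts without blocking the billing path.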
Requirements:
5+ years of experience as a Full Stack Engineer working on production systems
Strong backend engineering experience with:
Node.js and TypeScript
Microservices architecture
RESTful APIs and event-driven systems
Working knowledge of React and TypeScript, with the ability to contribute to frontend components and data-driven UIs, including:
Building complex, data-heavy, performant web applications
Translating UX and product requirements into clean component architectures
Experience with relational databases, including data modeling, query optimization, and troubleshooting
Proven experience deploying and operating services in cloud environments (AWS, GCP, or Azure)
Experience with containerized workloads (Docker, Kubernetes, or similar)
Hands-on experience with monitoring, logging, and alerting tools (e.g., Datadog, Coralogix, Grafana)
Strong understanding of system design, distributed systems, scalability, and reliability
Ability to debug complex production issues across the stack and drive them to resolution
Comfortable working in a fast-paced environment with multiple priorities
Experience with asynchronous processing and queues
Nice to Have:
Experience with NestJS
Experience with billing, payments, or subscription platforms
Experience building internal platforms or tooling for engineering teams
Background in analytics, BI, or customer-facing data products
This position is open to all candidates.
 
8569158