MLOps Engineer - AI Infra Group
15/01/2026
This position was marked by the employer as no longer active.
Company name confidential
Location: Tel Aviv-Yafo
Job Type: Full Time
Similar positions that may interest you:
25/02/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
It starts with you - an engineer driven to build resilient, automated infrastructure that enables teams to move fast with confidence. You care about operational excellence, developer experience, and reliability at scale. You'll architect and operate the compute and networking infrastructure that powers our AI platform - from CI/CD pipelines to Kubernetes clusters to observability systems - across cloud and on-prem environments.
If you want to build infrastructure that powers mission-critical AI systems at national scale, join our company's mission - this role is for you.
The Responsibilities
Architect and operate Kubernetes-based infrastructure across AWS and on-prem environments, ensuring high availability, security, and performance.
Design and maintain CI/CD pipelines for application and service deployments with automated testing, security scanning, and rollback capabilities.
Drive infrastructure-as-code practices for compute and networking - building reproducible, auditable, and version-controlled infrastructure.
Own reliability and incident response - establish SLOs, build alerting systems, lead incident resolution, and drive post-incident improvements.
Enable AI-native operations - support agentic deployment pipelines, self-healing infrastructure, and secure sandboxing for model experimentation.
Build and maintain observability systems - metrics, logging, tracing, and dashboards that provide visibility into system health.
Optimize infrastructure cost and performance - right-size resources, implement auto-scaling, and identify efficiency opportunities.
Shape infrastructure characteristics that support data freshness, correctness, and low-latency pathways for AI training/inference, retrieval, and agentic workflows.
Contribute paved-road tooling - reusable CI/CD patterns for services, IaC modules for compute and networking, and runbooks - that streamline delivery across teams.
Collaborate with Engineering, Data Platform, Data Engineering, Security, Product, AI/ML, Data Science, and Analytics to anticipate and meet cross-functional needs.
Requirements:
6+ years in DevOps, SRE, or infrastructure engineering, with hands-on experience building and operating infrastructure at scale.
Container orchestration - Kubernetes (EKS, on-prem), Helm, service mesh technologies like Istio or Linkerd
Cloud & infrastructure - AWS services (EC2, EKS, S3, IAM, VPC, Lambda), hybrid cloud architectures, on-prem infrastructure
Infrastructure-as-Code - Terraform, Pulumi, or CloudFormation; GitOps practices with ArgoCD or Flux
CI/CD - GitHub Actions, GitLab CI, Jenkins, or similar; artifact management, deployment strategies (blue-green, canary)
Observability - Prometheus, Grafana, ELK/OpenSearch, Datadog, or similar; distributed tracing, log aggregation, alerting
Security & compliance - Secrets management (Vault, AWS Secrets Manager), network security, compliance automation
Scripting & automation - Python, Bash, Go; configuration management with Ansible or similar.
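The SLO and alerting responsibilities above can be sketched concretely. Below is a minimal multi-window burn-rate check; the 14.4x factor and 99.9% SLO are common rules of thumb, not values from this posting:

```python
# Minimal multi-window burn-rate check for SLO-based alerting (illustrative).
# Error ratios are assumed to be precomputed from metrics (e.g. Prometheus).

def burn_rate(error_ratio: float, slo: float) -> float:
    """How fast the error budget is being consumed (1.0 = exactly on budget)."""
    budget = 1.0 - slo
    return error_ratio / budget

def should_page(err_1h: float, err_6h: float, slo: float = 0.999) -> bool:
    """Page only when both a short and a long window burn fast (reduces noise)."""
    return burn_rate(err_1h, slo) > 14.4 and burn_rate(err_6h, slo) > 14.4

if __name__ == "__main__":
    # 2% errors over the last hour that also persisted over six hours: page.
    print(should_page(0.02, 0.016))  # True
    # A brief spike that did not persist: no page.
    print(should_page(0.02, 0.001))  # False
```

Requiring both windows to burn avoids paging on short, self-healing blips while still catching sustained budget consumption.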
This position is open to all candidates.
 
17/02/2026
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Required Machine learning operations engineer
Your Mission:
As an MLOps Engineer, your mission is to design, build, and operate the platforms that power our machine learning and generative AI products, spanning real-time use cases such as large-scale fraud scoring and support for MCP and agentic workflows. You'll create reliable CI/CD for models and agents, robust data/feature pipelines, secure model serving, and comprehensive observability. You will also support our agentic AI ecosystem and Model Context Protocol (MCP) services so that models can safely use tools, data, and actions.
You will partner closely with Data Scientists, Data/Platform Engineers, Product, and SRE to ensure every model, from classic ML to LLM/RAG agents, moves from prototype to production with strong reliability, governance, cost efficiency, and measurable business impact.
Responsibilities:
Operate & Develop ML/LLM platforms on Kubernetes + cloud (Azure; AWS/GCP ok) with Docker, Terraform, and other relevant tools
Manage object storage, GPUs, and autoscaling for training & low-latency model serving
Manage cloud environment, networking, service mesh, secrets, and policies to meet PCI-DSS and data-residency requirements
Build end-to-end CI/CD for models/agents/MCP tooling (versioning, tests, approvals)
Deliver real-time fraud/risk scoring & agent signals under strict latency SLOs.
Maintain MCP servers/clients: tool/resource definitions, versioning, quotas, isolation, access controls
Integrate agents with microservices, event streams, and rule engines; provide SLAs, tracing, and on-call runbooks
Measure operational metrics of ML/LLM (latency, throughput, cost, tokens, tool success, safety events)
Enforce governance: RBAC/ABAC, row-level security, encryption, PII/secrets management, audit trails.
Partner with DS on packaging (wheels/conda/containers), feature contracts, and reproducible experiments.
Lead incident response and post-mortems.
Drive FinOps: right-sizing, GPU utilization, batching/caching, budget alerts.
Requirements:
4+ years in DevOps/MLOps/Platform roles building and operating production ML systems (batch and real-time)
Strong hands-on with Kubernetes, Docker, Terraform/IaC, and CI/CD
Practical experience with Spark/Databricks and scalable data processing
Proficiency in Python & Bash
Ability to operate DS code and optimize runtime performance.
Experience with model registries (MLflow or similar), experiment tracking, and artifact management.
Production model serving using FastAPI/Ray Serve/Triton/TorchServe, including autoscaling and rollout strategies
Monitoring and tracing with Prometheus/Grafana/OpenTelemetry; alerting tied to SLOs/SLAs
Solid understanding of PCI-DSS/GDPR considerations for data and ML systems
Experience with the Azure cloud environment is a big plus
Operating LLM/agent workloads in production (prompt/config versioning, tool execution reliability, fallback/retry policies)
Building/maintaining RAG stacks (indexing pipelines, vector DBs, retrieval evaluation, hybrid search)
Implementing guardrails (policy checks, content filters, allow/deny lists) and human-in-the-loop workflows
Experience with feature stores - Qwak Feature Store, Feast
A/B testing for models and agents, offline/online evaluation frameworks
Payments/fraud/risk domain experience; integrating ML outputs with rule engines and operational systems - Advantage
Familiarity with Databricks Unity Catalog, dbt, or similar tooling.
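The fallback/retry policies mentioned for LLM/agent workloads can be sketched as follows; the providers here are hypothetical stand-ins for real model or tool clients, and a production policy would distinguish transient from permanent failures:

```python
import time
from typing import Callable, Optional, Sequence

def call_with_fallback(
    providers: Sequence[Callable[[str], str]],
    prompt: str,
    retries: int = 2,
    backoff_s: float = 0.0,  # e.g. 0.5 in production; 0 keeps the sketch fast
) -> str:
    """Try each provider in order, retrying failures with exponential backoff.

    Every exception is treated as transient here, which a real policy
    would refine (timeouts vs. auth errors vs. content violations).
    """
    last_exc: Optional[Exception] = None
    for provider in providers:
        for attempt in range(retries + 1):
            try:
                return provider(prompt)
            except Exception as exc:  # illustrative catch-all
                last_exc = exc
                time.sleep(backoff_s * (2 ** attempt))
    raise RuntimeError("all providers failed") from last_exc

if __name__ == "__main__":
    def primary(p: str) -> str:
        raise TimeoutError("upstream timeout")
    def fallback(p: str) -> str:
        return f"fallback:{p}"
    print(call_with_fallback([primary, fallback], "score txn"))  # fallback:score txn
```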
This position is open to all candidates.
 
08/03/2026
Confidential company
Location: Tel Aviv-Yafo and Netanya
Job Type: Full Time
We are looking for a hands-on Tech Lead to join the Core Platform team within ML. Our engineering teams build the foundational systems behind global artifact storage, replication, and distribution - and increasingly power the next generation of AI/ML operations and governance. Our platform is the backbone for ML workloads: managing model binaries, versioning, and scalable runtime environments for ML and AI applications. This role combines deep distributed-systems work with modern ML infrastructure challenges such as high-throughput inference, safe model rollouts, and multi-cloud GPU efficiency. You will also help evolve core libraries and developer-facing tools, including logging, observability, and visibility components.
As a senior technical leader, you will influence architecture across squads, lead complex development efforts, and remain heavily hands-on.
As a Tech Lead in Core Platform you will:
Design and evolve components for managing and distributing ML/AI models and artifacts at scale
Extend the platform to support reliable, high-performance inference and training workflows
Lead cross-team technical initiatives and serve as a reference for distributed systems and ML infra design
Write maintainable, high-quality code in performance-critical areas.
Mentor engineers and drive strong engineering practices
Collaborate with adjacent teams to ensure seamless end-to-end ML platform behavior
Improve the reliability, efficiency, and observability of core services
Requirements:
7+ years building large-scale backend or distributed systems
Strong foundation in distributed systems (consistency, replication, concurrency, fault tolerance)
Proficiency in Java / Go or similar languages
Hands-on experience with high-performance, scalable, and reliable systems
Ability to lead design discussions and influence technical direction across teams
Curiosity and willingness to work with ML systems and workload patterns
Experience with Kubernetes, container orchestration, or cloud-native infrastructure
Thrive in a collaborative, ownership-driven engineering culture
Bonus Points
Experience with ML model serving, vector DBs, model versioning, or GPU orchestration
Background in secure software supply chain workflows
Strong performance debugging and optimization skills
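As a sketch of the artifact storage and versioning concerns this role centers on, here is a toy content-addressed store. SHA-256 keying is a common approach for deduplication and replica integrity checks; nothing here is the company's actual design:

```python
import hashlib

class ContentAddressedStore:
    """Toy content-addressed artifact store: blobs are keyed by their SHA-256,
    so identical model binaries deduplicate and replicas can verify integrity."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self._blobs.setdefault(digest, data)  # idempotent: same bytes, same key
        return digest

    def get(self, digest: str) -> bytes:
        data = self._blobs[digest]
        # Re-hashing on read detects silent corruption, a core property
        # replication protocols rely on.
        assert hashlib.sha256(data).hexdigest() == digest
        return data
```

Because keys are derived from content, replicating a blob across regions never creates a version conflict: any two copies with the same key are byte-identical by construction.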
This position is open to all candidates.
 
11/02/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Join our company's AI research group, a cross-functional team of ML engineers, researchers, and security experts building the next generation of AI-powered security capabilities. Our mission is to leverage large language models to understand code, configuration, and human language at scale, and to turn this understanding into security AI capabilities that will drive our company's future security solutions.
We foster a hands-on, research-driven culture where you'll work with large-scale data, modern ML infrastructure, and a global product footprint that impacts over 100,000 organizations worldwide.
Your Impact & Responsibilities
As a Data Engineer - AI Technologies, you will be responsible for building and operating the data foundation that enables our LLM and ML research: from ingestion and augmentation, through labeling and quality control, to efficient data delivery for training and evaluation.
You will:
Own data pipelines for LLM training and evaluation
Design, build and maintain scalable pipelines to ingest, transform and serve large-scale text, log, code and semi-structured data from multiple products and internal systems.
Drive data augmentation and synthetic data generation
Implement and operate pipelines for data augmentation (e.g., prompt-based generation, paraphrasing, negative sampling, multi-positive pairs) in close collaboration with ML Research Engineers.
Build tagging, labeling and annotation workflows
Support human-in-the-loop labeling, active learning loops and semi-automated tagging. Work with domain experts to implement tools, schemas and processes for consistent, high-quality annotations.
Ensure data quality, observability and governance
Define and monitor data quality checks (coverage, drift, anomalies, duplicates, PII), manage dataset versions, and maintain clear documentation and lineage for training and evaluation datasets.
Optimize training data flows for efficiency and cost
Design storage layouts and access patterns that reduce training time and cost (e.g., sharding, caching, streaming). Work with ML engineers to make sure the right data arrives at the right place, in the right format.
Build and maintain data infrastructure for LLM workloads
Work with cloud and platform teams to develop robust, production-grade infrastructure: data lakes / warehouses, feature stores, vector stores, and high-throughput data services used by training jobs and offline evaluation.
Collaborate closely with ML Research Engineers and security experts
Translate modeling and security requirements into concrete data tasks: dataset design, splits, sampling strategies, and evaluation data construction for specific security use.
Requirements:
What You Bring
3+ years of hands-on experience as a Data Engineer or ML/Data Engineer, ideally in a product or platform team.
Strong programming skills in Python and experience with at least one additional language commonly used for data / backend (e.g., SQL, Scala, or Java).
Solid experience building ETL / ELT pipelines and batch/stream processing using tools such as Spark, Beam, Flink, Kafka, Airflow, Argo, or similar.
Experience working with cloud data platforms (e.g., AWS, GCP, Azure) and modern data storage technologies (object stores, data warehouses, data lakes).
Good understanding of data modeling, schema design, partitioning strategies and performance optimization for large datasets.
Familiarity with ML / LLM workflows: train/validation/test splits, dataset versioning, and the basics of model training and evaluation (you don't need to be the primary model researcher, but you understand what the models need from the data).
Strong software engineering practices: version control, code review, testing, CI/CD, and documentation.
Ability to work independently and in collaboration with ML engineers, researchers and security experts, and to translate high-level requirements into concrete data engineering tasks.
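The data-quality checks described above (duplicates, coverage, PII) might look like this in miniature; the field names and the email regex are illustrative only, not a production PII scanner:

```python
import re

def quality_report(records: list[dict]) -> dict:
    """Tiny data-quality pass: duplicate rate, null coverage, naive PII hits."""
    seen, dups = set(), 0
    null_counts: dict[str, int] = {}
    pii_hits = 0
    email = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # crude, for illustration
    for rec in records:
        key = tuple(sorted(rec.items()))  # whole-row duplicate key
        if key in seen:
            dups += 1
        seen.add(key)
        for field, value in rec.items():
            if value is None:
                null_counts[field] = null_counts.get(field, 0) + 1
            elif isinstance(value, str) and email.search(value):
                pii_hits += 1
    return {"rows": len(records), "duplicates": dups,
            "nulls": null_counts, "pii_hits": pii_hits}
```

In a real pipeline these counters would feed dashboards and block dataset publication when thresholds are exceeded, rather than just returning a dict.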
Nice to Have
This position is open to women and men alike.
 
25/02/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
It starts with you - a senior ML engineer responsible for building, training, evaluating, and operating machine learning systems in production. The role focuses on data pipelines, model training, experimentation, evaluation, and scalable deployment.
If you want to grow your skills building mission-critical AI products, join our company's mission - this role is for you.
The Responsibilities
Design, train, and evaluate ML models for production use.
Build and maintain data pipelines for training, validation, and inference.
Own experimentation workflows: feature engineering, training runs, and comparison.
Implement model evals, monitoring, and drift detection.
Package and deploy models to production systems.
Optimize training and inference performance, cost, and reliability.
Collaborate with data, platform, and product teams.
Mentor engineers and promote ML engineering best practices.
Requirements:
4+ years software engineering experience with 2+ years applied ML in production.
Strong foundations in machine learning, statistics, and data analysis.
Hands-on experience with model training frameworks (e.g., PyTorch, TensorFlow, JAX).
Experience with distributed training and large-scale datasets.
Experience building data pipelines, feature engineering, and dataset versioning.
Proven experience designing and operating ML evals, experiment tracking, and monitoring.
Familiarity with feature stores, model registries, and ML lifecycle management.
Experience with model serving patterns and production deployment.
Proficiency in Python and strong system design skills.
Experience deploying ML systems on Kubernetes or similar platforms.
Familiarity with GPU acceleration and performance optimization.
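The drift detection mentioned above is often implemented with the Population Stability Index; a minimal sketch over pre-binned histograms follows (the 0.2 threshold is a common rule of thumb, not a value from this posting):

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index over pre-binned distributions.

    `expected` and `actual` are probability histograms over the same bins
    (training-time vs. serving-time feature or score distributions).
    A PSI above ~0.2 is commonly read as significant drift.
    """
    eps = 1e-6  # avoid log(0) on empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

if __name__ == "__main__":
    print(psi([0.5, 0.5], [0.5, 0.5]))  # identical distributions -> 0.0
    print(psi([0.8, 0.2], [0.5, 0.5]) > 0.2)  # clear shift -> True
```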
This position is open to all candidates.
 
06/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
Technical Leadership & Architecture: Drive data infrastructure strategy and establish standardized patterns for AI/ML workloads, with direct influence on architectural decisions across data and engineering teams
DataOps Excellence: Create seamless developer experience through self-service capabilities while significantly improving data engineer productivity and pipeline reliability metrics
Cross-Functional Innovation: Lead collaboration between DevOps, Data Engineering, and ML Operations teams to unify our approach to infrastructure as code and orchestration platforms
Technology Breadth & Growth: Work across the full DataOps spectrum from pipeline orchestration to AI/ML infrastructure, with clear advancement opportunities as a senior infrastructure engineer
Strategic Business Impact: Build scalable analytics capabilities that provide direct line of sight between your infrastructure work and business outcomes through reliable, cutting-edge data solutions
What you'll be doing
Design Data-Native Cloud Solutions - Design and implement scalable data infrastructure across multiple environments using Kubernetes, orchestration platforms, and IaC to power our AI, ML, and analytics ecosystem
Define DataOps Technical Strategy - Shape the technical vision and roadmap for our data infrastructure capabilities, aligning DevOps, Data Engineering, and ML teams around common patterns and practices
Accelerate Data Engineer Experience - Spearhead improvements to data pipeline deployment, monitoring tools, and self-service capabilities that empower data teams to deliver insights faster with higher reliability
Engineer Robust Data Platforms - Build and optimize infrastructure that supports diverse data workloads from real-time streaming to batch processing, ensuring performance and cost-effectiveness for critical analytics systems
Drive DataOps Excellence - Collaborate with engineering leaders across data teams, champion modern infrastructure practices, and mentor team members to elevate how we build, deploy, and operate data systems at scale
Requirements:
3+ years of hands-on DevOps experience building, shipping, and operating production systems.
Coding proficiency in at least one language (e.g., Python or TypeScript); able to build production-grade automation and tools.
Cloud platforms: deep experience with AWS, GCP, or Azure (core services, networking, IAM).
Kubernetes: strong end-to-end understanding of Kubernetes as a system (routing/networking, scaling, security, observability, upgrades), with proven experience integrating data-centric components (e.g., Kafka, RDS, BigQuery, Aerospike).
Infrastructure as Code: design and implement infrastructure automation using tools such as Terraform, Pulumi, or CloudFormation (modular code, reusable patterns, pipeline integration).
GitOps & CI/CD: practical experience implementing pipelines and advanced delivery using tools such as Argo CD / Argo Rollouts, GitHub Actions, or similar.
Observability: metrics, logs, and traces; actionable alerting and SLOs using tools such as Prometheus, Grafana, ELK/EFK, OpenTelemetry, or similar.
This position is open to all candidates.
 
10/02/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior DevOps Engineer with strong engineering skills and a product mindset, who is passionate about platform engineering, developer experience (DevEx), and AI-assisted automation.
This role is ideal for someone who enjoys building scalable internal platforms and paved roads that empower R&D teams to independently take services from design to production. You will focus on reducing friction, increasing developer autonomy, and embedding best practices through automation and tooling.
Location: Tel Aviv (Hybrid)
About Us
We are a global leader in cybersecurity, delivering advanced security solutions that protect organizations worldwide.
The Harmony SASE rocket is building a cloud-native, high-scale security platform that enables secure connectivity for the modern, distributed workforce.
Our DevOps Platform team supports Harmony SASE R&D by providing the infrastructure, CI/CD, observability, and tooling that allow teams to move fast while operating safely in production. We are heavily investing in Platform Engineering, DevEx, and AI-assisted operations to scale our engineering velocity and reliability.
Key Responsibilities
Design and build platform capabilities and self-service tooling that enable R&D teams to deploy and operate services independently.
Develop and maintain Infrastructure as Code and deployment patterns for large-scale cloud environments.
Build and evolve CI/CD pipelines and automation using GitHub Actions and cloud-native services.
Improve developer experience, observability, and operational readiness across production systems.
Explore and integrate AI-driven automation and intelligent tooling into DevOps and platform workflows.
Collaborate with R&D and architecture teams to support new services from design to production.
Requirements:
6+ years of experience in DevOps / Platform / Infrastructure Engineering, including ownership of large-scale production environments.
Experience building infrastructure solutions for high-scale SaaS systems.
Strong hands-on experience with Infrastructure as Code (Terraform, Terragrunt, or Pulumi).
Strong programming skills in Python or Go.
Experience designing and maintaining CI/CD pipelines, preferably with GitHub Actions.
Strong experience with AWS, including services such as ECS, EKS, Lambda, and API Gateway.
Solid understanding of microservices architecture, Linux systems, and cloud networking.
Experience with monitoring and logging tools such as Datadog, Prometheus, and Grafana.
Advantages
Experience with platform engineering, internal developer platforms, or DevEx initiatives.
Hands-on experience integrating AI-assisted automation or tooling into engineering workflows.
Experience with HashiCorp tools (Vault, Consul, Nomad).
Familiarity with configuration management tools (Ansible, Chef).
Strong networking fundamentals (DNS, HTTP/S, proxies, CDN).
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We're looking for a Senior Software Engineer to join the AZ Team in Tel Aviv - a group of passionate developers building the secure, scalable backbone of our Customer Experience (CX) Platform.

As a key member of the AZ Team, you'll play a pivotal role in shaping the foundations of our CX Platform - driving features and architectural decisions from concept to production-grade solutions. You'll design and build secure, scalable systems for user management, authentication, authorization, and data access that serve thousands of developers across us.

You will:

Design, develop, and build platform-wide authentication and authorization services, creating a cohesive identity fabric that integrates seamlessly with multiple identity vendors and systems.

Lead the evolution of the data consumption layer, enabling governed, efficient, and context-aware access to data across the CX ecosystem.

Drive architectural decisions from concept to production, ensuring solutions are secure, scalable, and optimized for both developer experience and operational excellence.

Leverage AI and automation to enhance access control, anomaly detection, and developer productivity - turning complex platform insights into actionable intelligence.

Collaborate cross-functionally with product, data, and infrastructure teams to build interoperable solutions that power our next-generation developer platform.

Influence platform-wide engineering standards, promoting robust design, observability, and maintainability across services.

Champion developer experience, crafting APIs, SDKs, and tools that simplify integration and accelerate innovation.

Mentor and guide engineers, fostering a culture of technical depth, curiosity, and impact-driven innovation.
Requirements:
Minimum Qualifications:

8+ years of professional software engineering experience, with proven ability to design, implement, and deliver complex distributed systems in production.

Strong problem-solving, debugging, and system-design skills, with a focus on scalability and maintainability.

Validated experience in backend or full-stack development using one or more of the following languages: Java, TypeScript/Node.js, Go, or Python.

Proven understanding of distributed systems, microservices architecture, and RESTful or GraphQL APIs.

Hands-on experience with cloud-native development on AWS, including containerized workloads running on EKS (Kubernetes).

Proficiency with databases - relational (e.g., PostgreSQL) or NoSQL (e.g., MongoDB, Redis, OpenSearch) - and familiarity with data-driven application design.

Deep understanding of authentication, authorization, and modern identity and access management concepts.

Familiarity with streaming and messaging systems, such as Apache Kafka.
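The authentication and authorization focus above can be illustrated with a minimal role-based access check; the roles and permissions are hypothetical, and a production identity fabric would delegate this to a policy engine or the IdP's authorization service:

```python
# Illustrative union-of-roles RBAC check; role and permission names are
# made up for this sketch.
ROLE_PERMISSIONS: dict[str, set[str]] = {
    "viewer": {"data:read"},
    "editor": {"data:read", "data:write"},
    "admin":  {"data:read", "data:write", "users:manage"},
}

def is_allowed(roles: list[str], permission: str) -> bool:
    """A request is allowed if any of the caller's roles grants the permission;
    unknown roles grant nothing."""
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in roles)

if __name__ == "__main__":
    print(is_allowed(["viewer"], "data:write"))            # False
    print(is_allowed(["viewer", "editor"], "data:write"))  # True
```

The union-of-roles model keeps evaluation simple and auditable; deny-overrides or attribute-based (ABAC) conditions would layer on top of a check like this.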

Preferred Qualifications:

Experience building or integrating with multiple identity providers (e.g., Okta, Azure AD, Ping) and designing identity fabric or zero-trust architectures.

Exposure to AI-driven platforms, leveraging AI/ML for developer productivity, anomaly detection, or access intelligence.

Knowledge of Infrastructure as Code (IaC) tools such as Helm and Terraform, and familiarity with observability stacks (Prometheus, Grafana, OpenTelemetry).

Background in security-focused design, including secrets management, policy-as-code, and compliance automation.

Experience contributing to platform engineering or developer-enablement initiatives in large-scale environments.

Passion for innovation, continuous improvement, and building tools that make developers' lives easier.
This position is open to all candidates.
 
19 hours ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an experienced DevOps Engineer to join our high-performance team. You'll work closely with development teams to design and implement smarter processes and tools, while embracing a GenAI-driven mindset. In this role, you will help build and scale infrastructure that not only keeps our company running smoothly, but also powers next-generation AI-driven applications with speed, resilience, and efficiency.
Our company's technology stack (sample):
AWS, Kubernetes, Terragrunt, Ansible, Jenkins, ArgoCD, Argo-Workflows, Service Mesh, Nginx, CloudFlare, Hashicorp Vault/Consul, Kafka, RabbitMQ, Prometheus, Grafana, VictoriaMetrics, CircleCI
Programming languages: Python, NodeJS, Go, Kotlin
What am I going to do?
Maintain and build a large-scale, highly available cloud infrastructure focusing on K8S.
Improve resiliency and cost efficiency of our cloud infrastructure.
Use GenAI tools to automate troubleshooting, speed up incident resolution, and improve production reliability.
Develop AI-driven self-service solutions to accelerate developer issue resolution and resource provisioning.
Develop and adopt new tools to make Development and Operations processes at our company more efficient.
Collaborate with developers to optimize system performance, reliability, and scale.
Evolve and maintain our companys AWS infrastructure by improving and adopting new services.
Support AI/ML/GenAI services with scalable infrastructure and monitoring.
Maintain our company's availability by participating in DevOps on-call shifts.
Mentor DevOps engineers.
Requirements:
4+ years of hands-on DevOps / Platform Engineering experience in production environments within a public cloud environment (AWS preferred)
Strong, production-grade Kubernetes experience (design, deployment, scaling, and troubleshooting) with solid AWS experience (VPC, IAM, EC2, EKS, Load Balancers, DNS)
Experience designing and operating highly available, scalable infrastructure systems
Experience with managed and distributed databases (AWS Aurora, RDS, MongoDB, Redis)
Hands-on experience with Infrastructure as Code and configuration management (Terraform required, Terragrunt & Ansible - advantage)
Experience with Docker and containerized workloads
2+ years of experience building and maintaining CI/CD pipelines (Jenkins, GitHub Actions)
Proficiency in Python for automation and strong Linux administration skills
Experience with monitoring and observability tools (Prometheus, Grafana)
Development experience and familiarity with GenAI platforms (AWS Bedrock, Vertex AI, OpenAI) - advantage.
This position is open to all candidates.
 