DevOps Engineer (Data Platform Group)
This position has been marked by the employer as no longer active.
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Main responsibilities:
Data Architecture Direction: Provide strategic direction for our data architecture, selecting the appropriate components for various tasks. Collaborate on requirements and make final decisions on system design and implementation.
Project Management: Manage end-to-end execution of high-performance, large-scale data-driven projects, including design, implementation, and ongoing maintenance.
Cost Optimization: Monitor and optimize cloud costs associated with data infrastructure and processes.
Efficiency and Reliability: Design and build monitoring tools to ensure the efficiency, reliability, and performance of data processes and systems.
DevOps Integration: Implement and manage DevOps practices to streamline development and operations, focusing on infrastructure automation, continuous integration/continuous deployment (CI/CD) pipelines, containerization, orchestration, and infrastructure as code. Ensure scalable, reliable, and efficient deployment processes.
Our stack: Azure, GCP, Kubernetes, ArgoCD, Jenkins, Databricks, Snowflake, Airflow, RDBMS, Spark, Kafka, Micro-Services, bash, Python, SQL.
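Monitoring tools like those described above usually begin with simple freshness checks against each dataset's SLA; a minimal pure-Python sketch (dataset names and thresholds are hypothetical, not from this posting):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical freshness SLAs per dataset (placeholders for illustration).
FRESHNESS_SLA = {
    "events_raw": timedelta(hours=1),
    "daily_rollup": timedelta(hours=26),
}

def stale_datasets(last_updated, now=None):
    """Return the names of datasets whose last update is older than their SLA."""
    now = now or datetime.now(timezone.utc)
    return [name for name, sla in FRESHNESS_SLA.items()
            if now - last_updated[name] > sla]
```

A real monitoring tool would feed `last_updated` from warehouse metadata and page an on-call channel instead of returning a list, but the freshness comparison at its core looks like this.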
Requirements:
5+ Years of Experience: Demonstrated experience as a DevOps professional with a strong focus on big data environments, or as a Data Engineer with strong DevOps skills.
Data Components Management: Experience managing and designing data infrastructure, such as Snowflake, PostgreSQL, Kafka, Aerospike, and object stores.
DevOps Expertise: Proven experience creating, establishing, and managing big data tools, including automation tasks. Extensive knowledge of DevOps concepts and tools, including Docker, Kubernetes, Terraform, ArgoCD, Linux OS, Networking, Load Balancing, Nginx, etc.
Programming Skills: Proficiency in Python and object-oriented programming (OOP), with an emphasis on big data processing (e.g., PySpark). Experience with scripting languages such as Bash for automation tasks.
Cloud Platforms: Hands-on experience with major cloud providers such as Azure, Google Cloud, or AWS.
Preferred Qualifications:
Performance Optimization: Experience in optimizing performance for big data tools and pipelines - Big Advantage.
Security Expertise: Experience in identifying and addressing security vulnerabilities within the data platform - Big Advantage.
CI/CD Pipelines: Experience designing, implementing, and maintaining Continuous Integration/Continuous Deployment (CI/CD) pipelines - Advantage.
Data Pipelines: Experience in building big data pipelines - Advantage.
This position is open to all candidates.
 
Job ID: 8566256
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Responsibilities:
Set the direction of our data architecture. Determine the right tools for the right jobs. We collaborate on the requirements, and then you call the shots on what gets built.
Manage end-to-end execution of high-performance, large-scale data-driven projects, including design, implementation, and ongoing maintenance.
Optimize and monitor the team-related cloud costs.
Design and construct monitoring tools to ensure the efficiency and reliability of data processes.
Implement CI/CD for Data Workflows.
Requirements:
5+ years of experience in data engineering and big data at large scale. - Must
Extensive experience with the modern data stack - Must:
1. Snowflake, Delta Lake, Iceberg, BigQuery, Redshift.
2. Kafka, RabbitMQ, or similar for real-time data processing.
3. PySpark, Databricks.
Strong software development background with Python/OOP and hands-on experience in building large-scale data pipelines. - Must.
Hands-on experience with Docker and Kubernetes. - Must.
Expertise in ETL development, data modeling, and data warehousing best practices.
Knowledge of monitoring & observability (Datadog, Prometheus, ELK, etc).
Experience with infrastructure as code, deployment automation, and CI/CD practices, using tools such as Helm, ArgoCD, Terraform, GitHub Actions, and Jenkins.
This position is open to all candidates.
 
Job ID: 8566263
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a Data Infra Engineer. You will be responsible for designing and building all data and ML pipelines, data tools, and cloud infrastructure required to transform massive, fragmented data into a format that supports processes and standards. Your work directly empowers business stakeholders to gain comprehensive visibility, automate key processes, and drive strategic impact across the company.

Responsibilities

Design and Build Data Infrastructure: Design, plan, and build all aspects of the platform's data and ML pipelines and the supporting infrastructure.
Optimize Cloud Data Lake: Build and optimize an AWS-based Data Lake using cloud architecture best practices for partitioning, metadata management, and security to support enterprise-scale operations.
Lead Project Delivery: Lead end-to-end data projects from initial infrastructure design through to production monitoring and optimization.
Solve Integration Challenges: Implement optimal ETL/ELT patterns and query techniques to solve challenging data integration problems sourced from structured and unstructured data.
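The ETL/ELT patterns above can be sketched in miniature as extract, transform, and load stages; a hedged pure-Python illustration (the record shape and the `user_id` field are hypothetical):

```python
import json

def extract(raw_lines):
    """Parse raw JSON lines, skipping malformed records."""
    for line in raw_lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue  # in a real pipeline, route to a dead-letter store

def transform(records):
    """Normalize a hypothetical 'user_id' field and drop incomplete rows."""
    for rec in records:
        if "user_id" in rec:
            rec["user_id"] = str(rec["user_id"]).strip().lower()
            yield rec

def load(records):
    """Stand-in for a warehouse write: collect into a list."""
    return list(records)

rows = load(transform(extract(['{"user_id": " A1 "}', 'not json', '{}'])))
# rows == [{"user_id": "a1"}]
```

In production the same three stages would read from object storage and write to the lake, but the composition of small, testable steps is the pattern the posting refers to.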
Requirements:
Experience: 3+ years of hands-on experience designing and maintaining big data pipelines in on-premises or hybrid cloud SaaS environments.
Programming & Databases: Proficiency in one or more programming languages (Python, Scala, Java, or Go) and expertise in both SQL and NoSQL databases.
Engineering Practice: Proven experience with software engineering best practices, including testing, code reviews, design documentation, and CI/CD.
AWS Experience: Experience developing data pipelines and maintaining data lakes, specifically on AWS.
Streaming & Orchestration: Familiarity with Kafka and workflow orchestration tools like Airflow.
This position is open to all candidates.
 
Job ID: 8547799
12/02/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
As we continue to grow, we seek an experienced DevOps Engineer to join our DevOps team.

You'll play a pivotal role in shaping our cloud infrastructure and fostering a DevOps-oriented culture. Here's what you'll be doing:

Build and design efficient and high-quality cloud architectures.
Ensure seamless integration of DevOps principles into our organizational culture.
Work closely with our development teams during architecture planning and design phases.
Champion DevOps practices and facilitate continuous learning across various R&D domains.
Mentor and guide developers on best practices for efficient and automated workflows.
Manage, monitor, and scale our distributed highly available SaaS platform.
Troubleshoot issues and optimize performance.
Utilize major technologies to deliver high-scale SaaS and big data environments on Public Clouds.
Build, implement, and manage our IaC methodology using Terraform.
Play a key role in our FinOps activities, monitoring, maintaining, and reducing cloud costs.
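FinOps work of the kind listed above typically starts by attributing spend to owners via resource tags; a minimal sketch (the billing record shape and the `team` tag are hypothetical placeholders):

```python
from collections import defaultdict

def cost_by_team(line_items):
    """Aggregate hypothetical cloud billing line items by a 'team' tag;
    untagged spend is grouped under 'untagged' so it can be chased down."""
    totals = defaultdict(float)
    for item in line_items:
        team = item.get("tags", {}).get("team", "untagged")
        totals[team] += item["cost"]
    return dict(totals)

items = [
    {"cost": 12.5, "tags": {"team": "data"}},
    {"cost": 7.5, "tags": {"team": "data"}},
    {"cost": 3.0, "tags": {}},
]
# cost_by_team(items) == {"data": 20.0, "untagged": 3.0}
```

Real billing exports (AWS CUR, GCP billing BigQuery export) are far richer, but the tag-then-aggregate step is where cost reduction work usually begins.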
The DevOps Engineer will be part of our cross-organization DevOps team.

This is a unique opportunity to join a startup that has a meaningful impact on the well-being and mental health of thousands.
Requirements:
4+ years of experience in DevOps roles.
At least two years of experience with Docker, Kubernetes, and Helm.
3+ years of coding/scripting experience in Python, Go, or Bash.
Linux systems administration experience.
Experience with cloud providers such as AWS, Google Cloud, or Azure.
3+ years of experience with CI/CD tools such as GitHub Actions, GitLab, Jenkins, CircleCI, TeamCity, or similar
Experience with IaC principles and tools such as Terraform
Experience with Alerting & Monitoring systems such as DataDog / Splunk / New Relic / Prometheus, or similar
Experience with Cloud Networking and Security - Connectivity, Load-balancer, DNS
High Analytical & Troubleshooting skills - the ability to solve complex problems
This position is open to all candidates.
 
Job ID: 8544171
Posted 7 days ago
Location: Tel Aviv-Yafo
Job Type: Full Time
Technical Leadership & Architecture: Drive data infrastructure strategy and establish standardized patterns for AI/ML workloads, with direct influence on architectural decisions across data and engineering teams
DataOps Excellence: Create seamless developer experience through self-service capabilities while significantly improving data engineer productivity and pipeline reliability metrics
Cross-Functional Innovation: Lead collaboration between DevOps, Data Engineering, and ML Operations teams to unify our approach to infrastructure as code and orchestration platforms
Technology Breadth & Growth: Work across the full DataOps spectrum from pipeline orchestration to AI/ML infrastructure, with clear advancement opportunities as a senior infrastructure engineer
Strategic Business Impact: Build scalable analytics capabilities that provide direct line of sight between your infrastructure work and business outcomes through reliable, cutting-edge data solutions
What you'll be doing
Design Data-Native Cloud Solutions - Design and implement scalable data infrastructure across multiple environments using Kubernetes, orchestration platforms, and IaC to power our AI, ML, and analytics ecosystem
Define DataOps Technical Strategy - Shape the technical vision and roadmap for our data infrastructure capabilities, aligning DevOps, Data Engineering, and ML teams around common patterns and practices
Accelerate Data Engineer Experience - Spearhead improvements to data pipeline deployment, monitoring tools, and self-service capabilities that empower data teams to deliver insights faster with higher reliability
Engineer Robust Data Platforms - Build and optimize infrastructure that supports diverse data workloads from real-time streaming to batch processing, ensuring performance and cost-effectiveness for critical analytics systems
Drive DataOps Excellence - Collaborate with engineering leaders across data teams, champion modern infrastructure practices, and mentor team members to elevate how we build, deploy, and operate data systems at scale
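A concrete starting point for the pipeline reliability metrics mentioned above is a simple success rate over recent runs; a minimal sketch with a hypothetical run-record shape:

```python
def pipeline_reliability(runs):
    """Fraction of successful runs among recent pipeline executions.
    Returns None when there is no data to judge by."""
    if not runs:
        return None
    ok = sum(1 for r in runs if r["status"] == "success")
    return ok / len(runs)

runs = [{"status": "success"}] * 9 + [{"status": "failed"}]
# pipeline_reliability(runs) == 0.9
```

An SLO-driven team would compare this figure against a target (say, 99%) and spend the remaining error budget deliberately; the calculation itself stays this simple.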
Requirements:
3+ years of hands-on DevOps experience building, shipping, and operating production systems.
Coding proficiency in at least one language (e.g., Python or TypeScript); able to build production-grade automation and tools.
Cloud platforms: deep experience with AWS, GCP, or Azure (core services, networking, IAM).
Kubernetes: strong end-to-end understanding of Kubernetes as a system (routing/networking, scaling, security, observability, upgrades), with proven experience integrating data-centric components (e.g., Kafka, RDS, BigQuery, Aerospike).
Infrastructure as Code: design and implement infrastructure automation using tools such as Terraform, Pulumi, or CloudFormation (modular code, reusable patterns, pipeline integration).
GitOps & CI/CD: practical experience implementing pipelines and advanced delivery using tools such as Argo CD / Argo Rollouts, GitHub Actions, or similar.
Observability: metrics, logs, and traces; actionable alerting and SLOs using tools such as Prometheus, Grafana, ELK/EFK, OpenTelemetry, or similar.
This position is open to all candidates.
 
Job ID: 8569980
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a highly skilled Senior Data Engineer with strong architectural expertise to design and evolve our next-generation data platform. You will define the technical vision, build scalable and reliable data systems, and guide the long-term architecture that powers analytics, operational decision-making, and data-driven products across the organization.
This role is both strategic and hands-on. You will evaluate modern data technologies, define engineering best practices, and lead the implementation of robust, high-performance data solutions, including the design, build, and lifecycle management of data pipelines that support batch, streaming, and near-real-time workloads.
🔧 What You'll Do
Architecture & Strategy
Own the architecture of our data platform, ensuring scalability, performance, reliability, and security.
Define standards and best practices for data modeling, transformation, orchestration, governance, and lifecycle management.
Evaluate and integrate modern data technologies and frameworks that align with our long-term platform strategy.
Collaborate with engineering and product leadership to shape the technical roadmap.
Engineering & Delivery
Design, build, and manage scalable, resilient data pipelines for batch, streaming, and event-driven workloads.
Develop clean, high-quality data models and schemas to support analytics, BI, operational systems, and ML workflows.
Implement data quality, lineage, observability, and automated testing frameworks.
Build ingestion patterns for APIs, event streams, files, and third-party data sources.
Optimize compute, storage, and transformation layers for performance and cost efficiency.
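The data quality and automated testing frameworks listed above often begin as declarative column checks; a minimal sketch (column names and rules are hypothetical):

```python
def check_not_null(rows, column):
    """Return the indices of rows where `column` is missing or None."""
    return [i for i, r in enumerate(rows) if r.get(column) is None]

def check_unique(rows, column):
    """Return values of `column` that appear more than once."""
    seen, dupes = set(), set()
    for r in rows:
        v = r.get(column)
        if v in seen:
            dupes.add(v)
        seen.add(v)
    return sorted(dupes)

rows = [{"id": 1, "amount": 10}, {"id": 1}, {"id": 2, "amount": 5}]
# check_not_null(rows, "amount") == [1]; check_unique(rows, "id") == [1]
```

Frameworks like Great Expectations or dbt tests generalize exactly these two primitives (plus type, range, and referential checks) into declarative configuration run on every pipeline execution.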
Leadership & Collaboration
Serve as a senior technical leader and mentor within the data engineering team.
Lead architecture reviews, design discussions, and cross-team engineering initiatives.
Work closely with analysts, data scientists, software engineers, and product owners to define and deliver data solutions.
Communicate architectural decisions and trade-offs to technical and non-technical stakeholders.
Requirements:
6-10+ years of experience in Data Engineering, with demonstrated architectural ownership.
Expert-level experience with Snowflake (mandatory), including performance optimization, data modeling, security, and ecosystem components.
Expert proficiency in SQL and strong Python skills for pipeline development and automation.
Experience with modern orchestration tools (Airflow, Dagster, Prefect, or equivalent).
Strong understanding of ELT/ETL patterns, distributed processing, and data lifecycle management.
Familiarity with streaming/event technologies (Kafka, Kinesis, Pub/Sub, etc.).
Experience implementing data quality, observability, and lineage solutions.
Solid understanding of cloud infrastructure (AWS, GCP, or Azure).
Strong background in DataOps practices: CI/CD, testing, version control, automation.
Proven leadership in driving architectural direction and mentoring engineering teams.
Nice to Have:
Experience with data governance or metadata management tools.
Hands-on experience with dbt, including modeling, testing, documentation, and advanced features.
Exposure to machine learning pipelines, feature stores, or MLOps.
Experience with Terraform, CloudFormation, or other IaC tools.
Background designing systems for high scale, security, or regulated environments.
This position is open to all candidates.
 
Job ID: 8528005
04/02/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Responsibilities
Design, implement, and maintain robust data pipelines and ETL/ELT processes on GCP (BigQuery, Dataflow, Pub/Sub, etc.).
Build, orchestrate, and monitor workflows using Apache Airflow / Cloud Composer.
Develop scalable data models to support analytics, reporting, and operational workloads.
Apply software engineering best practices to data engineering: modular design, code reuse, testing, and version control.
Manage GCP resources (BigQuery reservations, Cloud Composer/Airflow DAGs, Cloud Storage, Dataplex, IAM).
Optimize data storage, query performance, and cost through partitioning, clustering, caching, and monitoring.
Collaborate with DevOps/DataOps to ensure data infrastructure is secure, reliable, and compliant.
Partner with analysts and data scientists to understand requirements and translate them into efficient data solutions.
Mentor junior engineers, provide code reviews, and promote engineering best practices.
Act as a subject matter expert for GCP data engineering tools and services.
Define and enforce standards for metadata, cataloging, and data documentation.
Implement monitoring and alerting for pipeline health, data freshness, and data quality.
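The partitioning and clustering optimizations above pay off because a filtered query can skip irrelevant partitions entirely; here is a toy pure-Python model of date-partition pruning (illustrative only, not BigQuery's actual mechanics):

```python
from collections import defaultdict

class PartitionedTable:
    """Toy date-partitioned table: rows are grouped by their 'dt' value."""
    def __init__(self):
        self.partitions = defaultdict(list)

    def insert(self, row):
        self.partitions[row["dt"]].append(row)

    def scan(self, dt=None):
        """With a dt filter, only one partition is read (pruning);
        without it, every partition must be scanned."""
        if dt is not None:
            return list(self.partitions.get(dt, []))
        return [r for part in self.partitions.values() for r in part]

t = PartitionedTable()
t.insert({"dt": "2024-01-01", "v": 1})
t.insert({"dt": "2024-01-02", "v": 2})
# t.scan(dt="2024-01-02") touches one partition: [{"dt": "2024-01-02", "v": 2}]
```

In BigQuery the same effect shows up directly in bytes scanned (and thus cost), which is why partition and cluster keys are chosen to match the most common query filters.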
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
6+ years of professional experience in data engineering or similar roles, with 3+ years of hands-on work in a cloud environment, preferably on GCP.
Strong proficiency with BigQuery, Dataflow (Apache Beam), Pub/Sub, and Cloud Composer (Airflow).
Expert-level Python development skills, including object-oriented programming (OOP), testing, and code optimization.
Strong data modeling skills (dimensional modeling, star/snowflake schemas, normalized/denormalized designs).
Solid SQL expertise and experience with data warehousing concepts.
Familiarity with CI/CD, Terraform/Infrastructure as Code, and modern data observability tools.
Exposure to AI tools and methodologies (e.g., Vertex AI).
Strong problem-solving and analytical skills.
Ability to communicate complex technical concepts to non-technical stakeholders.
Experience working in agile, cross-functional teams.

Preferred Skills (Nice to Have):
Experience with Google Cloud Platform (GCP).
Experience with Dataplex for data cataloging and governance.
Knowledge of streaming technologies (Kafka, Confluent).
Experience with Looker.
Cloud certifications (Google Professional Data Engineer, Google Cloud Architect).
This position is open to all candidates.
 
Job ID: 8531425
23/02/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Fiverr is looking for an experienced DevOps Engineer to join our high-performance team. You’ll work closely with development teams to design and implement smarter processes and tools, while embracing a GenAI-driven mindset. In this role, you will help build and scale infrastructure that not only keeps Fiverr running smoothly, but also powers next-generation AI-driven applications with speed, resilience, and efficiency.
Fiverr’s technology stack (sample): AWS, Kubernetes, Terragrunt, Ansible, Jenkins, ArgoCD, Argo-Workflows, Service Mesh, Nginx, CloudFlare, Hashicorp Vault/Consul, Kafka, RabbitMQ, Prometheus, Grafana, VictoriaMetrics, CircleCI.
Programming languages: Python, NodeJS, Go, Kotlin.

What am I going to do?:

* Maintain and build a large-scale, highly available cloud infrastructure focusing on K8S.
* Improve resiliency and cost efficiency of our cloud infrastructure.
* Use GenAI tools to automate troubleshooting, speed up incident resolution, and improve production reliability.
* Develop AI-driven self-service solutions to accelerate developer issue resolution and resource provisioning.
* Develop and adopt new tools to make Development and Operations processes at Fiverr more efficient.
* Collaborate with developers to optimize system performance, reliability, and scale.
* Evolve and maintain Fiverr’s AWS infrastructure by improving and adopting new services.
* Support AI/ML/GenAI services with scalable infrastructure and monitoring.
* Maintain Fiverr availability by participating in DevOps on-call shifts.
* Mentor DevOps engineers.

Equal opportunities:
At Fiverr, we’re not about checklists. If you don’t meet 100% of the requirements for this role but still feel passionate about the position and think you have the right skills and qualifications to excel at it, we want to hear from you. At Fiverr, we prioritize diversity. We celebrate difference and embed it into every aspect of our workplace and product, as well as our community. Fiverr is proud and committed to providing equal opportunity employment to all individuals regardless of race, color, religion, sex, sexual orientation, citizenship, national origin, disability, Veteran status, or any other characteristic protected by law. In addition, Fiverr will provide accommodation to individuals with disabilities or a special need.
Requirements:
* 4+ years of hands-on DevOps / Platform Engineering experience in production environments within a public cloud environment (AWS preferred)
* Strong, production-grade Kubernetes experience (design, deployment, scaling, and troubleshooting) with solid AWS experience (VPC, IAM, EC2, EKS, Load Balancers, DNS)
* Experience designing and operating highly available, scalable infrastructure systems
* Experience with managed and distributed databases (AWS Aurora, RDS, MongoDB, Redis)
* Hands-on experience with Infrastructure as Code and configuration management (Terraform required, Terragrunt & Ansible – advantage)
* Experience with Docker and containerized workloads
* 2+ years of experience building and maintaining CI/CD pipelines (Jenkins, GitHub Actions)
* Proficiency in Python for automation and strong Linux administration skills
* Experience with monitoring and observability tools (Prometheus, Grafana)
* Development experience and familiarity with GenAI platforms (AWS Bedrock, Vertex AI, OpenAI) – advantage
This position is open to all candidates.
 
Job ID: 8557148
Location: Tel Aviv-Yafo
Job Type: Full Time
This is a great opportunity to be part of one of the fastest-growing infrastructure companies in history, an organization at the center of the hurricane created by the revolution in artificial intelligence.
We are seeking an experienced Solutions Data Engineer who possesses both technical depth and strong interpersonal skills, partnering with internal and external teams to develop scalable, flexible, and cutting-edge solutions. Solutions Engineers collaborate with operations and business development to help craft solutions to customer business problems.
A Solutions Engineer works to balance various aspects of a project, from safety to design, researches advanced technology regarding best practices in the field, and seeks out cost-effective solutions.
Job Description:
We're looking for a Solutions Engineer with deep experience in Big Data technologies, real-time data pipelines, and scalable infrastructure: someone who's been delivering critical systems under pressure and knows what it takes to bring complex data architectures to life. This isn't just about checking boxes on tech stacks; it's about solving real-world data problems, collaborating with smart people, and building robust, future-proof solutions.
In this role, you'll partner closely with engineering, product, and customers to design and deliver high-impact systems that move, transform, and serve data at scale. You'll help customers architect pipelines that are not only performant and cost-efficient but also easy to operate and evolve.
We want someone who's comfortable switching hats between low-level debugging, high-level architecture, and communicating clearly with stakeholders of all technical levels.
Key Responsibilities:
Build distributed data pipelines using technologies like Kafka, Spark (batch & streaming), Python, Trino, Airflow, and S3-compatible data lakes, designed for scale, modularity, and seamless integration across real-time and batch workloads.
Design, deploy, and troubleshoot hybrid cloud/on-prem environments using Terraform, Docker, Kubernetes, and CI/CD automation tools.
Implement event-driven and serverless workflows with precise control over latency, throughput, and fault tolerance trade-offs.
Create technical guides, architecture docs, and demo pipelines to support onboarding, evangelize best practices, and accelerate adoption across engineering, product, and customer-facing teams.
Integrate data validation, observability tools, and governance directly into the pipeline lifecycle.
Own end-to-end platform lifecycle: ingestion → transformation → storage (Parquet/ORC on S3) → compute layer (Trino/Spark).
Benchmark and tune storage backends (S3/NFS/SMB) and compute layers for throughput, latency, and scalability using production datasets.
Work cross-functionally with R&D to push performance limits across interactive, streaming, and ML-ready analytics workloads.
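The latency/throughput trade-offs called out above often reduce to a batching decision; a minimal sketch in which batch size is the hypothetical tuning knob:

```python
def batch_events(events, max_batch=3):
    """Group a stream of events into batches of at most `max_batch`.
    Larger batches raise throughput per write but add latency for the
    first event in each batch; smaller batches do the opposite."""
    batch, out = [], []
    for e in events:
        batch.append(e)
        if len(batch) >= max_batch:
            out.append(batch)
            batch = []
    if batch:
        out.append(batch)  # flush the partial tail batch
    return out

# batch_events(list(range(7))) == [[0, 1, 2], [3, 4, 5], [6]]
```

Production systems (Kafka producers, Spark micro-batches) add a time-based flush alongside the size threshold so a quiet stream never holds events indefinitely, but the size/latency tension is the same one shown here.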
Requirements:
2-4 years in software, solutions, or infrastructure engineering, with 2-4 years focused on building and maintaining large-scale data pipelines and storage/database solutions.
Proficiency in Trino, Spark (Structured Streaming & batch) and solid working knowledge of Apache Kafka.
Coding background in Python (must-have); familiarity with Bash and scripting tools is a plus.
Deep understanding of data storage architectures including SQL, NoSQL, and HDFS.
Solid grasp of DevOps practices, including containerization (Docker), orchestration (Kubernetes), and infrastructure provisioning (Terraform).
Experience with distributed systems, stream processing, and event-driven architecture.
Hands-on familiarity with benchmarking and performance profiling for storage systems, databases, and analytics engines.
Excellent communication skills: you'll be expected to explain your thinking clearly, guide customer conversations, and collaborate across engineering and product teams.
This position is open to all candidates.
 
Job ID: 8572794
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior DevOps Engineer.
As a Senior DevOps Engineer, you'll design and maintain the secure, scalable cloud infrastructure powering our AI healthcare solutions.
You'll build automation, CI/CD pipelines, and monitoring systems that ensure reliability and compliance in a fast-paced environment. Your work will directly enable our teams to innovate faster while keeping patient data safe and systems resilient.
What You'll Do:
Architect and manage secure, scalable cloud infrastructure (AWS)
Maintain CI/CD pipelines to streamline deployment and delivery
Automate provisioning, monitoring, and incident response workflows
Troubleshoot and resolve production issues, ensuring uptime and performance
Develop tools that accelerate development and improve operations
Collaborate with teams to uphold security, compliance, and reliability standards
Requirements:
6+ years in DevOps/cloud infrastructure for SaaS or security-focused companies
Expertise in AWS - a must
Hands-on experience with Terraform and GitHub Actions CI/CD (or Jenkins, Gitlab)
Production experience with Kubernetes
Strong Python scripting for automation
Operational excellence mindset with clear communication skills
Experience working with AI tools such as n8n, Agent-core, etc. - a big advantage!
This position is open to all candidates.
 
Job ID: 8555213