Jobs » Middle Management » Senior DevOps Engineer

Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking an experienced cloud solution architect (5+ years) to design, implement, and manage cloud infrastructure and CI/CD pipelines for big data solutions using Kubernetes and AWS. If you are passionate about DevOps and big data, we'd love to hear from you! Availability to work from the Jerusalem office at least one day a week is required, possibly more during the initial onboarding period.
What will your job look like?
Design and manage cloud infrastructure on AWS using IaC tools like Terraform or CloudFormation.
Optimize Kubernetes clusters for big data workloads.
Develop and maintain CI/CD pipelines using tools like ArgoCD and GitLab CI/CD.
Implement monitoring and logging solutions for big data systems.
Ensure compliance with security best practices.
Collaborate with development and operations teams.
Troubleshoot and resolve complex technical issues.
Requirements:
5+ years of DevOps experience, focusing on big data.
Expertise in Kubernetes, AWS, and big data technologies (e.g., Spark, Athena, Argo Workflows).
Proficiency in scripting (Bash, Python, SQL).
Experience with CI/CD tools, containerization (Docker), and configuration management (e.g., Ansible, Puppet).
Strong problem-solving and communication skills.
Preferred Qualifications:
Certifications in AWS or Kubernetes.
Familiarity with cloud-native technologies (Helm, Operators).
Knowledge of security best practices for cloud and big data.
Experience with monitoring tools (Prometheus, Grafana).
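To give a flavor of the monitoring work these qualifications point at, here is a minimal sketch of emitting counters in the Prometheus text exposition format, the kind of glue scripting a monitoring stack often needs. The metric name, labels, and values below are invented examples, not part of any posting.

```python
def prometheus_lines(metric, help_text, samples):
    """Render counter samples as Prometheus text-exposition-format lines.

    samples: list of (labels_dict, value) pairs.
    """
    lines = [f"# HELP {metric} {help_text}", f"# TYPE {metric} counter"]
    for labels, value in samples:
        # Labels are sorted for a deterministic, diff-friendly output.
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{metric}{{{label_str}}} {value}")
    return "\n".join(lines)

print(prometheus_lines(
    "jobs_processed_total",
    "Jobs processed by the pipeline.",
    [({"queue": "batch"}, 42), ({"queue": "stream"}, 7)],
))
```

In practice a library such as the official Prometheus client would generate this format; the sketch only illustrates what a scrape endpoint serves.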
This position is open to all candidates.
 
Job ID: 8316102
 
Exclusive listing
Posted 5 hours ago
Job Type: More than one
We are looking for a highly motivated DevOps Engineer with at least 2 years of hands-on experience to join our growing team. You will play a key role in automating, monitoring, and optimizing our cloud infrastructure, CI/CD pipelines, and deployment processes. You will work closely with development, QA, and IT teams to ensure seamless integration and delivery.

Responsibilities:

Design, build, and maintain scalable and secure infrastructure using IaC tools.
Implement and manage CI/CD pipelines.
Automate deployment and monitoring processes.
Monitor system performance and ensure high availability.
Collaborate with development and QA teams to streamline workflows.
Maintain and optimize cloud-based environments (e.g., AWS, GCP, Azure).
Ensure security and compliance best practices.
Requirements:
2+ years of experience in a DevOps or related engineering role.
Strong experience with Linux system administration.
Proficiency in cloud platforms: AWS (preferred), GCP, or Azure.
Experience with CI/CD tools: Jenkins, GitLab CI, CircleCI, or similar.
Proficient in Infrastructure as Code (IaC) tools: Terraform, CloudFormation, or Pulumi.
Strong scripting skills in Bash, Python, or similar.
Familiarity with Docker and container orchestration using Kubernetes.
Experience with monitoring/logging tools: Prometheus, Grafana, ELK/EFK, Datadog, etc.
This position is open to all candidates.
 
Job ID: 8302149
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking an experienced Solutions Data Engineer who possesses both technical depth and strong interpersonal skills to partner with internal and external teams on developing scalable, flexible, and cutting-edge solutions. Solutions Engineers collaborate with operations and business development to help craft solutions to customer business problems.
A Solutions Engineer balances various aspects of the project, from safety to design, researches advanced technology and best practices in the field, and seeks cost-effective solutions.
Job Description:
We're looking for a Solutions Engineer with deep experience in Big Data technologies, real-time data pipelines, and scalable infrastructure; someone who's been delivering critical systems under pressure and knows what it takes to bring complex data architectures to life. This isn't just about checking boxes on tech stacks; it's about solving real-world data problems, collaborating with smart people, and building robust, future-proof solutions.
In this role, you'll partner closely with engineering, product, and customers to design and deliver high-impact systems that move, transform, and serve data at scale. You'll help customers architect pipelines that are not only performant and cost-efficient but also easy to operate and evolve.
We want someone who's comfortable switching hats between low-level debugging, high-level architecture, and communicating clearly with stakeholders of all technical levels.
Key Responsibilities:
Build distributed data pipelines using technologies like Kafka, Spark (batch & streaming), Python, Trino, Airflow, and S3-compatible data lakes, designed for scale, modularity, and seamless integration across real-time and batch workloads.
Design, deploy, and troubleshoot hybrid cloud/on-prem environments using Terraform, Docker, Kubernetes, and CI/CD automation tools.
Implement event-driven and serverless workflows with precise control over latency, throughput, and fault tolerance trade-offs.
Create technical guides, architecture docs, and demo pipelines to support onboarding, evangelize best practices, and accelerate adoption across engineering, product, and customer-facing teams.
Integrate data validation, observability tools, and governance directly into the pipeline lifecycle.
Own end-to-end platform lifecycle: ingestion → transformation → storage (Parquet/ORC on S3) → compute layer (Trino/Spark).
Benchmark and tune storage backends (S3/NFS/SMB) and compute layers for throughput, latency, and scalability using production datasets.
Work cross-functionally with R&D to push performance limits across interactive, streaming, and ML-ready analytics workloads.
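The end-to-end lifecycle in the responsibilities above (ingestion, transformation, storage, compute) can be sketched with plain Python generators. This is only a toy stand-in: in the stack the posting describes, these stages would be Kafka topics, Spark jobs, and Parquet files on S3; every name and record here is invented for illustration.

```python
def ingest(records):
    """Ingestion stage: raw events arrive one by one (stand-in for Kafka)."""
    for rec in records:
        yield rec

def transform(stream):
    """Transformation stage: normalize keys, drop null values, derive a field."""
    for rec in stream:
        if rec.get("value") is not None:
            yield {"key": rec["key"].lower(), "value": rec["value"] * 2}

def store(stream):
    """Storage stage: materialize results (stand-in for Parquet on S3)."""
    return list(stream)

raw = [{"key": "A", "value": 1}, {"key": "B", "value": None}, {"key": "C", "value": 3}]
table = store(transform(ingest(raw)))
print(table)  # [{'key': 'a', 'value': 2}, {'key': 'c', 'value': 6}]
```

Because each stage is a generator, records flow through lazily, which is the same modularity argument that applies at Kafka/Spark scale.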
Requirements:
2-4 years in software, solutions, or infrastructure engineering, with 2-4 years focused on building and maintaining large-scale data pipelines and storage/database solutions.
Proficiency in Trino, Spark (Structured Streaming & batch) and solid working knowledge of Apache Kafka.
Coding background in Python (must-have); familiarity with Bash and scripting tools is a plus.
Deep understanding of data storage architectures including SQL, NoSQL, and HDFS.
Solid grasp of DevOps practices, including containerization (Docker), orchestration (Kubernetes), and infrastructure provisioning (Terraform).
Experience with distributed systems, stream processing, and event-driven architecture.
Hands-on familiarity with benchmarking and performance profiling for storage systems, databases, and analytics engines.
Excellent communication skills: you'll be expected to explain your thinking clearly, guide customer conversations, and collaborate across engineering and product teams.
This position is open to all candidates.
 
Job ID: 8325726
28/08/2025
Confidential company
Location: Tel Aviv-Yafo and Yokne`am
Job Type: Full Time
We are at the forefront of the AI revolution, delivering cutting-edge accelerated compute platforms for global impact. Our Network Insights group is seeking a talented and motivated Sr. DevOps Engineer to architect, scale, and optimize the DevOps infrastructure supporting our advanced networking simulation services. In this high-impact role, you will lay the foundations to scale a key insight product to reach 10-100 times more users, design robust CI/CD pipelines, drive automation, and ensure the reliability, scalability, and security of our cloud-based and on-prem platforms. If you are passionate about solving complex infrastructure challenges and enabling world-class software delivery, we want to hear from you.
What You'll Be Doing:
Architect and optimize CI/CD pipelines for large-scale, high-availability simulation services, ensuring fast, reliable, and secure deployments.
Drive automation across infrastructure provisioning, configuration management, and monitoring to support rapid development cycles and minimize manual intervention.
Collaborate with software engineering and product teams to design and implement scalable, cloud-native solutions that meet evolving business needs.
Promote standard processes in infrastructure as code, containerization, and cloud security, ensuring compliance and resilience across environments.
Monitor, troubleshoot, and resolve infrastructure and deployment issues, maximizing uptime and ensuring efficient performance for internal and external customers.
Evaluate and integrate new tools and technologies to continually enhance the reliability, observability, and efficiency of our DevOps ecosystem.
Participate in incident response and post-mortem processes, driving root cause analysis and systemic improvements.
Requirements:
BSc or above in Computer Science, Computer Engineering, or a related field, or equivalent experience.
5+ overall years of hands-on experience in DevOps or Site Reliability Engineering roles.
Proven expertise in designing, building, and maintaining CI/CD pipelines (e.g., Jenkins, GitLab CI, GitHub Actions, or similar).
Deep knowledge of cloud platforms (AWS, preferably), On-Prem deployment, container orchestration (Kubernetes, Docker), and infrastructure as code.
Strong scripting and automation skills (Python, Bash, or similar).
Experience with monitoring, logging, and observability tools (Prometheus, Grafana, ELK, etc.).
Proven understanding of security standard methodologies in cloud & on-prem DevOps environments.
Excellent communication and interpersonal skills, with a track record of multi-functional collaboration.
Experience supporting large-scale, high-availability production systems.
Ways to Stand Out From the Crowd:
Prior background in networking or simulation environments.
Prior experience with building a new team from the ground up.
Familiarity with performance tuning and cost optimization in cloud and on-prem environments.
Experience with building CI/CD pipelines from the ground up.
This position is open to all candidates.
 
Job ID: 8322880
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
This is a great opportunity to be part of one of the fastest-growing infrastructure companies in history, an organization that is in the center of the hurricane being created by the revolution in artificial intelligence.
"our company's data management vision is the future of the market."- Forbes
We are the data platform company for the AI era. We are building the enterprise software infrastructure to capture, catalog, refine, enrich, and protect massive datasets and make them available for real-time data analysis and AI training and inference. Designed from the ground up to make AI simple to deploy and manage, our platform takes the cost and complexity out of deploying enterprise and AI infrastructure across data center, edge, and cloud.
Our success has been built through intense innovation, a customer-first mentality, and a team of fearless people who leverage their skills and experience to make real market impact. This is an opportunity to be a key contributor at a pivotal time in our company's growth and at a pivotal point in computing history.
The DevOps Engineer position is an operational engineering role and an integral part of our development team. You will be responsible for improving the efficiency of our processes, software, and infrastructure, and will assist the R&D team with product development. If you are a DevOps Engineer who is passionate about automating and scaling everything, this job is for you.
Responsibilities
Monitor and optimize cloud infrastructure for performance, scalability, and cost-efficiency.
Manage and Maintain CI Infrastructure (GitLab CI and Jenkins).
Manage, Maintain and Improve our Release and Development Environments.
Support critical production infrastructure deployed in Multiple Clouds (AWS, Azure, and GCP).
Develop and Support RnD toolchain and implement best practices for code deployment, testing, and maintenance.
Automate On-Premises Labs Infrastructure by adopting IaC practices.
Lead and Develop Monitoring, Telemetry, Alerting, and Logging Production services.
Requirements:
Desired Qualifications:
Proven hands-on experience with Docker and Kubernetes in production, including deploying and managing complex Kubernetes environments (services, ingresses, load balancers, and Helm charts).
Solid understanding of Linux/Unix internals and experience handling complex performance and configuration problems in Linux/Unix environments.
Multi-cloud expertise: deep familiarity with both GCP and AWS for provisioning, networking, and cost-optimization strategies.
Experience with configuration management tools like Ansible, Chef, or Puppet.
Experience with programming languages (Python preferred).
Shell scripting experience.
Proficient in SRE/monitoring methodologies, with an emphasis on Prometheus-based monitoring stacks.
Nice To Have Skills
Experienced with CI/CD tools and frameworks.
Experience with managing binary repositories (RPM, PyPI, npm, etc.).
Experience with developing Ansible collections, roles, and modules.
Experience with managing GitLab and GitLab CI.
Experience with Hashicorp Products: Terraform, Packer, Consul, Vault, and Vagrant.
Experience with automating configuration and deployment of On-Premises Lab Hardware.
This position is open to all candidates.
 
Job ID: 8325791
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior DevOps Engineer to join our R&D team in developing the next rising product in the health tech landscape. If you are looking for a challenging, influential position and are passionate about making an impact, this might be the role for you.
As a Senior DevOps Engineer, you'll play a key role in the design, development, testing, deployment, and monitoring of our infrastructure and products. In this position, you'll make significant contributions to our observability stack, helping build and maintain robust systems for logs, metrics, traces, and alerting.
Our ideal candidate is passionate about DevOps and observability, has strong communication skills, and thrives on constant improvement of both technology and processes. If you enjoy working on multiple projects in parallel and are a proactive team player, you'll fit right in.
This is a unique opportunity to join the core team of a fast-growing startup, where your contributions will have a direct impact on our product and success.
Responsibilities:
Support and collaborate with cross-functional engineering teams using cutting-edge technologies.
Contribute to the design, implementation, and maintenance of monitoring, logging, and alerting systems (e.g., Prometheus, Grafana, Loki)
Secure, scale, and manage our cloud environments (AWS and GCP)
Design and implement automation solutions for both development and production
Manage and improve our CI/CD pipelines for fast and safe delivery
Lead best practices in infrastructure, observability, configuration management, and system hardening
Continuously assess and improve existing infrastructure in line with industry standards
Requirements:
BSc in Computer Science, Engineering, or equivalent experience
5+ years of experience as a DevOps Engineer or similar software engineering role
Proven experience with Docker and Kubernetes (EKS preferred)
Hands-on experience with monitoring and observability tools, including Prometheus, Grafana, Datadog, or similar.
Expertise in Terraform for AWS infrastructure-as-code deployments
Strong collaboration and interpersonal communication skills
Excellent analytical thinking and problem-solving mindset
Proficiency with relational databases
Solid knowledge of Python and Bash scripting
Experience with test automation an advantage
This position is open to all candidates.
 
Job ID: 8320472
Location: Tel Aviv-Yafo
Job Type: Full Time
We are the leader in Behavioral Biometrics, a technology that leverages machine learning to analyze an online user's physical and cognitive digital behavior to protect individuals online. Our company's mission is to unlock the power of behavior and deliver actionable insights to create a digital world where identity, trust, and ease coexist. Today, 32 of the world's largest 100 banks and 210 total financial institutions rely on our Connect product to combat fraud, facilitate digital transformation, and grow customer relationships. Our Client Innovation Board, an industry-led initiative including American Express, Barclays, Citi Ventures, and National Australia Bank, helps us identify creative and cutting-edge ways to leverage the unique attributes of behavior for fraud prevention. With over a decade of analyzing data, more than 80 registered patents, and unparalleled experience, we continue to innovate to solve tomorrow's problems.
Main responsibilities:
Data Architecture Direction: Provide strategic direction for our data architecture, selecting the appropriate components for various tasks. Collaborate on requirements and make final decisions on system design and implementation.
Project Management: Manage end-to-end execution of high-performance, large-scale data-driven projects, including design, implementation, and ongoing maintenance.
Cost Optimization: Monitor and optimize cloud costs associated with data infrastructure and processes.
Efficiency and Reliability: Design and build monitoring tools to ensure the efficiency, reliability, and performance of data processes and systems.
DevOps Integration: Implement and manage DevOps practices to streamline development and operations, focusing on infrastructure automation, continuous integration/continuous deployment (CI/CD) pipelines, containerization, orchestration, and infrastructure as code. Ensure scalable, reliable, and efficient deployment processes.
Our stack: Azure, GCP, Kubernetes, ArgoCD, Jenkins, Databricks, Snowflake, Airflow, RDBMS, Spark, Kafka, Micro-Services, bash, Python, SQL.
Requirements:
5+ Years of Experience: Demonstrated experience as a DevOps professional, with a strong focus on big data environments, or Data Engineer with strong DevOps skills.
Data Components Management: Experiences managing and designing data infrastructure, such as Snowflake, PostgreSQL, Kafka, Aerospike, and Object Store.
DevOps Expertise: Proven experience creating, establishing, and managing big data tools, including automation tasks. Extensive knowledge of DevOps concepts and tools, including Docker, Kubernetes, Terraform, ArgoCD, Linux OS, Networking, Load Balancing, Nginx, etc.
Programming Skills: Proficiency in programming languages such as Python and Object-Oriented Programming (OOP), emphasizing big data processing (like PySpark). Experience with scripting languages like Bash and Shell for automation tasks.
Cloud Platforms: Hands-on experience with major cloud providers such as Azure, Google Cloud, or AWS.
Preferred Qualifications:
Performance Optimization: Experience in optimizing performance for big data tools and pipelines - Big Advantage.
Security Expertise: Experience in identifying and addressing security vulnerabilities within the data platform - Big Advantage.
CI/CD Pipelines: Experience designing, implementing, and maintaining Continuous Integration/Continuous Deployment (CI/CD) pipelines - Advantage.
Data Pipelines: Experience in building big data pipelines - Advantage.
This position is open to all candidates.
 
Job ID: 8323583
24/08/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a skilled and motivated Senior DevOps Engineer with approximately 3 years of experience working with Microsoft Azure to join our dynamic team.
Responsibilities:
Manage, configure, and maintain Azure cloud infrastructure.
Automate deployment processes using Infrastructure as Code (IaC) tools such as Terraform, ARM templates, or Azure DevOps pipelines.
Collaborate with development and operations teams to optimize application deployments and scaling in Azure.
Implement monitoring, logging, and alerting solutions to ensure high availability and performance.
Troubleshoot and resolve issues within cloud environments.
Requirements:
At least 3 years of hands-on experience with the Microsoft Azure and GCP cloud platforms.
Expertise in Azure services such as Azure Resource Manager (ARM), Azure DevOps, and Azure Kubernetes Service (AKS).
Proficiency with Infrastructure as Code (IaC) tools like Terraform, ARM templates, or similar.
Experience with CI/CD pipeline creation and maintenance.
Familiarity with containerization (Docker) and container orchestration (Kubernetes).
Ability to write scripts for automation (Bash, PowerShell, Python).
Strong problem-solving skills and the ability to work in a fast-paced environment.
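The automation-scripting requirement above often boils down to small, reliable idioms. Here is a hedged sketch of one of the most common: retrying a flaky operation with exponential backoff. The attempt count, delays, and the `flaky` helper are all illustrative inventions, not anything from the posting.

```python
import time

def retry(fn, attempts=3, base_delay=0.01):
    """Call fn(), retrying on exception with exponential backoff between tries."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last failure
            time.sleep(base_delay * (2 ** attempt))

# Hypothetical flaky operation that succeeds on its third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(retry(flaky))  # succeeds on the third attempt
```

The same pattern applies whether the wrapped call is an Azure API request, a deployment step, or a health check; production code would usually also cap total wait time and retry only on specific exception types.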
Nice to Have:
Azure certifications (e.g., Azure Solutions Architect, Azure DevOps Engineer).
Knowledge of security best practices in cloud environments.
This position is open to all candidates.
 
Job ID: 8315755
27/08/2025
Location: Tel Aviv-Yafo and Netanya
Job Type: Full Time
We seek a highly skilled Senior Site Reliability Engineer to join our team! In this role, you will drive best practices, optimize operational workflows, and mentor junior engineers, fostering a culture of collaboration and innovation. This is an exciting opportunity for someone passionate about building and integrating services and systems that ensure the availability, performance, and reliability of our SaaS environments. You will lead large-scale, cross-functional initiatives and work closely with P&E engineering and Cloud teams to design, build, and maintain scalable, resilient infrastructure while championing best practices for automation, monitoring, and incident response. If you're eager to make a significant impact in a fast-paced, high-growth environment, we encourage you to apply.
As a Senior Site Reliability Engineer, you will:
Lead and groom the team towards technical solutions guided by a strong understanding of the latest and greatest technologies like Kubernetes, Helm, Terraform, and more
Advocate, build, and manage scalable and reliable services and infrastructure to support our SaaS services
Apply SRE best practices, including incident management, performance and capacity planning, and disaster recovery flows
Drive the reliability, performance, and availability of our SaaS products, ensuring service-level objectives are met or exceeded
Design, develop, and manage large-scale systems with CI/CD in mind, to support multiple production environments and use cases
Tackle large-scale production issues and bring out-of-the-box thinking to the table
Evaluate new cloud-native technologies and vendor products to continuously improve our SaaS offering.
Requirements:
5+ years of relevant DevOps or SRE experience in large-scale production environments
2+ years of infrastructure automation, configuration management, or container orchestration using Kubernetes, Docker, Terraform, and Ansible
2+ years in Python or any other advanced programming language
Strong ability to lead, design, and execute cross-organization projects
Experience in managing container and infrastructure orchestration tools (e.g. Kubernetes, Terraform)
Hands-on experience administering public clouds (AWS, GCP, or Azure)
Experience with building CI/CD pipelines for applications and microservices (Jenkins/ArgoCD)
Experience with chaos, alerting & observability tools (Gremlin, PagerDuty, Opsgenie, New Relic, Coralogix).
This position is open to all candidates.
 
Job ID: 8321545
20/08/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Senior DevOps Engineer to join our newly formed Foundations Team, a small, high-impact group responsible for the infrastructure, tools, and shared services that power our entire R&D organization.
In this role, you'll design, build, and evolve internal platform infrastructure, CI/CD systems, and developer enablement tooling. Your mission is to empower developers across the company to work autonomously by creating self-service tools, automation, and clear standards that reduce friction and increase reliability.
You'll collaborate closely with engineers across disciplines and partner with the Foundations Team Lead to shape DevOps practices that scale. This is a hands-on role for someone who thrives in high-velocity, mission-critical environments and is passionate about building tools that make developers faster, more productive, and confident in running their own services.
What You'll Do
Design and maintain scalable, developer-friendly CI/CD pipelines and deployment workflows.
Build self-service tooling and automation that enables teams to manage deployments, environments, secrets, and observability independently
Be responsible for cloud infrastructure and operations foundations
Implement and promote best practices for monitoring, logging, and alerting across services.
Operate and optimize Kubernetes-based production environments, ensuring performance, security, and stability.
Manage infrastructure using Infrastructure as Code (IaC) and ensure repeatability and traceability through tools like Terraform.
Collaborate with R&D teams to support onboarding to internal tooling and promote a culture of enablement over dependency.
Monitor cloud cost, ensuring our cloud operates efficiently.
Requirements:
4+ years of hands-on experience in DevOps or infrastructure engineering, ideally in high-velocity, mission-critical production environments.
Deep expertise in Kubernetes and containerized infrastructure, with experience deploying and managing workloads at scale.
Strong understanding of cloud infrastructure and operations, including networking, storage, compute, and security; GCP experience preferred.
Proficiency with Infrastructure as Code tools, especially Terraform, with a focus on automation and operational excellence.
Experience developing and managing CI/CD processes and tools, with a passion for improving developer workflows and release quality.
Strong debugging and problem-solving skills, with the ability to troubleshoot complex systems across the stack.
Highly self-motivated and organized, able to work independently in a fast-paced, collaborative environment.
This position is open to all candidates.
 
Job ID: 8311657
Posted 1 hour ago
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior DevOps Engineer, Data Platform
The opportunity
Technical Leadership & Architecture: Drive data infrastructure strategy and establish standardized patterns for AI/ML workloads, with direct influence on architectural decisions across data and engineering teams
DataOps Excellence: Create seamless developer experience through self-service capabilities while significantly improving data engineer productivity and pipeline reliability metrics
Cross-Functional Innovation: Lead collaboration between DevOps, Data Engineering, and ML Operations teams to unify our approach to infrastructure as code and orchestration platforms
Technology Breadth & Growth: Work across the full DataOps spectrum from pipeline orchestration to AI/ML infrastructure, with clear advancement opportunities as a senior infrastructure engineer
Strategic Business Impact: Build scalable analytics capabilities that provide direct line of sight between your infrastructure work and business outcomes through reliable, cutting-edge data solutions
What you'll be doing
Design Data-Native Cloud Solutions - Design and implement scalable data infrastructure across multiple environments using Kubernetes, orchestration platforms, and IaC to power our AI, ML, and analytics ecosystem
Define DataOps Technical Strategy - Shape the technical vision and roadmap for our data infrastructure capabilities, aligning DevOps, Data Engineering, and ML teams around common patterns and practices
Accelerate Data Engineer Experience - Spearhead improvements to data pipeline deployment, monitoring tools, and self-service capabilities that empower data teams to deliver insights faster with higher reliability
Engineer Robust Data Platforms - Build and optimize infrastructure that supports diverse data workloads from real-time streaming to batch processing, ensuring performance and cost-effectiveness for critical analytics systems
Drive DataOps Excellence - Collaborate with engineering leaders across data teams, champion modern infrastructure practices, and mentor team members to elevate how we build, deploy, and operate data systems at scale.
Requirements:
3+ years of hands-on DevOps experience building, shipping, and operating production systems.
Coding proficiency in at least one language (e.g., Python or TypeScript); able to build production-grade automation and tools.
Cloud platforms: deep experience with AWS, GCP, or Azure (core services, networking, IAM).
Kubernetes: strong end-to-end understanding of Kubernetes as a system (routing/networking, scaling, security, observability, upgrades), with proven experience integrating data-centric components (e.g., Kafka, RDS, BigQuery, Aerospike).
Infrastructure as Code: design and implement infrastructure automation using tools such as Terraform, Pulumi, or CloudFormation (modular code, reusable patterns, pipeline integration).
GitOps & CI/CD: practical experience implementing pipelines and advanced delivery using tools such as Argo CD / Argo Rollouts, GitHub Actions, or similar.
Observability: metrics, logs, and traces; actionable alerting and SLOs using tools such as Prometheus, Grafana, ELK/EFK, OpenTelemetry, or similar.
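The SLO bullet above has simple arithmetic behind it: an availability target implies an error budget, and alerting is typically framed around how fast that budget burns. A minimal sketch, with all numbers invented for illustration:

```python
def error_budget(slo, total_requests, failed_requests):
    """Return (allowed_failures, remaining_budget) for an availability SLO.

    slo: target success ratio, e.g. 0.999 for "three nines".
    """
    allowed = total_requests * (1 - slo)   # failures the SLO tolerates
    return allowed, allowed - failed_requests

# A 99.9% SLO over one million requests tolerates ~1000 failures;
# after 250 observed failures, ~750 remain in the budget for this window.
allowed, remaining = error_budget(slo=0.999, total_requests=1_000_000, failed_requests=250)
print(allowed, remaining)
```

Tools like Prometheus supply the request and failure counts; the budget math itself is what turns raw metrics into actionable alerts.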
You might also have
Data Pipeline Orchestration - Demonstrated success building and optimizing data pipeline deployment using modern tools (Airflow, Prefect, Kubernetes operators) and implementing GitOps practices for data workloads
Data Engineer Experience Focus - Track record of creating and improving self-service platforms, deployment tools, and monitoring solutions that measurably enhance data engineering team productivity
Data Infrastructure Deep Knowledge - Extensive experience designing infrastructure for data-intensive workloads including streaming platforms (Kafka, Kinesis), data processing frameworks (Spark, Flink), storage solutions, and comprehensive observability systems.
This position is open to all candidates.
 
Job ID: 8344840