Posted 7 hours ago
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a DevOps Engineer (Data Platform Group).
Main responsibilities:
Data Architecture Direction: Provide strategic direction for our data architecture, selecting the appropriate components for various tasks. Collaborate on requirements and make final decisions on system design and implementation.
Project Management: Manage end-to-end execution of high-performance, large-scale data-driven projects, including design, implementation, and ongoing maintenance.
Cost Optimization: Monitor and optimize cloud costs associated with data infrastructure and processes.
Efficiency and Reliability: Design and build monitoring tools to ensure the efficiency, reliability, and performance of data processes and systems.
DevOps Integration: Implement and manage DevOps practices to streamline development and operations, focusing on infrastructure automation, continuous integration/continuous deployment (CI/CD) pipelines, containerization, orchestration, and infrastructure as code. Ensure scalable, reliable, and efficient deployment processes.
Our stack: Azure, GCP, Kubernetes, ArgoCD, Jenkins, Databricks, Snowflake, Airflow, RDBMS, Spark, Kafka, Micro-Services, bash, Python, SQL.
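As a sketch of the cost-optimization responsibility above: a monitoring job might compare each service's latest daily spend against its trailing average and flag spikes. This is a minimal stdlib Python sketch; the service names, the 1.5x threshold, and the anomaly rule are illustrative assumptions, not part of the role description:

```python
from statistics import mean

def flag_cost_spikes(daily_costs, threshold=1.5):
    """Flag services whose latest daily cost exceeds `threshold` times
    their trailing average. `daily_costs` maps service name -> list of
    daily cost figures, oldest first. Names and threshold are illustrative."""
    flagged = {}
    for service, costs in daily_costs.items():
        if len(costs) < 2:
            continue  # not enough history to compute a baseline
        baseline = mean(costs[:-1])
        if baseline > 0 and costs[-1] > threshold * baseline:
            flagged[service] = round(costs[-1] / baseline, 2)
    return flagged
```

A real implementation would pull these figures from the cloud provider's billing export rather than an in-memory dict.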
Requirements:
5+ Years of Experience: Demonstrated experience as a DevOps professional, with a strong focus on big data environments, or Data Engineer with strong DevOps skills.
Data Components Management: Experience managing and designing data infrastructure, such as Snowflake, PostgreSQL, Kafka, Aerospike, and object stores.
DevOps Expertise: Proven experience creating, establishing, and managing big data tools, including automation tasks. Extensive knowledge of DevOps concepts and tools, including Docker, Kubernetes, Terraform, ArgoCD, Linux OS, Networking, Load Balancing, Nginx, etc.
Programming Skills: Proficiency in programming languages such as Python and Object-Oriented Programming (OOP), emphasizing big data processing (like PySpark). Experience with scripting languages like Bash and Shell for automation tasks.
Cloud Platforms: Hands-on experience with major cloud providers such as Azure, Google Cloud, or AWS.
Preferred Qualifications:
Performance Optimization: Experience in optimizing performance for big data tools and pipelines - Big Advantage.
Security Expertise: Experience in identifying and addressing security vulnerabilities within the data platform - Big Advantage.
CI/CD Pipelines: Experience designing, implementing, and maintaining Continuous Integration/Continuous Deployment (CI/CD) pipelines - Advantage.
Data Pipelines: Experience in building big data pipelines - Advantage.
This position is open to all candidates.
 
Job ID: 8509784
Posted 7 hours ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Engineer.
Main responsibilities:
Provide the direction of our data architecture. Determine the right tools for the right jobs. We collaborate on the requirements and then you call the shots on what gets built.
Manage end-to-end execution of high-performance, large-scale data-driven projects, including design, implementation, and ongoing maintenance.
Optimize and monitor the team-related cloud costs.
Design and construct monitoring tools to ensure the efficiency and reliability of data processes.
Implement CI/CD for data workflows.
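The monitoring responsibility above could, for data freshness, reduce to checking each table's last successful load time against a staleness budget. A stdlib Python sketch under stated assumptions; the table names and the two-hour budget are illustrative:

```python
from datetime import datetime, timedelta, timezone

def stale_tables(last_loaded, max_age=timedelta(hours=2), now=None):
    """Return tables whose most recent load is older than `max_age`.
    `last_loaded` maps table name -> datetime of last successful load.
    Table names and the 2-hour default are illustrative assumptions."""
    now = now or datetime.now(timezone.utc)
    return sorted(t for t, ts in last_loaded.items() if now - ts > max_age)
```

In practice the load timestamps would come from the warehouse's metadata (e.g. information-schema views) or the orchestrator, and the result would feed an alerting channel.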
Requirements:
5+ Years of Experience in data engineering and big data at large scales. - Must
Extensive experience with modern data stack - Must:
Snowflake, Delta Lake, Iceberg, BigQuery, Redshift
Kafka, RabbitMQ, or similar for real-time data processing.
Pyspark, Databricks
Strong software development background with Python/OOP and hands-on experience in building large-scale data pipelines. - Must
Hands-on experience with Docker and Kubernetes. - Must
Expertise in ETL development, data modeling, and data warehousing best practices.
Knowledge of monitoring & observability (Datadog, Prometheus, ELK, etc)
Experience with infrastructure as code, deployment automation, and CI/CD practices, using tools such as Helm, ArgoCD, Terraform, GitHub Actions, and Jenkins.
This position is open to all candidates.
 
Job ID: 8509776
Posted 24/12/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a Senior DevOps Engineer with extensive experience to lead the design, development, deployment, and operation of large-scale software solutions. This role is a critical bridge between Software Engineering and Infrastructure, demanding a deep proficiency in building and operating reliable, scalable systems within a complex Big Data environment.
What you'll be doing all day:
Own Reliability and Scalability: Lead the architecture and implementation of best practices to ensure high availability, optimal performance, and horizontal scalability of our critical systems, operating within a vast Big Data landscape.
Infrastructure as Code (IaC): Develop, maintain, and evolve our infrastructure using advanced IaC tools (e.g., Terraform or Pulumi), ensuring full automation of service deployment and management across our AWS/GCP cloud environment.
Strategic Collaboration: Partner closely with application software engineering teams to design, conduct code reviews, and implement systems that are stable, secure, and performant.
Observability: Implement and manage robust monitoring, logging, and alerting solutions to enable proactive identification and deep Root Cause Analysis (RCA) of issues.
Automation & Efficiency: Identify and eliminate manual tasks ("Toil") by automating repetitive processes to continuously improve operational efficiency and system reliability.
Production Incident Response: Participate in an on-call rotation to quickly investigate, troubleshoot, and mitigate critical production incidents, driving post-mortems to prevent recurrence.
Performance Engineering: Analyze system performance, conduct performance tuning, and execute capacity planning to meet future demands.
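The capacity-planning duty above can be illustrated with the simplest possible projection: fit the average daily growth over an observed window and extrapolate to the capacity limit. A naive stdlib sketch under a linear-growth assumption; real capacity planning would model seasonality and uncertainty:

```python
def days_until_capacity(usage_history, capacity):
    """Estimate days until `capacity` is reached, assuming linear growth.
    `usage_history` is a list of daily usage samples, oldest first.
    Returns None if usage is flat or shrinking. Illustrative sketch only."""
    if len(usage_history) < 2:
        return None
    # Average growth per day across the observed window.
    growth = (usage_history[-1] - usage_history[0]) / (len(usage_history) - 1)
    if growth <= 0:
        return None
    headroom = capacity - usage_history[-1]
    return max(0, int(headroom / growth))
```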
Requirements:
Proven Experience: 5+ years of experience as a Production Engineer, DevOps Engineer, or SRE, running and managing large-scale operations on a major cloud provider (AWS or GCP).
Coding Proficiency: 5+ years of experience developing server-side applications or tooling using languages like Python, Java, Node.js, or Go.
Deep Infrastructure Knowledge: Strong understanding of Kubernetes and container orchestration, complemented by solid knowledge of Web Servers (e.g., Nginx), Load Balancers, Caching Systems (e.g., Redis/Memcached), Databases (Relational and NoSQL), and networking fundamentals.
CI/CD & GitOps: Practical experience with modern CI/CD tools (e.g., Jenkins, GitLab CI, CircleCI) and familiarity with GitOps principles.
Communication: Excellent communication and collaboration skills to coordinate effectively across various R&D and Infrastructure groups.
Passion: Eagerness to take on complex challenges and a continuous desire to learn and implement new, cutting-edge technologies.
This position is open to all candidates.
 
Job ID: 8471349
Posted 11/12/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior DevOps Engineer, Data Platform
The opportunity
Technical Leadership & Architecture: Drive data infrastructure strategy and establish standardized patterns for AI/ML workloads, with direct influence on architectural decisions across data and engineering teams
DataOps Excellence: Create seamless developer experience through self-service capabilities while significantly improving data engineer productivity and pipeline reliability metrics
Cross-Functional Innovation: Lead collaboration between DevOps, Data Engineering, and ML Operations teams to unify our approach to infrastructure as code and orchestration platforms
Technology Breadth & Growth: Work across the full DataOps spectrum from pipeline orchestration to AI/ML infrastructure, with clear advancement opportunities as a senior infrastructure engineer
Strategic Business Impact: Build scalable analytics capabilities that provide direct line of sight between your infrastructure work and business outcomes through reliable, cutting-edge data solutions
What you'll be doing
Design Data-Native Cloud Solutions - Design and implement scalable data infrastructure across multiple environments using Kubernetes, orchestration platforms, and IaC to power our AI, ML, and analytics ecosystem
Define DataOps Technical Strategy - Shape the technical vision and roadmap for our data infrastructure capabilities, aligning DevOps, Data Engineering, and ML teams around common patterns and practices
Accelerate Data Engineer Experience - Spearhead improvements to data pipeline deployment, monitoring tools, and self-service capabilities that empower data teams to deliver insights faster with higher reliability
Engineer Robust Data Platforms - Build and optimize infrastructure that supports diverse data workloads from real-time streaming to batch processing, ensuring performance and cost-effectiveness for critical analytics systems
Drive DataOps Excellence - Collaborate with engineering leaders across data teams, champion modern infrastructure practices, and mentor team members to elevate how we build, deploy, and operate data systems at scale.
Requirements:
3+ years of hands-on DevOps experience building, shipping, and operating production systems.
Coding proficiency in at least one language (e.g., Python or TypeScript); able to build production-grade automation and tools.
Cloud platforms: deep experience with AWS, GCP, or Azure (core services, networking, IAM).
Kubernetes: strong end-to-end understanding of Kubernetes as a system (routing/networking, scaling, security, observability, upgrades), with proven experience integrating data-centric components (e.g., Kafka, RDS, BigQuery, Aerospike).
Infrastructure as Code: design and implement infrastructure automation using tools such as Terraform, Pulumi, or CloudFormation (modular code, reusable patterns, pipeline integration).
GitOps & CI/CD: practical experience implementing pipelines and advanced delivery using tools such as Argo CD / Argo Rollouts, GitHub Actions, or similar.
Observability: metrics, logs, and traces; actionable alerting and SLOs using tools such as Prometheus, Grafana, ELK/EFK, OpenTelemetry, or similar.
You might also have
Data Pipeline Orchestration - Demonstrated success building and optimizing data pipeline deployment using modern tools (Airflow, Prefect, Kubernetes operators) and implementing GitOps practices for data workloads
Data Engineer Experience Focus - Track record of creating and improving self-service platforms, deployment tools, and monitoring solutions that measurably enhance data engineering team productivity
Data Infrastructure Deep Knowledge - Extensive experience designing infrastructure for data-intensive workloads including streaming platforms (Kafka, Kinesis), data processing frameworks (Spark, Flink), storage solutions, and comprehensive observability systems.
This position is open to all candidates.
 
Job ID: 8454296
Location: Tel Aviv-Yafo
Job Type: Full Time
The Performance Marketing Analytics team is seeking a highly skilled Senior Data Platform Engineer to establish, operate, and maintain our dedicated Performance Marketing Data Mart within the Snowflake Cloud Data Platform. This is a critical, high-autonomy role responsible for the end-to-end data lifecycle, ensuring data quality, operational excellence, and governance within the new environment. This role will directly enable the Performance Marketing team's vision for data-driven marketing and increased ownership of our analytical infrastructure.
Responsibilities
Snowflake Environment Management
Administer the Snowflake account (roles, permissions, cost monitoring, performance tuning).
Implement best practices for security, PII handling, and data governance.
Act as the subject matter expert for Snowflake within the team.
DevOps & Model Engineering
Establish and manage the development and production environments.
Maintain CI/CD pipeline using GitLab to automate the build, test, and deployment process.
Implement standard engineering practices such as code testing and commit reviews to prevent tech debt.
Data Operations & Reliability
Monitor pipeline executions to ensure timely, accurate, and reliable data.
Set up alerting, incident management, and SLAs for marketing data operations.
Troubleshoot and resolve platform incidents quickly to minimize business disruption.
Tooling & Integration
Support the integration of BI, monitoring, and orchestration tools
Evaluate and implement observability and logging solutions for platform reliability.
Governance & Compliance
Ensure alignment with Entain data governance and compliance policies.
Document operational procedures, platform configurations, and security controls.
Act as the team liaison with procurement, infrastructure, and security teams for platform-related topics.
Collaboration & Enablement
Work closely with BI, analysts and data engineers, ensuring the platform supports their evolving needs.
Provide guidance on best practices for query optimization, cost efficiency, and secure data access.
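The data-operations duties above (monitoring pipeline executions, alerting, SLAs) boil down to a check like the following. A stdlib Python sketch; the run-record fields, pipeline names, and SLA budgets are illustrative assumptions:

```python
def sla_breaches(runs, sla_minutes):
    """Given pipeline run records (dicts with 'pipeline', 'duration_min',
    'ok') and per-pipeline SLA budgets in minutes, return alert messages
    for failed runs or runs over budget. Field names are illustrative."""
    alerts = []
    for run in runs:
        budget = sla_minutes.get(run["pipeline"])
        if not run["ok"]:
            alerts.append(f"{run['pipeline']}: run failed")
        elif budget is not None and run["duration_min"] > budget:
            alerts.append(
                f"{run['pipeline']}: {run['duration_min']}min "
                f"exceeded {budget}min SLA"
            )
    return alerts
```

In a GitLab-based setup like the one described, a scheduled job could run this check against the orchestrator's run history and page on a non-empty result.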
Requirements:
At least 4 years of experience in data engineering, DevOps, or data platform operations roles.
Expert Proficiency in Snowflake: 2+ years of deep, hands-on experience with Snowflake setup, administration, security, warehouse management, performance tuning, and cost management.
Programming: Expertise in SQL and proficiency in Python for data transformation and operational scripting.
Experience implementing CI/CD pipelines (preferably GitLab) for data/analytics workloads.
Hands-on experience with modern data environments (cloud warehouses, dbt, orchestration tools).
Ability to work effectively in a fast-paced and dynamic environment.
Bachelor's degree in a relevant field.
This position is open to all candidates.
 
Job ID: 8471910
Posted 21/12/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a talented and motivated DevOps Engineer to join our Cloud Engineering team. You'll play a key role in developing, maintaining, and scaling our SaaS product across AWS, Azure, and GCP. This includes managing our deployment packages (Terraform, CloudFormation, Azure Bicep), ensuring seamless integrations with customer environments, and enabling secure, reliable data scanning at scale.

As part of our DevOps team, you'll not only drive automation and infrastructure management, but also participate in customer-facing installation meetings, helping customers deploy and configure our platform successfully.

Responsibilities:
Design, develop, and maintain cloud infrastructure on AWS, Azure, and GCP.
Manage Infrastructure as Code using Terraform, CloudFormation, and Azure Bicep.
Build, scale, and maintain Kubernetes clusters and containerized applications.
Implement automation for deployment, monitoring, and incident response.
Write and maintain Python and Bash scripts for automation, integrations, and tooling.
Troubleshoot networking, connectivity, and security issues (TCP/IP, UDP, VPNs).
Collaborate with engineering and product teams to optimize deployments.
Support customer onboarding by assisting with setup and deployment meetings.
Continuously improve CI/CD pipelines and operational processes.
Requirements:
3-5+ years of experience in DevOps, Cloud Engineering, or related roles.
Hands-on expertise with at least two major cloud providers (AWS, Azure, GCP; experience with all three is a plus).
Strong programming skills in Python and Bash (automation, tooling, scripts).
Proficiency in Linux systems administration.
Strong experience with Kubernetes, Docker, and container orchestration.
Deep understanding of networking fundamentals (TCP/IP, UDP, DNS, VPNs, routing, firewalls).
Experience with IaC tools: Terraform, CloudFormation, Azure Bicep.
Familiarity with CI/CD tools (GitHub Actions, GitLab CI, or similar).
Excellent problem-solving and troubleshooting skills.
Excellent communication skills, with the ability to work directly with customers.
Observability stack experience (Datadog, Prometheus, Grafana, ELK, etc.).

Nice to Have:
Experience in SaaS environments or multi-cloud deployments.
Security best practices and compliance knowledge (IAM, RBAC, data protection).
This position is open to all candidates.
 
Job ID: 8465449
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a Senior Data Infra Engineer. You will be responsible for designing and building all data and ML pipelines, data tools, and cloud infrastructure required to transform massive, fragmented data into a format that supports processes and standards. Your work directly empowers business stakeholders to gain comprehensive visibility, automate key processes, and drive strategic impact across the company.
Responsibilities
Design and Build Data Infrastructure: Design, plan, and build all aspects of the platform's data, ML pipelines, and supporting infrastructure.
Optimize Cloud Data Lake: Build and optimize an AWS-based Data Lake using cloud architecture best practices for partitioning, metadata management, and security to support enterprise-scale operations.
Lead Project Delivery: Lead end-to-end data projects from initial infrastructure design through to production monitoring and optimization.
Solve Integration Challenges: Implement optimal ETL/ELT patterns and query techniques to solve challenging data integration problems sourced from structured and unstructured data.
Requirements:
Experience: 5+ years of hands-on experience designing and maintaining big data pipelines in on-premises or hybrid cloud SaaS environments.
Programming & Databases: Proficiency in one or more programming languages (Python, Scala, Java, or Go) and expertise in both SQL and NoSQL databases.
Engineering Practice: Proven experience with software engineering best practices, including testing, code reviews, design documentation, and CI/CD.
AWS Experience: Experience developing data pipelines and maintaining data lakes, specifically on AWS.
Streaming & Orchestration: Familiarity with Kafka and workflow orchestration tools like Airflow.
Preferred Qualifications
Containerization & DevOps: Familiarity with Docker, Kubernetes (K8S), and Terraform.
Modern Data Stack: Familiarity with the following tools is an advantage: Kafka, Databricks, Airflow, Snowflake, MongoDB, and open table formats (Iceberg/Delta).
ML/AI Infrastructure: Experience building and designing ML/AI-driven production infrastructures and pipelines.
This position is open to all candidates.
 
Job ID: 8478237
Posted 1 day ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a strong Senior DevOps Engineer to join the hunt!

Responsibilities:

Actively participate in hands-on technical tasks, contributing to the development, deployment, and maintenance of our observability platform.
Design, implement, secure and maintain the infrastructure, CI/CD pipelines, and deployment processes to support the platform on SaaS/ST/On-Prem deployments.
Take full end-to-end responsibility for all infrastructure-related projects.
Collaborate with the development, SE, QA, and product teams to ensure smooth integration, testing, and deployment of software releases.
Drive automation and scalability efforts to optimize system performance, reliability, and availability.
Continuously monitor and improve the observability, monitoring, and logging systems to ensure the stability and performance of the platform.
Stay up to date with the latest industry trends and technologies, and evaluate their potential impact and benefits.
Manage and maintain environments across AWS and other public cloud providers.
Requirements:
5+ years of experience in a DevOps role, with a strong background in infrastructure management, CI/CD methodologies, and automation.
Strong hands-on experience in both cloud and on-premise environments, such as AWS, Azure, GCP, and working with on-premise infrastructure.
Solid understanding of containerization technologies (e.g., Docker), Kubernetes, and related orchestration and deployment tools (e.g., Helm).
Proficiency in infrastructure management and deployment using cloud platforms and on-premise solutions.
Hands-on experience with GitOps workflows and tools like ArgoCD for managing Kubernetes deployments.
Experience with infrastructure-as-code tools like Terraform or CloudFormation for both cloud and on-premise environments.
Proficiency in scripting languages like Python, JavaScript, Go or Bash.
Excellent knowledge of Linux-based systems.
Hands-on experience with monitoring and observability tools like Prometheus, Grafana, DataDog or ELK stack for both cloud and on-premise environments.
Familiarity with agile development practices and the ability to work in a fast-paced, collaborative environment.
Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
This position is open to all candidates.
 
Job ID: 8508558
Posted 24/12/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a Senior Production Engineer with extensive experience to lead the design, development, deployment, and operation of large-scale software solutions. This role is a critical bridge between Software Engineering and Infrastructure, demanding a deep proficiency in building and operating reliable, scalable systems within a complex Big Data environment.
What You'll Be Doing All Day:
Own Reliability and Scalability: Lead the architecture and implementation of best practices to ensure high availability, optimal performance, and horizontal scalability of our critical systems, operating within a vast Big Data landscape.
Infrastructure as Code (IaC): Develop, maintain, and evolve our infrastructure using advanced IaC tools (e.g., Terraform or Pulumi), ensuring full automation of service deployment and management across our AWS/GCP cloud environment.
Strategic Collaboration: Partner closely with application software engineering teams to design, conduct code reviews, and implement systems that are stable, secure, and performant.
Observability: Implement and manage robust monitoring, logging, and alerting solutions to enable proactive identification and deep Root Cause Analysis (RCA) of issues.
Automation & Efficiency: Identify and eliminate manual tasks ("Toil") by automating repetitive processes to continuously improve operational efficiency and system reliability.
Production Incident Response: Participate in an on-call rotation to quickly investigate, troubleshoot, and mitigate critical production incidents, driving post-mortems to prevent recurrence.
Performance Engineering: Analyze system performance, conduct performance tuning, and execute capacity planning to meet future demands.
Requirements:
Proven Experience: 5+ years of experience as a Production Engineer, DevOps Engineer, or SRE, running and managing large-scale operations on a major cloud provider (AWS or GCP).
Coding Proficiency: 5+ years of experience developing server-side applications or tooling using languages like Python, Java, Node.js, or Go.
Deep Infrastructure Knowledge: Strong understanding of Kubernetes and container orchestration, complemented by solid knowledge of Web Servers (e.g., Nginx), Load Balancers, Caching Systems (e.g., Redis/Memcached), Databases (Relational and NoSQL), and networking fundamentals.
CI/CD & GitOps: Practical experience with modern CI/CD tools (e.g., Jenkins, GitLab CI, CircleCI) and familiarity with GitOps principles.
Communication: Excellent communication and collaboration skills to coordinate effectively across various R&D and Infrastructure groups.
Passion: Eagerness to take on complex challenges and a continuous desire to learn and implement new, cutting-edge technologies.
This position is open to all candidates.
 
Job ID: 8471297
Posted 31/12/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're on a mission to revolutionize behavioral health care. Our cutting-edge platform transforms behavioral health conversations into automated documentation and actionable insights. By doing so, we empower therapists, enhance patient care, and drive better outcomes. As we continue to grow, we seek an experienced DevOps Engineer to join our DevOps team.

You'll play a pivotal role in shaping our cloud infrastructure and fostering a DevOps-oriented culture. Here's what you'll be doing:

Build and design efficient and high-quality cloud architectures.
Ensure seamless integration of DevOps principles into our organizational culture.
Work closely with our development teams during architecture planning and design phases.
Champion DevOps practices and facilitate continuous learning across various R&D domains.
Mentor and guide developers on best practices for efficient and automated workflows.
Manage, monitor, and scale our distributed highly available SaaS platform.
Troubleshoot issues and optimize performance.
Utilize major technologies to deliver high-scale SaaS and big data environments on Public Clouds.
Build, implement, and manage our IaC methodology using Terraform.
Play a key role in our FinOps activities, monitoring, maintaining, and reducing cloud costs.
The DevOps Engineer will be part of our cross-organization DevOps team.

This is a unique opportunity to join a startup that has a meaningful impact on the well-being and mental health of thousands.
Requirements:
4+ years of experience in DevOps roles.
At least two years of experience with Docker, Kubernetes, and Helm.
3+ years of coding/scripting experience in Python, Go, or Bash.
Linux systems administration experience.
Experience with cloud providers like AWS, Google Cloud, or Azure.

3+ years of experience with CI/CD tools such as GitHub Actions, GitLab, Jenkins, CircleCI, TeamCity, or similar.
Experience with IaC principles and tools such as Terraform.
Experience with alerting & monitoring systems such as Datadog, Splunk, New Relic, Prometheus, or similar.
Experience with cloud networking and security: connectivity, load balancers, DNS.
Strong analytical and troubleshooting skills, with the ability to solve complex problems.
This position is open to all candidates.
 
Job ID: 8481218
Posted 5 hours ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Data Engineer
About us:
A pioneering health-tech startup on a mission to revolutionize weight loss and well-being. Our innovative metabolic measurement device provides users with a comprehensive understanding of their metabolism, empowering them with personalized, data-driven insights to make informed lifestyle choices.
Data is at the core of everything we do. We collect and analyze vast amounts of user data from our device and app to provide personalized recommendations, enhance our product, and drive advancements in metabolic health research. As we continue to scale, our data infrastructure is crucial to our success and our ability to empower our users.
About the Role:
As a Senior Data Engineer, you'll be more than just a coder - you'll be the architect of our data ecosystem. We're looking for someone who can design scalable, future-proof data pipelines and connect the dots between DevOps, backend engineers, data scientists, and analysts.
You'll lead the design, build, and optimization of our data infrastructure, from real-time ingestion to supporting machine learning operations. Every choice you make will be data-driven and cost-conscious, ensuring efficiency and impact across the company.
Beyond engineering, you'll be a strategic partner and problem-solver, sometimes diving into advanced analysis or data science tasks. Your work will directly shape how we deliver innovative solutions and support our growth at scale.
Responsibilities:
Design and Build Data Pipelines: Architect, build, and maintain our end-to-end data pipeline infrastructure to ensure it is scalable, reliable, and efficient.
Optimize Data Infrastructure: Manage and improve the performance and cost-effectiveness of our data systems, with a specific focus on optimizing pipelines and usage within our Snowflake data warehouse. This includes implementing FinOps best practices to monitor, analyze, and control our data-related cloud costs.
Enable Machine Learning Operations (MLOps): Develop the foundational infrastructure to streamline the deployment, management, and monitoring of our machine learning models.
Support Data Quality: Optimize ETL processes to handle large volumes of data while ensuring data quality and integrity across all our data sources.
Collaborate and Support: Work closely with data analysts and data scientists to support complex analysis, build robust data models, and contribute to the development of data governance policies.
Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field.
Experience: 5+ years of hands-on experience as a Data Engineer or in a similar role.
Data Expertise: Strong understanding of data warehousing concepts, including a deep familiarity with Snowflake.
Technical Skills:
Proficiency in Python and SQL.
Hands-on experience with workflow orchestration tools like Airflow.
Experience with real-time data streaming technologies like Kafka.
Familiarity with container orchestration using Kubernetes (K8s) and dependency management with Poetry.
Cloud Infrastructure: Proven experience with AWS cloud services (e.g., EC2, S3, RDS).
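To illustrate the real-time streaming skills listed above: the core of a windowed aggregation (the kind of computation a consumer of a Kafka topic in such a pipeline would run) can be sketched in plain Python. The event shape and 60-second tumbling window are illustrative assumptions:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_sec=60):
    """Group (timestamp_sec, key) events into fixed tumbling windows and
    count occurrences per key. Pure-stdlib sketch; in production this
    logic would live in a stream processor consuming from Kafka."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Align each timestamp to the start of its window.
        windows[ts - ts % window_sec][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}
```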
This position is open to all candidates.
 
Job ID: 8510072