Posted 4 hours ago
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Software Engineer - Distributed Systems Quality Platform
About The Position
As a Senior Software Engineer, you'll work hands-on on the systems and frameworks that test, stress, and validate complex distributed infrastructure under real-world conditions. You'll help design and build automated environments that simulate scale, concurrency, and failure scenarios, and you'll contribute to evolving how we ensure reliability and correctness in modern infrastructure systems.
This role is ideal for engineers with a strong distributed systems background who enjoy deep technical problem-solving, working close to the system, and building tools that improve quality, stability, and confidence at scale.
What You'll Do
Design and implement core components of a distributed testing infrastructure and quality platform.
Build automated frameworks to validate functionality, performance, and resilience at scale.
Collaborate closely with infrastructure, storage, and platform teams to ensure quality is built into the development lifecycle.
Contribute to improving tooling, test coverage, and engineering best practices across the organization.
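One common pattern behind such validation frameworks is fault injection: deliberately introduce transient failures and assert that the system under test still converges to a correct result. A minimal, stdlib-only sketch of the idea (function names are illustrative, not an actual internal API):

```python
def flaky_service(failures_left):
    """Return a callable that raises ConnectionError a fixed number of
    times before succeeding -- a stand-in for an injected transient fault."""
    state = {"left": failures_left}
    def call():
        if state["left"] > 0:
            state["left"] -= 1
            raise ConnectionError("injected transient fault")
        return "ok"
    return call

def call_with_retries(op, max_attempts=5):
    """Retry loop whose convergence is the property a resilience test
    asserts: the caller succeeds despite a bounded number of faults."""
    for attempt in range(1, max_attempts + 1):
        try:
            return op(), attempt
        except ConnectionError:
            if attempt == max_attempts:
                raise
```

A real platform would inject faults at the network or process level and assert much richer invariants, but the test shape, inject then assert convergence, is the same.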
Requirements:
Strong experience (5+ years) building or working on large-scale distributed systems in areas such as storage, networking, cloud infrastructure, or backend platforms.
Solid understanding of concurrency, system correctness, and reliability in production systems.
Hands-on programming experience in one or more of the following languages: Go, C++, Rust, or Python.
Experience building test frameworks, infrastructure tooling, or internal platforms is a strong advantage.
Curiosity and interest in modern approaches to testing, automation, and system validation (including AI-assisted techniques).
Ability to work independently on complex technical problems while collaborating effectively with cross-functional teams.
Nice to Have
Experience with observability, performance testing, fault injection, or chaos engineering.
Familiarity with CI/CD pipelines for large-scale systems.
Exposure to AI/ML-driven testing or automation tools.
This position is open to all candidates.
 
Job ID: 8473987
Posted 4 hours ago
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Staff Software Engineer - Distributed Systems Quality Platform
About the role:
We're building the next-generation testing platform that will redefine how complex distributed systems are validated at scale. Our mission is to create an automated, intelligent testing environment that continuously validates the correctness, performance, and resilience of the Data Platform across every layer of the stack. This is an opportunity to reinvent how next-gen infrastructure is tested, leveraging both proven engineering techniques and the latest AI-driven approaches. You'll work on solving some of the hardest challenges in large-scale systems engineering: How do you validate correctness in deeply parallel I/O systems? How do you uncover reliability gaps that appear only under extreme concurrency? How can AI tools help us automatically generate test scenarios, analyze failures, and predict weak spots before they occur?
As a Staff Software Engineer, you'll be at the center of these questions. You'll set the technical direction, design testing frameworks that scale with our product, and partner across engineering to raise the bar for quality.
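One classic way to probe correctness under extreme concurrency is an invariant-based stress test: hammer a shared resource from many threads and assert the end state. A deliberately tiny, stdlib-only illustration of the shape (the real problem targets distributed I/O paths, not an in-process counter):

```python
import threading

def stress_counter(n_threads=8, increments=10_000):
    """Hammer a shared counter from many threads; a correctness test
    asserts the invariant: final count == n_threads * increments."""
    count = 0
    lock = threading.Lock()
    def worker():
        nonlocal count
        for _ in range(increments):
            with lock:  # drop this lock and the invariant breaks under load
                count += 1
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return count
```

The same pattern, concurrent load plus a checkable invariant, scales up to validating parallel storage and I/O systems, where the invariants concern durability and ordering rather than a counter.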
What You'll Do:
Design and implement a next-generation testing and quality framework for distributed systems, enabling automated validation of functionality, performance, and resilience.
Leverage AI-driven tools to scale the testing environment, including automated test generation, intelligent workload synthesis, and anomaly detection.
Create end-to-end testing environments that simulate real-world scale, stress, and failure conditions.
Define and drive the technical strategy for testing across the organization, setting the standard for quality engineering.
Mentor and influence engineers across teams, fostering a culture of technical rigor and reliability obsession.
Collaborate with product and infrastructure teams to ensure the testing platform is deeply integrated into our development lifecycle.
Requirements:
Extensive experience (8+ years) designing and building large-scale distributed systems in domains like storage, networking, or cloud infrastructure.
Deep knowledge of system correctness, concurrency, and reliability, and how to validate them in practice.
Strong programming skills in languages such as Go, C++, Rust, or Python.
Proven ability to design frameworks or platforms that enable other engineers to move faster while improving quality.
Experience with or interest in AI/ML-driven approaches to testing and validation.
Track record of technical leadership influencing beyond your immediate team, setting vision, and mentoring others.
This position is open to all candidates.
 
Job ID: 8474002
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking an experienced Solutions Data Engineer who possesses both technical depth and strong interpersonal skills to partner with internal and external teams to develop scalable, flexible, and cutting-edge solutions. Solutions Engineers collaborate with operations and business development to help craft solutions to customer business problems.
A Solutions Engineer works to balance various aspects of the project, from safety to design. Additionally, a Solutions Engineer researches advanced technology regarding best practices in the field and seeks cost-effective solutions.
Job Description:
We're looking for a Solutions Engineer with deep experience in Big Data technologies, real-time data pipelines, and scalable infrastructure; someone who's been delivering critical systems under pressure and knows what it takes to bring complex data architectures to life. This isn't just about checking boxes on tech stacks; it's about solving real-world data problems, collaborating with smart people, and building robust, future-proof solutions.
In this role, you'll partner closely with engineering, product, and customers to design and deliver high-impact systems that move, transform, and serve data at scale. You'll help customers architect pipelines that are not only performant and cost-efficient but also easy to operate and evolve.
We want someone who's comfortable switching hats between low-level debugging, high-level architecture, and communicating clearly with stakeholders of all technical levels.
Key Responsibilities:
Build distributed data pipelines using technologies like Kafka, Spark (batch & streaming), Python, Trino, Airflow, and S3-compatible data lakes, designed for scale, modularity, and seamless integration across real-time and batch workloads.
Design, deploy, and troubleshoot hybrid cloud/on-prem environments using Terraform, Docker, Kubernetes, and CI/CD automation tools.
Implement event-driven and serverless workflows with precise control over latency, throughput, and fault tolerance trade-offs.
Create technical guides, architecture docs, and demo pipelines to support onboarding, evangelize best practices, and accelerate adoption across engineering, product, and customer-facing teams.
Integrate data validation, observability tools, and governance directly into the pipeline lifecycle.
Own end-to-end platform lifecycle: ingestion → transformation → storage (Parquet/ORC on S3) → compute layer (Trino/Spark).
Benchmark and tune storage backends (S3/NFS/SMB) and compute layers for throughput, latency, and scalability using production datasets.
Work cross-functionally with R&D to push performance limits across interactive, streaming, and ML-ready analytics workloads.
Operate and debug object store-backed data lake infrastructure, enabling schema-on-read access, high-throughput ingestion, advanced searching strategies, and performance tuning for large-scale workloads.
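As one illustration of the fault-tolerance trade-offs in the event-driven and serverless work above: an event consumer typically buys reliability with retries and a dead-letter path, at the cost of added latency and reordering. A stdlib-only sketch (names are hypothetical, not a real pipeline API):

```python
from collections import deque

def run_pipeline(events, process, max_retries=3):
    """Tiny event-driven loop with at-least-once delivery: failed events
    are re-queued up to max_retries, then land in a dead-letter list."""
    queue = deque((event, 0) for event in events)
    done, dead_letter = [], []
    while queue:
        event, tries = queue.popleft()
        try:
            done.append(process(event))
        except Exception:
            if tries + 1 < max_retries:
                queue.append((event, tries + 1))  # retry later (adds latency)
            else:
                dead_letter.append(event)         # give up, keep for inspection
    return done, dead_letter
```

Real systems push the same decisions (retry budget, ordering, dead-lettering) into Kafka consumer groups or serverless triggers, but the trade-off surface is the one this loop makes explicit.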
Requirements:
2-4 years in software, solutions, or infrastructure engineering, with 2-4 years focused on building and maintaining large-scale data pipelines and storage/database solutions.
Proficiency in Trino, Spark (Structured Streaming & batch) and solid working knowledge of Apache Kafka.
Coding background in Python (must-have); familiarity with Bash and scripting tools is a plus.
Deep understanding of data storage architectures including SQL, NoSQL, and HDFS.
Solid grasp of DevOps practices, including containerization (Docker), orchestration (Kubernetes), and infrastructure provisioning (Terraform).
Experience with distributed systems, stream processing, and event-driven architecture.
Hands-on familiarity with benchmarking and performance profiling for storage systems, databases, and analytics engines.
Excellent communication skills; you'll be expected to explain your thinking clearly, guide customer conversations, and collaborate across engineering and product teams.
This position is open to all candidates.
 
Job ID: 8442983
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Software Engineer
Tel Aviv
About the role:
As a Senior Software Engineer on the Data Platform, you'll be part of one of our most strategic engineering groups, tasked with building the core data ingestion and processing infrastructure that powers our entire platform. The team is responsible for handling billions of cloud signals daily, ensuring scalability, reliability, and efficiency across our architecture.
You'll work on large-scale distributed systems, own critical components of the cloud security data pipeline, and drive architectural decisions that influence how data is ingested, normalized, and made available to product teams across the organization. We're currently in the midst of a major architectural transformation, evolving our ingestion and processing layers to support real-time pipelines, improved observability, and greater horizontal scalability, and we're looking for experienced engineers who are eager to make a foundational impact!
Our Stack: Python, Go, Rust, SingleStore, Postgres, ElasticSearch, Redis, Kafka, AWS
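To make the ingestion-and-normalization idea concrete: the core step is mapping provider-specific event shapes onto one canonical schema so every downstream consumer sees the same fields. A simplified sketch; the field mappings are illustrative (loosely CloudTrail/Azure-activity-like), not the team's actual schema:

```python
def normalize_signal(raw, source):
    """Map a provider-specific cloud event onto one canonical shape.
    The per-provider field names below are illustrative stand-ins."""
    mappings = {
        "aws":   {"ts": "eventTime", "actor": "userIdentity", "op": "eventName"},
        "azure": {"ts": "time",      "actor": "caller",       "op": "operationName"},
    }
    fields = mappings[source]
    normalized = {canonical: raw.get(provider_field)
                  for canonical, provider_field in fields.items()}
    normalized["source"] = source
    return normalized
```

At billions of signals per day the interesting work is doing this at scale with schema evolution and backpressure, but the canonical-schema contract is the part product teams depend on.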
On a typical day you'll:
Write clean, concise code that is stable, extensible, and unit-tested appropriately
Write production-ready code that meets design specifications, anticipates edge cases, and accounts for scalability
Diagnose complex issues, evaluate, recommend and execute the best solution
Implement new requirements within our Agile delivery methodology while following our established architectural principles
Lead initiatives end to end from design and planning to implementation and deployment, while aligning cross-functional teams and ensuring technical excellence
Test software to ensure proper and efficient execution and adherence to business and technical requirements
Provide input into the architecture and design of the product, collaborating with the team in solving problems the right way
Develop expertise of AWS, Azure, and GCP products and technologies.
Requirements:
Bachelor's degree in Computer Science, Engineering, or relevant experience
5+ years of professional software development experience
Proven experience building data-intensive systems at scale
Experience in working with micro-service architecture & cloud-native services
Solid understanding of software design principles, concurrency, synchronization, memory management, data structures, algorithms, etc
Hands-on experience with databases such as SingleStore, Postgres, Elasticsearch, Redis
Experience with Python / Go (Advantage)
Experience with distributed data processing tools like Kafka (Advantage).
This position is open to all candidates.
 
Job ID: 8466029
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
This role has been designed as Hybrid with an expectation that you will work on average 2 days per week from an HPE office.
Job Description:
We are looking for a highly skilled Senior Data Engineer with strong architectural expertise to design and evolve our next-generation data platform. You will define the technical vision, build scalable and reliable data systems, and guide the long-term architecture that powers analytics, operational decision-making, and data-driven products across the organization.
This role is both strategic and hands-on. You will evaluate modern data technologies, define engineering best practices, and lead the implementation of robust, high-performance data solutions, including the design, build, and lifecycle management of data pipelines that support batch, streaming, and near-real-time workloads.
What You'll Do
Architecture & Strategy
Own the architecture of our data platform, ensuring scalability, performance, reliability, and security.
Define standards and best practices for data modeling, transformation, orchestration, governance, and lifecycle management.
Evaluate and integrate modern data technologies and frameworks that align with our long-term platform strategy.
Collaborate with engineering and product leadership to shape the technical roadmap.
Engineering & Delivery
Design, build, and manage scalable, resilient data pipelines for batch, streaming, and event-driven workloads.
Develop clean, high-quality data models and schemas to support analytics, BI, operational systems, and ML workflows.
Implement data quality, lineage, observability, and automated testing frameworks.
Build ingestion patterns for APIs, event streams, files, and third-party data sources.
Optimize compute, storage, and transformation layers for performance and cost efficiency.
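A small illustration of the "data quality ... automated testing" point above: gate rows against a declared schema before they flow downstream, quarantining offenders rather than silently dropping them. A stdlib-only sketch with hypothetical field names:

```python
def validate_rows(rows, schema):
    """Minimal data-quality gate: each row must carry every required
    field with the declared type; offenders are quarantined for review."""
    good, rejected = [], []
    for row in rows:
        errors = [field for field, expected_type in schema.items()
                  if field not in row or not isinstance(row[field], expected_type)]
        (rejected if errors else good).append(row)
    return good, rejected
```

Production frameworks (dbt tests, Great Expectations, and the like) add lineage, expectations-as-config, and reporting, but this accept/quarantine split is the underlying contract.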
Leadership & Collaboration
Serve as a senior technical leader and mentor within the data engineering team.
Lead architecture reviews, design discussions, and cross-team engineering initiatives.
Work closely with analysts, data scientists, software engineers, and product owners to define and deliver data solutions.
Communicate architectural decisions and trade-offs to technical and non-technical stakeholders.
Requirements:
6-10+ years of experience in Data Engineering, with demonstrated architectural ownership.
Expert-level experience with Snowflake (mandatory), including performance optimization, data modeling, security, and ecosystem components.
Expert proficiency in SQL and strong Python skills for pipeline development and automation.
Experience with modern orchestration tools (Airflow, Dagster, Prefect, or equivalent).
Strong understanding of ELT/ETL patterns, distributed processing, and data lifecycle management.
Familiarity with streaming/event technologies (Kafka, Kinesis, Pub/Sub, etc.).
Experience implementing data quality, observability, and lineage solutions.
Solid understanding of cloud infrastructure (AWS, GCP, or Azure).
Strong background in DataOps practices: CI/CD, testing, version control, automation.
Proven leadership in driving architectural direction and mentoring engineering teams.
Nice to Have
Experience with data governance or metadata management tools.
Hands-on experience with DBT, including modeling, testing, documentation, and advanced features.
Exposure to machine learning pipelines, feature stores, or MLOps.
Experience with Terraform, CloudFormation, or other IaC tools.
Background designing systems for high scale, security, or regulated environments.
This position is open to all candidates.
 
Job ID: 8461496
16/12/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a highly talented technical individual to join the Cortex XDR infrastructure team.
The team is responsible for developing automation infrastructure and various cloud-based tools and platforms used across the research, development, and QA departments to ensure the functionality, stability, and quality of the XDR product, alongside the efficiency of the infrastructure and processes used to build, test, and deploy on various clouds and distributions. We believe the platforms and infrastructure the team provides are a critical part of our department's progress and one of our key growth factors.
As a Platform Engineer you will play a pivotal role in enhancing our development and automation experience by pushing forward modern automation approaches, eliminating manual effort, and introducing new development operations for continuous integration, scale, and durability using advanced cloud services. Your expertise will be applied in areas such as infrastructure development, cloud-based automation, serverless infrastructure, automation as a service, technical guidance, and promoting an infrastructure/configuration-as-code and GitOps approach across the development departments.
To succeed in this role, you should have a strong foundation in modern cloud-based automation methodologies and a comprehensive understanding of industry best practices, especially around redundancy and scalability of large systems and the ability to control them via SCM-based declarative configs. You should be familiar with modern public cloud approaches and serverless architectures, including virtualization, containers, and container orchestration across multiple Kubernetes-based deployments. You should be comfortable engaging in complex technical discussions and advocating for optimal solutions in a fast-paced, growing environment as part of our quest for continuous improvement.
Your Impact
Utilize modern technologies including serverless cloud services, Kubernetes, and Terraform, among others, using them in an infrastructure/configuration-as-code GitOps approach to manage everything via source code and continuous integration processes
Design and implement (hands on) the next generation of platforms, automation frameworks, SDKs, and tools to be used across our entire R&D group, and be part of our infrastructure transition to the cloud
Develop and maintain a cloud-based test execution system that supports parallel executions on multiple operating systems and multiple cloud providers at very large scale, helping reduce the effort required for automated and manual testing and reducing time to market
Provide tools, systems and simulators for scaling up all lifecycle phases of our products and services including cross company and third party integrations and frameworks to be used in high scale
Drive progress, help revolutionize our operations, and lay the foundation for innovation and growth.
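The GitOps approach described above boils down to reconciling declarative desired state (kept in source control) against observed state. A toy sketch of that loop's diff step, with plain dicts standing in for cloud resources (a real controller would watch a Git repo and call provider APIs):

```python
def reconcile(desired, actual):
    """Compare declarative desired state to observed state and emit the
    actions a GitOps-style controller would apply to converge them."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != spec:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return sorted(actions)
```

Terraform plans and Kubernetes controllers both follow this diff-then-converge shape; keeping the desired state in SCM is what makes the whole system reviewable and reproducible.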
Requirements:
At least 4 years of hands-on experience as one of the following - Platform/InfraOps Engineer, DevOps , Cloud Infrastructure Engineer or equivalent
Hands-on experience working with cloud services in big public Clouds (Azure, AWS, GCP)
Experience with designing and implementing cloud based infrastructure (especially serverless components), alongside using infrastructure as Code tools such as Terraform and Pulumi to automatically build and maintain the provisioned cloud infrastructure
Strong programming skills in Python (or another high level language), with vast experience in Object-Oriented Programming, including Design Patterns, Algorithms and Data Structures
Strong experience with containerization technologies (docker, containerd) and orchestration , especially with various Kubernetes deployments, both self-managed and cloud managed deployments.
This position is open to all candidates.
 
Job ID: 8460028
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Algo Data Engineer
Realize your potential by joining the leading performance-driven advertising company!
As a Senior Algo Data Engineer in the Infra group, you'll play a vital role in developing, enhancing, and maintaining highly scalable Machine-Learning infrastructures and tools.
About Algo platform:
The objective of the algo platform group is to own the existing algo platform (including health, stability, productivity and enablement), to facilitate and be involved in new platform experimentation within the algo craft and lead the platformization of the parts which should graduate into production scale. This includes support of ongoing ML projects while ensuring smooth operations and infrastructure reliability, owning a full set of capabilities, design and planning, implementation and production care.
The group has deep ties with both the algo craft as well as the infra group. The group reports to the infra department and has a dotted line reporting to the algo craft leadership.
The group serves as the professional authority when it comes to ML engineering and ML ops, serves as a focal point in a multidisciplinary team of algorithm researchers, product managers, and engineers and works with the most senior talent within the algo craft in order to achieve ML excellence.
How you'll make an impact:
As a Senior Algo Data Engineer, you'll:
Develop, enhance and maintain highly scalable Machine-Learning infrastructures and tools, including CI/CD, monitoring and alerting and more
Have end to end ownership: Design, develop, deploy, measure and maintain our machine learning platform, ensuring high availability, high scalability and efficient resource utilization
Identify and evaluate new technologies to improve performance, maintainability, and reliability of our machine learning systems
Work in tandem with the engineering-focused and algorithm-focused teams in order to improve our platform and optimize performance
Optimize machine learning systems to scale and utilize modern compute environments (e.g. distributed clusters, CPU and GPU) and continuously seek potential optimization opportunities.
Build and maintain tools for automation, deployment, monitoring, and operations.
Troubleshoot issues in our development, production and test environments
Directly influence the way billions of people discover the internet
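As a small illustration of the monitoring-and-alerting work above: a basic alerting rule flags a metric whose latest reading deviates too far from recent history. A stdlib-only sketch (the z-score threshold and history window are arbitrary choices, not the platform's actual rule):

```python
import statistics

def alert_on_anomaly(history, latest, threshold=3.0):
    """Fire an alert when the latest metric reading's z-score against
    recent history exceeds the threshold."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return latest != mean  # flat history: any change is anomalous
    return abs(latest - mean) / stdev > threshold
```

Production ML-platform alerting layers this kind of rule over many signals (latency, throughput, model drift) and routes firings to on-call, but the detect-deviation core is the same.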
Our tech stack:
Java, Python, TensorFlow, Spark, Kafka, Cassandra, HDFS, vespa.ai, ElasticSearch, AirFlow, BigQuery, Google Cloud Platform, Kubernetes, Docker, git and Jenkins.
Requirements:
Experience developing large scale systems. Experience with filesystems, server architectures, distributed systems, SQL and No-SQL. Experience with Spark and Airflow / other orchestration platforms is a big plus.
Highly skilled in software engineering methods; 5+ years' experience.
Passion for ML engineering and for creating and improving platforms
Experience with designing and supporting ML pipelines and models in production environment
Excellent coding skills in Java & Python
Experience with TensorFlow a big plus
Possess strong problem solving and critical thinking skills
BSc in Computer Science or related field.
Proven ability to work effectively and independently across multiple teams and beyond organizational boundaries
Deep understanding of strong Computer Science fundamentals: object-oriented design, data structures, systems and applications programming, and multithreaded programming
Strong communication skills to be able to present insights and ideas, and excellent English, required to communicate with our global teams.
Bonus points if you have:
Experience in leading Algorithms projects or teams.
Experience in developing models using deep learning techniques and tools
Experience in developing software within a distributed computation framework.
This position is open to all candidates.
 
Job ID: 8437886
11/12/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
A leading large-scale ad network at the forefront of advertising technology. We are looking for a highly skilled and experienced Senior Software Engineer to join our backend team.
In this role, you will leverage your expertise in distributed systems, data engineering, and software development to design, build, and deploy high-performance solutions. You will take a leading role in developing and maintaining a cutting-edge online machine learning system powered by PyTorch/Tensorflow models and Triton inference server.
This is an opportunity to work on complex, large-scale systems that serve billions of requests, shaping the future of ad tech and ML-driven optimization.
What you'll be doing
Architect and Build: Design, develop, and deploy robust, scalable, and high-performance distributed systems that form the backbone of ironSource's next-generation ML ad network.
Real-Time Ad Serving: Engineer and optimize critical systems for real-time ad serving, enabling machine learning models to make intelligent, low-latency decisions for optimal ad selection.
ML Infrastructure Innovation: Drive the evolution of our ML capabilities by researching, evaluating, and implementing cutting-edge techniques for feature stores, data aggregation, and model serving infrastructure.
Data Pipeline Engineering: Collaborate closely with data scientists and product managers to design, build, and maintain efficient and reliable data lakes and data pipelines, ensuring high-quality data for ML training and analytics.
Operational Excellence: Take ownership of key system components, ensuring their reliability, performance, and scalability in a production environment through proactive monitoring and continuous improvement.
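Low-latency decisioning like the real-time ad-serving path above usually enforces a hard latency budget with graceful degradation: if model scoring misses its deadline, serve a safe fallback rather than stall the request. A simplified stdlib sketch (names, budget, and fallback are illustrative, not a real serving API):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def serve_ad(score_fn, candidates, budget_s=0.05, fallback="house_ad"):
    """Pick the best-scoring candidate within a hard latency budget;
    if scoring misses the deadline, degrade gracefully to a fallback."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(lambda: max(candidates, key=score_fn))
        try:
            return future.result(timeout=budget_s)
        except FutureTimeout:
            future.cancel()
            return fallback
```

In a real serving stack the deadline would be propagated to the inference server (e.g. as a request timeout) instead of wrapped in a thread pool, but the budget-or-fallback contract is the operative idea.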
Requirements:
5+ years of backend development experience with strong skills and a genuine passion for server-side technologies.
Proven experience building and maintaining large-scale, low-latency, distributed systems.
Solid understanding of service lifecycle management and efficient resource utilization.
Hands-on experience with machine learning integration in production systems.
Proficiency in backend programming languages such as Java and Scala.
Familiarity with cloud platforms (AWS, GCP, or Azure) and container orchestration (Docker, Kubernetes).
Strong problem-solving skills, ownership mindset, and ability to thrive in high-impact environments.
You might also have
Experience with ML frameworks such as TensorFlow, PyTorch.
Experience with inference servers such as Triton, TensorFlow Serving.
This position is open to all candidates.
 
Job ID: 8454306
18/11/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
We have advanced AI infrastructure by merging GPU virtualization with Kubernetes-native technology to power innovative AI factories. We aim to accelerate enterprise AI projects with smart orchestration and scalability for AI workloads. We are seeking a skilled Senior Software Engineer for our Infrastructure Group. The Infrastructure Group is tasked with composing and evolving the core systems responsible for the thousands of GPUs and nodes driving enterprise AI. We build the foundation that enables elastic, secure, and observable AI operations at extensive scale. We are looking for engineers who are passionate about distributed systems, modern cloud-native infrastructure, and AI performance optimization.

What you'll be doing:

Crafting and developing enterprise-grade systems with a strong focus on scalability, reliability, and performance.

Building and optimizing microservices-based architectures using Kubernetes and cloud-native technologies.

Collaborating closely with backend engineers, product managers, and other partners to deliver impactful solutions.

Writing clean, maintainable, and testable code in Go, contributing to our CI/CD pipelines.

Conducting code and build reviews to uphold high-quality standards and mentor team members.

Leading the development and implementation of advanced identity management systems that secure our innovative AI and GPU cloud.

Developing scalable multi-tenant solutions that allow our diverse clientele to harness the power of our platforms securely and efficiently.

Collaborating with multi-functional teams to integrate identity and access management features seamlessly into our products, from cloud services to edge computing devices.
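Multi-tenant access control of the kind described above ultimately reduces to verifying that a caller's credential is both authentic and scoped to the tenant being accessed. The sketch below uses a toy HMAC-signed token purely for illustration; it is not OAuth, OIDC, or SAML and not a production scheme:

```python
import hashlib
import hmac

SECRET = b"demo-secret"  # hypothetical shared secret, for this sketch only

def issue_token(tenant, user):
    """Mint a toy token bound to one tenant."""
    message = f"{tenant}:{user}".encode()
    signature = hmac.new(SECRET, message, hashlib.sha256).hexdigest()
    return f"{tenant}:{user}:{signature}"

def authorize(token, expected_tenant):
    """Accept the token only if its signature verifies AND it is scoped
    to the tenant being accessed -- multi-tenant isolation in miniature."""
    tenant, user, signature = token.rsplit(":", 2)
    message = f"{tenant}:{user}".encode()
    valid = hmac.compare_digest(
        signature, hmac.new(SECRET, message, hashlib.sha256).hexdigest())
    return valid and tenant == expected_tenant
```

Real identity systems replace the shared secret with asymmetric keys and standard token formats (JWTs issued via OAuth/OIDC flows), but the authenticate-then-scope check is the invariant every tenant boundary enforces.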
Requirements:
What we need to see:

B.Sc. in Computer Science or a related field (or equivalent experience).

5+ years of experience

Experience in backend software development, including system design and architecture.

Proficiency in at least one backend programming language (Go preferred).

Strong knowledge in microservices architecture, RESTful APIs, and relational databases.

Proficient knowledge of security guidelines and experience applying them in large-scale systems.

Expertise in implementing OAuth, OIDC, SAML, and other modern authentication protocols - Advantage

Ways to stand out from the crowd:

Expertise in Kubernetes internals and advanced cloud-native technologies.

Experience working in Linux environments with knowledge of networking, security, and virtualization.

Contributions to open-source projects or active participation in tech communities.

Agile approach and familiarity with standard methodologies.
This position is open to all candidates.
 
Job ID: 8418975
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Software Engineer KSPM Team
About the role
As a Senior Software Engineer on the KSPM (Kubernetes Security Posture Management) team, youll be part of a mission-critical group responsible for building and scaling one of the companys fastest-growing products. Our team helps customers secure their Kubernetes environments across major cloud providers by offering deep visibility into misconfigurations, compliance risks, and security best practices.
Youll contribute to a high-impact codebase that scans, analyzes, and interprets complex Kubernetes configurations and behaviors at scale. This is a unique opportunity to shape the future of our KSPM product owning core backend components, driving architectural improvements, and delivering features that address real-world customer needs.
We're looking for engineers who are passionate about infrastructure, cloud security, and solving challenging problems at scale, and who want to make a meaningful impact on the security of modern, cloud-native systems.
Our Stack: Python, Go, K8s APIs, SingleStore, Postgres, Redis, Kafka, AWS, GCP, Azure, ElasticSearch
On a typical day you'll:
Design, implement, and maintain scalable backend services for onboarding, scanning, and analyzing Kubernetes environments
Collaborate with security researchers and product managers to translate complex requirements into impactful product features
Write clean, efficient, and testable code, ensuring high performance and reliability
Participate in design and code reviews to uphold technical excellence and team standards
Lead features end-to-end from design and planning to deployment and monitoring
Improve system observability, performance, and resilience in production environments
Work closely with cross-functional teams to continuously enhance product capabilities and customer value
Stay current with developments in the Kubernetes ecosystem and cloud security landscape.
Requirements:
Bachelor's degree in Computer Science, Engineering, or equivalent experience
5+ years of professional software development experience
Hands-on experience with Python or Go - Must
Proven experience with microservices architecture and cloud-native systems
Solid foundation in software design principles, concurrency, memory management, data structures, and algorithms
Excellent communication skills and a collaborative, team-first mindset
Bonus points for
Experience with major cloud providers (AWS, GCP, Azure) and managed Kubernetes solutions (EKS, GKE, AKS)
Familiarity with Kubernetes internals and container technologies
Background in cybersecurity, especially in cloud security domains.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
As part of the Data Infrastructure team, you'll help build the data platform for our growing stack of products, customers, and microservices.

Our platform ingests and processes data from operational databases, telematics, and diverse product sources. Youll build robust backend services and data processing pipelines, leveraging state-of-the-art frameworks and cloud-native solutions, all while collaborating with Data Engineers, ML Engineers, Analysts, and Product Managers to turn real-world needs into resilient systems.

In this role you'll:
Design & Develop Backend Services: Lead the design and implementation of backend systems and APIs for our distributed data platform

Build Data Ingestion & Processing: Architect and develop scalable ingestion pipelines with streaming ETL, Change Data Capture, and large-scale batch and stream processing

Own Data Platform Infrastructure: Implement and optimize scheduling, workflow orchestration, and data governance tools to support high-quality, compliant data flows

Drive Engineering Standards: Establish and promote backend best practices, ensuring high reliability, code quality, and maintainability across the team

Cross-functional Collaboration: Work closely with data engineers, ML engineers, product managers, and analysts to translate business needs into scalable backend systems

Mentorship: Share your backend and infrastructure expertise by collaborating, reviewing, and mentoring fellow engineers.
Requirements:
5+ years of experience as a Backend, Data, or Infrastructure Engineer building large-scale backend systems and data-driven platforms.

B.S. in Computer Science or a similar field

Proven backend development skills with expertise in Python; additional languages are a plus.

Proven experience with distributed systems, microservices, and building and maintaining robust backend APIs

Proficiency with databases (SQL, NoSQL), data modeling, and streaming data architectures.

Ability to work in an office environment a minimum of 3 days a week

Enthusiastic about learning and adapting to the rapidly evolving world of AI and data-driven engineering

Past experience with modern data stacks (e.g., Snowflake, Kafka, Airflow, DBT, Spark), an advantage

Strong understanding of cloud infrastructure and orchestration (preferably AWS, K8s, Terraform/Pulumi), an advantage
This position is open to all candidates.
 