Jobs » Software » Real-Time AI Database Engineer

2 days
Confidential company
Location: Merkaz
Job Type: Full Time
abra R&D is looking for a Senior Embedded Engineer to help build the first real-time database purpose-built for AI agents at scale. Designed around time-series and unstructured data, it leverages a custom storage format optimized for append-heavy, real-time workloads. Joining the team means working on a high-performance execution engine built with vectorized execution, SIMD, and cache-efficient memory layouts, enabling extremely low-latency and high-throughput performance at scale. The system is engineered to fully utilize modern hardware, with deep optimization across CPU cache layers (L1/L2/L3), pushing the limits of real-time data processing.
What You'll Do
* Design and implement core components of the database engine (query engine, execution engine, storage engine)
* Build vectorized execution pipelines optimized for SIMD
* Design and evolve a time-series optimized storage engine (custom on-disk + in-memory formats)
* Work on unstructured / event-driven data representations and efficient indexing/querying strategies
* Own memory layout, compression, and data defragmentation
* Develop cache-aware / cache-efficient data structures with deep understanding of CPU cache behavior
* Implement distributed data primitives: sharding, partitioning, replication, and data locality
* Profile and optimize performance at the CPU level (cache misses, branch prediction, memory bandwidth)
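The vectorized-execution idea above can be sketched in miniature: instead of evaluating a predicate row by row, the engine extracts a contiguous column and produces a selection vector over it. The real work described here happens in C/C++ with SIMD over fixed-size batches; the Python below is only a toy illustration of the columnar shape, with invented field names.

```python
# Toy illustration of columnar (vectorized) execution vs. row-at-a-time.
# Real engines scan contiguous per-column buffers in C/C++ so the CPU can
# use SIMD and stay cache-resident; here the "vector" is a plain list.

def filter_gt_rowwise(rows, threshold):
    """Row-at-a-time: touches every field of every row."""
    return [r for r in rows if r["value"] > threshold]

def filter_gt_columnar(values, threshold):
    """Columnar: scans one column, returns a selection vector of indices."""
    return [i for i, v in enumerate(values) if v > threshold]

rows = [{"ts": t, "value": v} for t, v in enumerate([3, 9, 1, 12, 7])]
values = [r["value"] for r in rows]      # column extracted once

sel = filter_gt_columnar(values, 5)      # indices of matching rows: [1, 3, 4]
matched = [r["value"] for r in filter_gt_rowwise(rows, 5)]
assert [values[i] for i in sel] == matched
```

The selection vector can then be fed to the next operator without materializing whole rows, which is the core of the pipeline style the posting describes.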
Requirements:
* 7+ years of experience in high-performance systems programming (C/C++)
* Deep understanding of computer architecture (CPU pipelines, cache hierarchies, memory access patterns)
* Strong experience with low-level optimization and profiling tools
* Proven knowledge of multithreaded development (lock-based and lock-free)
* Expertise in algorithms and data structures, especially cache-aware designs
* Experience in one or more of the following:
* Database internals (query engines, storage engines, query planners/optimizers)
* Time-series or real-time data systems
* High-performance systems (trading systems, game engines, networking stacks, compilers)
* Distributed systems (sharding, partitioning, consistency models)
Strong Plus
* Experience with vectorized execution engines (e.g., DuckDB-style processing)
* Experience designing custom storage formats or low-level data layouts
* Experience handling unstructured or semi-structured data at scale
* Background in query optimization and execution planning
This position is open to all candidates.
 
Job ID: 8610884
Similar jobs that may interest you
2 days
Confidential company
Location: Herzliya
Job Type: Full Time
abra R&D is looking for a Senior Embedded Engineer to help build the first real-time database purpose-built for AI agents at scale. Designed around time-series and unstructured data, it leverages a custom storage format optimized for append-heavy, real-time workloads. Joining the team means working on a high-performance execution engine built with vectorized execution, SIMD, and cache-efficient memory layouts, enabling extremely low-latency and high-throughput performance at scale. The system is engineered to fully utilize modern hardware, with deep optimization across CPU cache layers (L1/L2/L3), pushing the limits of real-time data processing.
What You'll Do
* Design and implement core components of the database engine (query engine, execution engine, storage engine)
* Build vectorized execution pipelines optimized for SIMD
* Design and evolve a time-series optimized storage engine (custom on-disk + in-memory formats)
* Work on unstructured / event-driven data representations and efficient indexing/querying strategies
* Own memory layout, compression, and data defragmentation
* Develop cache-aware / cache-efficient data structures with deep understanding of CPU cache behavior
* Implement distributed data primitives: sharding, partitioning, replication, and data locality
* Profile and optimize performance at the CPU level (cache misses, branch prediction, memory bandwidth)
Requirements:
* 7+ years of experience in high-performance systems programming (C/C++)
* Deep understanding of computer architecture (CPU pipelines, cache hierarchies, memory access patterns)
* Strong experience with low-level optimization and profiling tools
* Proven knowledge of multithreaded development (lock-based and lock-free)
* Expertise in algorithms and data structures, especially cache-aware designs
* Experience in one or more of the following:
* Database internals (query engines, storage engines, query planners/optimizers)
* Time-series or real-time data systems
* High-performance systems (trading systems, game engines, networking stacks, compilers)
* Distributed systems (sharding, partitioning, consistency models)
Strong Plus
* Experience with vectorized execution engines (e.g., DuckDB-style processing)
* Experience designing custom storage formats or low-level data layouts
* Experience handling unstructured or semi-structured data at scale
* Background in query optimization and execution planning
This position is open to all candidates.
 
Job ID: 8610809
25/03/2026
Location: Haifa
Job Type: Full Time
As an ML Software Engineer with a focus on low-level and CUDA-based optimizations, you will play a key role in shaping the design, performance, and scalability of machine learning inference systems. You'll work on deeply technical challenges at the intersection of GPU acceleration, systems architecture, and ML deployment.
Your expertise in CUDA, C/C++, and performance tuning will be crucial in enhancing runtime efficiency across heterogeneous computing environments. You'll collaborate with designers, researchers, and backend engineers to build production-grade ML pipelines that are optimized for latency, throughput, and memory use, contributing directly to the infrastructure powering next-generation AI products. This role is ideal for an engineer with strong systems-level thinking, deep familiarity with GPU internals, and a passion for pushing the boundaries of performance and efficiency in machine learning infrastructure.

Responsibilities
Design and implement highly optimized GPU-accelerated ML inference systems using CUDA and low-level parallelism techniques
Optimize memory, compute, and data flow to meet real-time or high-throughput constraints
Improve the performance, reliability, and observability of our inference backend across diverse compute targets (CPU/GPU)
Collaborate with cross-functional teams (including researchers, developers, and designers) to deliver efficient and scalable inference solutions
Contribute to ComfyUI and internal infrastructure to improve the usability and performance of model execution flows
Investigate performance bottlenecks at all levels of the stack, from Python to kernel-level execution
Navigate and enhance a large, complex, production-grade codebase
Drive innovation in low-level system design to support future ML workloads
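Investigating bottlenecks "from Python to kernel-level execution" usually starts with a CPU-side profile before dropping to GPU tooling. A minimal sketch of that first pass, using only the standard library's cProfile (the `preprocess`/`infer` functions are invented stand-ins, not part of any real stack):

```python
# Minimal first-pass bottleneck investigation with cProfile. In a real
# inference stack you would profile the serving loop here, then move to
# GPU-side tools for kernel-level detail; this shows only the CPU side.
import cProfile
import io
import pstats

def preprocess(batch):
    # deliberately heavy stand-in for tensor preprocessing
    return [x * 0.5 for x in batch for _ in range(10)]

def infer(batch):
    return sum(preprocess(batch))

profiler = cProfile.Profile()
profiler.enable()
infer(list(range(1000)))
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()   # top functions by cumulative time
```

Sorting by cumulative time surfaces which stage dominates end-to-end latency; here the report would point at `preprocess` as the hot spot.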
Requirements:
5+ years of experience in high-performance software engineering
Advanced proficiency in CUDA, C/C++, and Python, especially in production environments
Deep understanding of GPU architecture, memory hierarchies, and optimization techniques
Proven track record of optimizing compute-intensive systems
Strong system architecture fundamentals, especially around performance, concurrency, and parallelism
Ability to independently lead deep technical investigations and deliver clean, maintainable solutions
Collaborative and team-oriented mindset, with experience working across functional teams
This position is open to all candidates.
 
Job ID: 8591904
25/03/2026
Location: Jerusalem
Job Type: Full Time
As an ML Software Engineer with a focus on low-level and CUDA-based optimizations, you will play a key role in shaping the design, performance, and scalability of machine learning inference systems. You'll work on deeply technical challenges at the intersection of GPU acceleration, systems architecture, and ML deployment.
Your expertise in CUDA, C/C++, and performance tuning will be crucial in enhancing runtime efficiency across heterogeneous computing environments. You'll collaborate with designers, researchers, and backend engineers to build production-grade ML pipelines that are optimized for latency, throughput, and memory use, contributing directly to the infrastructure powering next-generation AI products. This role is ideal for an engineer with strong systems-level thinking, deep familiarity with GPU internals, and a passion for pushing the boundaries of performance and efficiency in machine learning infrastructure.

Responsibilities
Design and implement highly optimized GPU-accelerated ML inference systems using CUDA and low-level parallelism techniques
Optimize memory, compute, and data flow to meet real-time or high-throughput constraints
Improve the performance, reliability, and observability of our inference backend across diverse compute targets (CPU/GPU)
Collaborate with cross-functional teams (including researchers, developers, and designers) to deliver efficient and scalable inference solutions
Contribute to ComfyUI and internal infrastructure to improve usability and performance of model execution flows
Investigate performance bottlenecks at all levels of the stack, from Python to kernel-level execution
Navigate and enhance a large, complex, production-grade codebase
Drive innovation in low-level system design to support future ML workloads
Requirements:
5+ years of experience in high-performance software engineering
Advanced proficiency in CUDA, C/C++, and Python, especially in production environments
Deep understanding of GPU architecture, memory hierarchies, and optimization techniques
Proven track record of optimizing compute-intensive systems
Strong system architecture fundamentals, especially around performance, concurrency, and parallelism
Ability to independently lead deep technical investigations and deliver clean, maintainable solutions
Collaborative and team-oriented mindset, with experience working across functional teams
This position is open to all candidates.
 
Job ID: 8591920
Location: Jerusalem
Job Type: Full Time
You will be part of the REM department, which is responsible for the automatic high-definition map-making process, a key technology in our autonomous driving and advanced driver-assistance systems. This process involves running advanced algorithmic code in a massively parallel way, utilizing Big Data technologies, and managing a complex system that requires both technical depth and strategic thinking. We are seeking a Backend & Data Engineer to join our innovation team within our mapping division. This role is best suited for engineers with strong system-level thinking, a can-do approach, and a hands-on mindset, with the ability to design, build, and optimize complex systems operating at scale.
What your job will look like:
Develop and maintain backend and data-processing components in large-scale systems
Design, implement, and optimize data pipelines and distributed processing flows
Work with large-scale storage systems (e.g., S3) and high-volume data access patterns
Optimize systems and code across multiple layers, from architecture to implementation
Identify performance bottlenecks, debug complex issues, and drive root-cause solutions
Work across teams and domains, reading, improving, and refactoring existing code
Take part in technical design and decision-making, balancing performance, scalability, and maintainability
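The distributed processing flows described above follow a partition/process/merge shape. A tiny standard-library sketch of that shape (a real pipeline would run on Spark across a cluster; the partitioning scheme and worker count here are arbitrary choices for illustration):

```python
# Toy partition/process/merge flow: split a dataset, process partitions
# concurrently, then combine the partial results — the same shape a Spark
# job has, shrunk to a thread pool.
from concurrent.futures import ThreadPoolExecutor

def process_partition(part):
    # stand-in for per-partition algorithmic work
    return sum(x * x for x in part)

data = list(range(100))
partitions = [data[i::4] for i in range(4)]   # 4 round-robin partitions

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(process_partition, partitions))

total = sum(partials)
assert total == sum(x * x for x in data)      # merge matches a serial run
```

The key property, as in the real pipelines, is that partition results are associative, so they can be computed independently and merged in any order.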
Requirements:
All you need is:
5+ years of experience in software development, with a strong backend and/or data focus
Experience building backend services (APIs) and working with databases and storage systems
Experience using AI as a core part of the development workflow
Hands-on experience with large-scale data processing and distributed systems
Experience with Spark/PySpark - a strong advantage
Experience with Python - an advantage
Strong understanding of performance optimization and system behavior (CPU, memory, concurrency)
Proven debugging skills and the ability to move from symptoms to root cause
A strong can-do approach - proactive, hands-on, and not afraid to dive into complex systems
Our technology changes the way we drive, from preventing accidents to semi- and fully autonomous vehicles. If you are an excellent, bright, hands-on person with a passion to make a difference, come lead the revolution!
This position is open to all candidates.
 
Job ID: 8579472
05/04/2026
Confidential company
Location: Petah Tikva
Job Type: Full Time
We are looking for a Senior Data Engineer to join our Data Platform team, focused on building and evolving a secure, enterprise-grade Data Lake that powers large-scale global search, indexing, analytics, and AI-driven capabilities.
In this role, you will design and deliver scalable, compliant, and high-performance data pipelines that ingest, transform, and structure massive volumes of sensitive data to support mission-critical discovery and search workloads.
This position is ideal for a senior engineer who combines deep hands-on data engineering expertise with strong architectural thinking, particularly in regulated and security-sensitive environments. You will work closely with Product, Search, Backend, Security, and Data Science teams to ensure data is searchable, governed, reliable, and compliant by design.
Key Responsibilities:
Enterprise Data Lake Architecture:
Design and evolve a secure, scalable Data Lake architecture on AWS.
Define storage layout, partitioning strategies, and data organization optimized for large-scale search and analytics workloads.
Implement ACID-compliant table formats (e.g., Iceberg) to ensure reliability, consistency, and schema evolution.
Design ingestion patterns (batch and streaming) for high-volume, heterogeneous datasets.
Implement lifecycle management, retention policies, and environment isolation.
Global Search & Indexing Enablement:
Design data pipelines that prepare and structure data for global search and indexing systems.
Optimize data models and transformations to support high-performance search queries and distributed indexing.
Collaborate with search and backend teams to ensure efficient data availability and low-latency access patterns.
Support incremental ingestion, change data capture (CDC), and near-real-time processing where required.
Ensure traceability and reproducibility of indexed datasets.
Secure & Regulated Data Engineering:
Implement strict access controls (IAM), encryption (at rest and in transit), and auditing mechanisms.
Ensure compliance with enterprise security and regulatory requirements.
Design systems with data lineage, traceability, and audit-readiness in mind.
Partner with Security and Compliance teams to support internal and external audits.
Handle sensitive and regulated datasets with strong governance and segregation controls.
Pipeline Development & Platform Engineering:
Build and maintain high-scale ETL/ELT pipelines using Apache Spark (EMR/Glue) and AWS-native services.
Leverage S3, Athena, Kinesis, Lambda, Step Functions, and EKS to support both batch and streaming workloads.
Implement Infrastructure as Code (Terraform / CDK / SAM) for reproducible environments.
Establish observability, monitoring, and SLA management for mission-critical pipelines.
Continuously optimize performance, scalability, and cost efficiency.
Cross-Functional Collaboration:
Work closely with Product Managers to translate global search and discovery requirements into scalable data solutions.
Collaborate with ML and Data Science teams to enable feature extraction and enrichment pipelines.
Contribute to architecture discussions and promote best practices in enterprise data engineering.
Provide documentation and clear technical artifacts for regulated environments.
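The partitioning strategies mentioned above often boil down to a Hive-style key scheme: records are bucketed under `dt=YYYY-MM-DD/` prefixes so that date-bounded queries can prune whole partitions. Table formats like Iceberg track this metadata for you; the sketch below only illustrates the layout idea, and the record fields are invented:

```python
# Sketch of Hive-style date partitioning for a lake layout. Each record is
# routed to a partition prefix derived from its event time, so a query over
# one day reads one prefix instead of the whole dataset.
from collections import defaultdict

def partition_key(record):
    return f"dt={record['event_time'][:10]}"   # e.g. "dt=2026-03-25"

def bucket(records):
    parts = defaultdict(list)
    for r in records:
        parts[partition_key(r)].append(r)
    return dict(parts)

records = [
    {"event_time": "2026-03-25T10:00:00Z", "id": 1},
    {"event_time": "2026-03-25T11:30:00Z", "id": 2},
    {"event_time": "2026-03-26T09:15:00Z", "id": 3},
]
layout = bucket(records)
# layout maps "dt=2026-03-25" to two records and "dt=2026-03-26" to one
```

Choosing the partition column to match the dominant query predicate (here, event date) is what makes pruning effective for search and analytics workloads.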
Requirements:
Technical Expertise:
Strong hands-on experience with Apache Spark (EMR, Glue, PySpark).
Deep experience with AWS data services: S3, EMR, Glue, Athena, Lambda, Step Functions, Kinesis.
Proven experience designing and operating Data Lakes / Lakehouse architectures (Iceberg preferred).
Experience building scalable batch and streaming pipelines for large datasets.
Strong understanding of distributed systems and data modeling for search/indexing use cases.
Experience implementing secure, compliant data architectures (IAM, encryption, auditing).
Infrastructure as Code experience (Terraform / CDK / SAM).
Strong Python skills (TypeScript is a plus).
Enterprise & Search-Oriented Mindset
This position is intended for women and men alike.
 
Job ID: 8600560
31/03/2026
Location: Herzliya
Job Type: Full Time
The Kusto team builds the engine that powers real-time log analytics and big data exploration at massive scale across our company and beyond. You'll be joining the core engine team as a Senior/Principal Engineer, responsible for the technology that processes dozens of petabytes daily across hundreds of our company services and Azure's Top 100 enterprise customers.
Our engine is the foundation for critical products including Azure Data Explorer, our company Fabric's Real-Time Intelligence, Azure Log Analytics, and our company Defender for Cloud. It enables internal observability for hundreds of our company services and delivers lightning-fast analytics for organizations worldwide.
As a Senior/Principal Engineer on this team, you'll lead projects that advance the core Kusto engine by driving new capabilities, optimizing distributed query execution, enhancing ingestion, and solving complex engineering challenges in performance and scalability at petabyte scale.
Responsibilities
Partner with internal and external stakeholders to determine customer requirements, gather feedback, and ensure the engine meets their needs
Own and lead architectural discussions for complex aspects of projects within the Kusto engine and drive dependency identification and the design
Collaborate with partner teams across our company to ensure integration, testing, scalability, performance standards, and live-site coverage
Design and spearhead implementation of key projects in the Kusto engine core, from conception through delivery
Enhance existing engine components for even better performance, scalability, and maintainability
Provide technical leadership and mentorship to engineers, guiding them to produce maintainable, secure, performant code and establish test strategies that ensure quality
Lead collaboration with geographically distributed teams across our company and cross-functional disciplines.
Requirements:
Bachelor's Degree in Computer Science or related technical field / equivalent training.
6+ years of technical engineering experience with coding in languages including, but not limited to, Rust, C++, and C#.
Additional Qualifications:
Experience in designing and implementing large scale distributed cloud services.
Deep knowledge of database and storage technologies.
Experience with performance optimizations of code and algorithms.
This position is open to all candidates.
 
Job ID: 8598709
Location: Ra'anana
Job Type: Full Time
We are actively seeking a highly skilled Senior Software Engineer to join our dynamic team. This role is pivotal for a professional who specializes in designing and building scalable, cloud-native, multi-tenant SaaS infrastructure and backend systems.

Your primary responsibility will be to architect and develop robust backend services, high-throughput data pipelines, and scalable microservices. While your core focus will be on the backend and cloud infrastructure, you will take ownership of the full feature lifecycle, which includes a strong willingness to develop and maintain clear, efficient client-side interfaces to deliver complete end-to-end solutions.
Responsibilities:
End-to-End Ownership: You will take full ownership of the software lifecycle, from architectural design and backend coding to automated deployment and production monitoring. You will champion a 'You Build It, You Run It' culture, ensuring the high availability and observability of our multi-tenant SaaS environment.
Backend & Infrastructure Focus: Design and implement high-performance backend tasks, robust infrastructure, and high-throughput Backend-for-Frontend (BFF) layers utilizing both REST and gRPC protocols to process real-time sensor data and telemetry.
Full-Stack Delivery: Seamlessly transition to front-end development (React) when required, ensuring the backend infrastructure connects smoothly to a streamlined and user-friendly web client.
Automation & CI/CD: Drive feature delivery using Agile methodologies, replacing manual handovers with automated CI/CD pipelines that ensure seamless integration and validation from development to production.
Requirements:
B.Sc. in Computer Science from a leading university OR Alumnus of an elite military technology unit - MUST.
At least 5 years of hands-on backend software development experience, primarily in Node.js - A MUST.
Deep knowledge of the JS event loop and asynchronous programming. - A MUST.
Practical experience with frontend development (e.g., React) and a strong willingness to contribute to client-side development when necessary to deliver end-to-end features - A MUST.
At least 5 years with Cloud Computing across at least one of the big three providers (AWS, Azure, GCP), with a focus on Cloud-Native services. This includes expertise in serverless computing, managed container orchestration (EKS/AKS/GKE), and auto-scaling strategies beyond basic IaaS/VM management - A MUST.
Proven experience designing scalable services using Microservices Architecture and related patterns (e.g., Service Mesh, API Gateway, BFF).
Strong understanding of Event-Driven Design (e.g., high-throughput message brokers like Kafka, RabbitMQ, SQS) and Domain Driven Design (DDD) principles.
Experience with high-speed in-memory state management (e.g., Redis) and relational data stores (e.g., PostgreSQL).
Hands-on experience with containerization technologies, specifically Docker and basic Kubernetes usage.
Deep understanding of Infrastructure as Code (IaC) principles. Proven experience managing cloud infrastructure programmatically using tools like Terraform, AWS CDK, or Pulumi (we treat infrastructure as software) - A Big Advantage.
Deep expertise in designing highly secure, multi-tenant SaaS solutions, ideally with knowledge of Zero Trust Architecture (ZTA), mTLS, and secure edge-to-cloud communications.
Strong understanding of CI/CD concepts and automation tools.
Team player, with strong communication, collaboration, and active listening.
Agile/Scrum environment expertise.

Experience designing data pipelines for high-volume time-series telemetry and implementing long-term, cost-optimized data retention strategies (e.g., Object storage tiering).
Experience with Geospatial processing (GIS), mapping technologies (Map tile services, GeoJSON, OGC standards), and implementing spatial rules/geo-fencing.
Advanced Kubernetes experience (Operators, CRDs, Helm) for complex deployment scenarios.
This position is open to all candidates.
 
Job ID: 8595178
27/03/2026
Confidential company
Location: Yokne`am
Job Type: Full Time
In this role, you will help build and evolve systems that support performance analysis, telemetry, and optimization for large-scale GPU- and CPU-based clusters used in AI and high-performance computing environments. You will work closely with hardware, networking, firmware, and software teams to collect, analyze, and interpret performance data from live systems. This is a fast-paced R&D environment where system behavior and requirements evolve rapidly, requiring adaptable engineering solutions and strong analytical thinking.
What you'll be doing:
Profile, benchmark, and analyze AI and HPC workloads on GPU and CPU clusters
Explore performance characteristics of high-performance networking and collective communications (e.g., NCCL, RDMA, MPI, RoCE)
Identify performance bottlenecks across networking, compute, memory, and system architecture
Develop and enhance performance analysis, benchmarking, and diagnostic tools
Define performance test plans and establish expectations for new technologies and platforms
Collaborate across hardware, firmware, networking, systems, and software teams to provide actionable performance insights
Support telemetry collection and data refinement efforts to enable accurate performance analysis
Maintain high standards for data quality, reproducibility, and traceability of performance results
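A performance test plan like the one described typically reduces raw latency samples to a few tracked metrics. A minimal standard-library sketch (the sample values and window are invented; real telemetry aggregates these continuously, not one-shot):

```python
# One-shot summary of latency samples into the metrics a performance test
# plan tracks: median, tail percentile, and throughput over the window.
import statistics

def summarize(latencies_ms, window_s):
    qs = statistics.quantiles(latencies_ms, n=100)   # 99 cut points
    return {
        "p50_ms": statistics.median(latencies_ms),
        "p99_ms": qs[98],                            # 99th percentile
        "throughput_rps": len(latencies_ms) / window_s,
    }

samples = [1.0, 1.2, 0.9, 5.0, 1.1, 1.3, 0.8, 1.0, 1.2, 40.0]
stats = summarize(samples, window_s=2.0)
# p50 stays near ~1 ms while the p99 is dominated by the 40 ms outlier —
# which is why tail percentiles, not averages, expose bottlenecks
```

Tracking p99 alongside p50 is what makes regressions in the tail (congestion, stragglers in collectives) visible even when the median looks healthy.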
Requirements:
What we need to see:
B.Sc. or M.Sc. in Computer Science, Computer Engineering, Software Engineering, or equivalent experience
5+ years of experience in performance analysis, systems engineering, or HPC/AI infrastructure
Demonstrated expertise in performance analysis skills and methodologies
Hands-on experience with high-performance networking (RDMA, MPI, NCCL, congestion control)
Strong understanding of system performance metrics (latency, throughput, resource utilization)
Exposure to hardware, firmware, or embedded telemetry environments
Strong analytical, problem-solving, and communication skills
Ability to work effectively in cross-functional, fast-paced R&D teams
Ways to stand out from the crowd:
Knowledge of CUDA, NCCL internals, and congestion control algorithms
Deep system-level understanding of CPU architectures, GPUs, HCAs, memory, and PCIe
Experience with NVIDIA GPUs, CUDA, and deep learning frameworks such as PyTorch or TensorFlow
Experience with cloud platforms
Proficiency in Python; experience with Bash and C/C++ is a plus, as well as strong experience working in Linux environments
This position is open to all candidates.
 
Job ID: 8594112
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Software Engineer
About the role:
As a Senior Software Engineer on the Data Platform, you'll be part of one of our most strategic engineering groups, tasked with building the core data ingestion and processing infrastructure that powers our entire platform. The team is responsible for handling billions of cloud signals daily, ensuring scalability, reliability, and efficiency across our architecture.
You'll work on large-scale distributed systems, own critical components of the cloud security data pipeline, and drive architectural decisions that influence how data is ingested, normalized, and made available to product teams. We're currently in the midst of a major architectural transformation, evolving our ingestion and processing layers to support real-time pipelines, improved observability, and greater horizontal scalability, and we're looking for experienced engineers who are eager to make a foundational impact!
Our Stack: Python, Go, Rust, SingleStore, Postgres, ElasticSearch, Redis, Kafka, AWS
On a typical day youll:
Write clean, concise code that is stable, extensible, and unit-tested appropriately
Write production-ready code that meets design specifications, anticipates edge cases, and accounts for scalability
Diagnose complex issues, evaluate, recommend and execute the best solution
Implement new requirements within our Agile delivery methodology while following our established architectural principles
Lead initiatives end to end from design and planning to implementation and deployment, while aligning cross-functional teams and ensuring technical excellence
Test software to ensure proper and efficient execution and adherence to business and technical requirements
Provide input into the architecture and design of the product, collaborating with the team in solving problems the right way
Develop expertise of AWS, Azure, and GCP products and technologies.
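The normalization step in such an ingestion pipeline maps provider-specific event shapes onto one internal schema before downstream processing. A toy sketch of that mapping (all field names below are invented for illustration, not any provider's actual event schema):

```python
# Toy normalization step: heterogeneous cloud events arrive with
# provider-specific field names and are rewritten onto one internal schema.
FIELD_MAP = {
    "aws": {"eventTime": "ts", "sourceIPAddress": "src_ip"},
    "gcp": {"timestamp": "ts", "callerIp": "src_ip"},
}

def normalize(provider, event):
    mapping = FIELD_MAP[provider]
    # rename known fields, pass unknown ones through, tag the origin
    return {mapping.get(k, k): v for k, v in event.items()} | {"provider": provider}

a = normalize("aws", {"eventTime": "2026-01-01T00:00:00Z", "sourceIPAddress": "1.2.3.4"})
g = normalize("gcp", {"timestamp": "2026-01-01T00:00:01Z", "callerIp": "5.6.7.8"})
assert a["src_ip"] == "1.2.3.4" and g["src_ip"] == "5.6.7.8"
```

Once both events share the `ts`/`src_ip` schema, every downstream consumer can be written once rather than per provider, which is what makes the ingestion layer horizontally composable.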
Requirements:
About you:
Bachelor's degree in Computer Science, Engineering, or relevant experience
5+ years of professional software development experience
Proven experience building data-intensive systems at scale
Experience in working with micro-service architecture & cloud-native services
Solid understanding of software design principles, concurrency, synchronization, memory management, data structures, algorithms, etc
Hands-on experience with databases such as SingleStore, Postgres, Elasticsearch, Redis
Experience with Python / Go (Advantage)
Experience with distributed data processing tools like Kafka (Advantage).
This position is open to all candidates.
 
Job ID: 8588626
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Software Engineer - CWPP Team
About the role:
As a Senior Software Engineer on the CWPP (Cloud Workload Protection Platform) team, you'll be part of a core group responsible for protecting our customers' cloud workloads at scale. Our team scans and analyzes a wide variety of cloud assets, from virtual machines and container images to object storage buckets and databases, to uncover vulnerabilities, secrets, misconfigurations, and other security risks.
This role offers a unique opportunity to build the infrastructure that powers our deep forensics engine. You'll help reconstruct customer file systems across multiple operating systems, extract OS-level and application-level metadata, and enable our security researchers to detect threats and vulnerabilities quickly and reliably. Your work will directly impact the safety and security of thousands of cloud environments worldwide.
Were looking for engineers who are passionate about operating system internals, large-scale distributed systems, and cloud security, and who want to make a meaningful impact by building robust, high-performance security platforms. This is also a great opportunity to take ownership, lead initiatives, and grow through collaboration, mentorship, and technical leadership.
Our Stack: Python, Linux & Windows internals, Container Runtimes, Postgres, Redis, Kafka, AWS, GCP, Azure
On a typical day you'll:
Design, implement, and maintain scalable backend services for scanning and analyzing cloud workloads (VMs, containers, buckets, databases, etc.)
Build infrastructure for reconstructing file systems across different operating systems (Linux and Windows) to enable deep analysis
Integrate security detection engines for vulnerabilities, secrets, compliance, and malware
Collaborate with security researchers and product managers to translate complex requirements into impactful product features
Write clean, efficient, and testable code, ensuring high performance and reliability
Participate in design and code reviews to uphold technical excellence and team standards
Lead features end-to-end - from design and planning to deployment and monitoring
Improve system observability, performance, and resilience in production environments
Stay current with developments in the cloud security landscape, vulnerability management, and OS internals.
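One of the detection passes described above, secrets scanning, can be sketched as pattern matching over reconstructed file contents. Production engines use large rule sets plus entropy heuristics; the single rule below (an AWS-access-key-style shape) is illustrative only:

```python
# Toy secrets-scanning pass: run a set of compiled patterns over file text
# and report each match with the rule that fired. Real scanners add many
# more rules, entropy checks, and context filtering to cut false positives.
import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_text(text):
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append({"rule": name, "value": match.group()})
    return findings

sample = "export AWS_ACCESS_KEY_ID=AKIAABCDEFGHIJKLMNOP\n"
hits = scan_text(sample)
assert hits and hits[0]["rule"] == "aws_access_key"
```

In the platform described, this kind of pass runs over file systems reconstructed from workload snapshots rather than live machines, which is what lets scanning happen without agents on the workload itself.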
Requirements:
Bachelor's degree in Computer Science, Engineering, or equivalent experience
5+ years of professional software development experience
Strong experience building backend services or distributed systems
Hands-on experience with Python
Solid understanding of operating system internals
Familiarity with vulnerability management concepts or tooling
Experience with major cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure)
Strong foundation in software design principles, concurrency, memory management, data structures, and algorithms
Passionate about building great products and solving real-world security challenges
Self-driven, proactive, and comfortable taking ownership and initiative
A strong communicator and a true team player who thrives in a collaborative environment
Bonus points for having:
Familiarity with container internals and runtime security
Experience with large-scale file system analysis, malware analysis, or digital forensics
Background in cybersecurity, especially in cloud security domains.
This position is open to all candidates.
 
Job ID: 8588615