06/01/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Engineer.
As a Senior Data Engineer, you will help us design and build a flexible, scalable system that allows our business to move fast and innovate. You will be expected to show ownership of and responsibility for the code you write, but it doesn't stop there: you are encouraged to think big and help out in other areas as well.
Key focuses:
Designing and writing code that is critical for business growth
Mastering scalability and enterprise-grade SaaS product implementation
Sense of ownership - leading design for new products and initiatives as well as integrating with currently implemented best-practices
Reviewing your peers' design and code
Working closely with product managers, peer engineers, and business stakeholders
Requirements:
5+ years of experience as a hands-on software engineer (Python, TypeScript, Node.js)
Hands-on experience managing infrastructure on major cloud vendors (AWS, GCP, Azure)
Hands-on experience designing and implementing data pipelines, distributed systems, and RESTful APIs
Proficiency with SQL, data modeling, and working with relational and non-relational databases, and pushing them past their limits
Experience working with CI/CD systems, Docker and orchestration tools such as Kubernetes
Enjoy communicating and collaborating, sharing your ideas and being open to honest feedback
The ability to lead new features from design to implementation, taking into consideration topics such as performance, scalability, and impact on the greater system
This position is open to all candidates.
 
Similar jobs that may interest you
28/01/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Your Career
We are looking for a highly talented technical individual to join the Cortex XDR infrastructure team. The team is responsible for developing automation infrastructure and various cloud-based tools and platforms used across the research, development, and QA departments to ensure the functionality, stability, and quality of the XDR product, alongside the efficiency of the infrastructure and processes used to build, test, and deploy on various clouds and distributions. We believe the platforms and infrastructure the team provides are a critical part of our department's progress and one of our key growth factors.
As a Platform Engineer you will play a pivotal role in enhancing our development and automation experience by pushing forward modern automation approaches, eliminating manual effort, and introducing new development operations for continuous integration, scale, and durability using advanced cloud services. Your expertise will be used in areas such as infrastructure development, cloud-based automation, serverless infrastructure, automation as a service, providing technical guidance, and pushing an infrastructure/configuration-as-code and GitOps approach across the development departments.
To succeed in this role, you should have a strong foundation in modern cloud-based automation methodologies and a comprehensive understanding of industry best practices, especially in redundancy and scalability of large systems and the ability to control them via SCM-based declarative configs. You should be familiar with modern public-cloud approaches and serverless architectures, including virtualization, containers, and container orchestration across multiple Kubernetes-based deployments. You should be comfortable engaging in complex technical discussions and advocating for optimal solutions in a fast-paced, growing environment as part of our quest for continuous improvement.
Your Impact
Utilize modern technologies including serverless cloud services, Kubernetes, and Terraform, applying them in an infrastructure/configuration-as-code, GitOps approach to manage everything via source code and continuous integration processes
Design and implement (hands-on) the next generation of platforms, automation frameworks, SDKs, and tools to be used across our entire R&D group, and be part of our infrastructure transition to the cloud
Develop and maintain a cloud-based test execution system that supports parallel executions on multiple operating systems and cloud providers at very large scale, helping reduce the effort required for automated and manual testing and reducing time to market
Provide tools, systems, and simulators for scaling up all lifecycle phases of our products and services, including cross-company and third-party integrations and frameworks for use at high scale
Introduce progress, help revolutionize our operations, and lay the foundation for innovation and growth.
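For illustration only (this is not the team's actual code), a minimal Pulumi program in Python shows the infrastructure-as-code, GitOps-friendly style described above: resources are declared in source code, reviewed like any other change, and applied by CI. The bucket name and tags are hypothetical, and the pulumi and pulumi_aws packages plus AWS credentials are assumed.

# Minimal hedged sketch: declare cloud infrastructure as code so it can be
# reviewed and applied through Git-driven CI, in a GitOps workflow.
# Assumes the pulumi and pulumi_aws packages and configured AWS credentials.
import pulumi
import pulumi_aws as aws

# Hypothetical bucket for test artifacts produced by the automation platform.
artifacts = aws.s3.Bucket(
    "xdr-test-artifacts",  # logical name; the cloud name gets an auto-generated suffix
    tags={"team": "infra", "managed-by": "pulumi"},
)

# Exported outputs become visible to CI and to other stacks.
pulumi.export("artifacts_bucket", artifacts.id)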
Requirements:
Your Experience
At least 4 years of hands-on experience as a Platform/InfraOps Engineer, DevOps Engineer, Cloud Infrastructure Engineer, or equivalent
Hands-on experience working with cloud services in the big public clouds (Azure, AWS, GCP)
Experience designing and implementing cloud-based infrastructure (especially serverless components), alongside using Infrastructure-as-Code tools such as Terraform and Pulumi to automatically build and maintain the provisioned cloud infrastructure
Strong programming skills in Python (or another high-level language), with vast experience in object-oriented programming, including design patterns, algorithms, and data structures
Strong experience with containerization technologies (Docker, containerd) and orchestration, especially with various Kubernetes deployments, both self-managed and cloud-managed.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Software Engineer
We're looking for driven and talented people like you to join our R&D team and our mission to change the future of cloud security. Ready to dive in and swim with our pod?
Highlights:
High-growth: Over the past seven years, we've consistently achieved milestones that take other companies a decade or more. During this time, we've significantly grown our employee base, expanded our customer reach, and rapidly advanced our product capabilities.
Disruptive innovation: Our founders saw that traditional security didn't work for the cloud, so they set out to carve a new path. We're relentless pioneers who invented agentless technology and continue to be the most comprehensive and innovative cloud security company.
Well-capitalized: With a valuation of $1.8 billion, Orca is a cybersecurity unicorn dominating the cloud security space. We're backed by an impressive team of investors such as Capital G, ICONIQ, GGV, and SVCI, a syndicate of CISOs who invest their own money after conducting their due diligence.
Respectful and transparent culture: Our executives pride themselves on being accessible to everyone and believe in sharing knowledge with the employees. Each employee has a place in shaping the future of our industry.
About the role:
As a Senior Software Engineer on the Data Platform, you'll be part of one of Orca's most strategic engineering groups, tasked with building the core data ingestion and processing infrastructure that powers our entire platform. The team is responsible for handling billions of cloud signals daily, ensuring scalability, reliability, and efficiency across Orca's architecture.
You'll work on large-scale distributed systems, own critical components of the cloud security data pipeline, and drive architectural decisions that influence how data is ingested, normalized, and made available for product teams across Orca. We're currently in the midst of a major architectural transformation, evolving our ingestion and processing layers to support real-time pipelines, improved observability, and greater horizontal scalability, and we're looking for experienced engineers who are eager to make a foundational impact!
Our Stack: Python, Go, Rust, SingleStore, Postgres, ElasticSearch, Redis, Kafka, AWS
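As a loose sketch of the ingest-and-normalize work described above (not the team's actual pipeline), a small Python worker using kafka-python could consume raw cloud signals, reshape them into a common schema, and republish them. The topic names, field names, and broker address are assumptions.

# Hedged sketch: consume raw events from Kafka, normalize, and republish.
# Assumes the kafka-python package and a broker at localhost:9092.
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "raw-cloud-signals",                      # hypothetical input topic
    bootstrap_servers="localhost:9092",
    group_id="normalizer",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for msg in consumer:
    event = msg.value
    normalized = {
        "account_id": event.get("account") or event.get("accountId"),  # unify field names
        "resource": event.get("resource", "unknown"),
        "ts": event.get("timestamp"),
    }
    producer.send("normalized-signals", normalized)  # hypothetical output topic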
On a typical day you'll:
Write clean, concise code that is stable, extensible, and unit-tested appropriately
Write production-ready code that meets design specifications, anticipates edge cases, and accounts for scalability
Diagnose complex issues, evaluate, recommend and execute the best solution
Implement new requirements within our Agile delivery methodology while following our established architectural principles
Lead initiatives end to end from design and planning to implementation and deployment, while aligning cross-functional teams and ensuring technical excellence
Test software to ensure proper and efficient execution and adherence to business and technical requirements
Provide input into the architecture and design of the product, collaborating with the team in solving problems the right way
Develop expertise in AWS, Azure, and GCP products and technologies.
Requirements:
Bachelor's degree in Computer Science, Engineering, or relevant experience
5+ years of professional software development experience
Proven experience building data-intensive systems at scale
Experience in working with micro-service architecture & cloud-native services
Solid understanding of software design principles, concurrency, synchronization, memory management, data structures, algorithms, etc
Hands-on experience with databases such as SingleStore, Postgres, Elasticsearch, Redis
Experience with Python / Go (Advantage)
Experience with distributed data processing tools like Kafka (Advantage).
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking an experienced Solutions Data Engineer who possesses both technical depth and strong interpersonal skills to partner with internal and external teams to develop scalable, flexible, and cutting-edge solutions. Solutions Engineers collaborate with operations and business development to help craft solutions to customer business problems.
A Solutions Engineer works to balance various aspects of the project, from safety to design. Additionally, a Solutions Engineer researches advanced technology regarding best practices in the field and seeks cost-effective solutions.
Job Description:
We're looking for a Solutions Engineer with deep experience in Big Data technologies, real-time data pipelines, and scalable infrastructure; someone who's been delivering critical systems under pressure and knows what it takes to bring complex data architectures to life. This isn't just about checking boxes on tech stacks; it's about solving real-world data problems, collaborating with smart people, and building robust, future-proof solutions.
In this role, you'll partner closely with engineering, product, and customers to design and deliver high-impact systems that move, transform, and serve data at scale. You'll help customers architect pipelines that are not only performant and cost-efficient but also easy to operate and evolve.
We want someone who's comfortable switching hats between low-level debugging, high-level architecture, and communicating clearly with stakeholders of all technical levels.
Key Responsibilities:
Build distributed data pipelines using technologies like Kafka, Spark (batch & streaming), Python, Trino, Airflow, and S3-compatible data lakes, designed for scale, modularity, and seamless integration across real-time and batch workloads (a minimal sketch follows this list).
Design, deploy, and troubleshoot hybrid cloud/on-prem environments using Terraform, Docker, Kubernetes, and CI/CD automation tools.
Implement event-driven and serverless workflows with precise control over latency, throughput, and fault tolerance trade-offs.
Create technical guides, architecture docs, and demo pipelines to support onboarding, evangelize best practices, and accelerate adoption across engineering, product, and customer-facing teams.
Integrate data validation, observability tools, and governance directly into the pipeline lifecycle.
Own end-to-end platform lifecycle: ingestion → transformation → storage (Parquet/ORC on S3) → compute layer (Trino/Spark).
Benchmark and tune storage backends (S3/NFS/SMB) and compute layers for throughput, latency, and scalability using production datasets.
Work cross-functionally with R&D to push performance limits across interactive, streaming, and ML-ready analytics workloads.
Operate and debug object store-backed data lake infrastructure, enabling schema-on-read access, high-throughput ingestion, advanced searching strategies, and performance tuning for large-scale workloads.
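As a loose sketch of the Kafka-to-data-lake pattern referenced in the first responsibility above, a PySpark Structured Streaming job could land raw events as Parquet on S3. This assumes a Spark environment with the spark-sql-kafka connector and S3 credentials configured; the broker, topic, and bucket names are placeholders.

# Hedged sketch: stream events from Kafka and land them as Parquet on S3.
# Assumes PySpark with the spark-sql-kafka connector and S3 access configured.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-to-lake").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "events")                       # hypothetical topic
    .load()
)

# Kafka delivers binary key/value columns; keep the payload as a string for now.
events = raw.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://example-lake/events/")                  # hypothetical bucket
    .option("checkpointLocation", "s3a://example-lake/_chk/events/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()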
Requirements:
2-4 years in software, solutions, or infrastructure engineering, with 2-4 years focused on building and maintaining large-scale data pipelines, storage, and database solutions.
Proficiency in Trino, Spark (Structured Streaming & batch) and solid working knowledge of Apache Kafka.
Coding background in Python (must-have); familiarity with Bash and scripting tools is a plus.
Deep understanding of data storage architectures including SQL, NoSQL, and HDFS.
Solid grasp of DevOps practices, including containerization (Docker), orchestration (Kubernetes), and infrastructure provisioning (Terraform).
Experience with distributed systems, stream processing, and event-driven architecture.
Hands-on familiarity with benchmarking and performance profiling for storage systems, databases, and analytics engines.
Excellent communication skills: you'll be expected to explain your thinking clearly, guide customer conversations, and collaborate across engineering and product teams.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for driven and talented people like you to join our R&D team and our mission to change the future of cloud security. Ready to dive in and swim with our pod?
Highlights:
High-growth: Over the past six years, we've consistently achieved milestones that take other companies a decade or more. During this time, we've significantly grown our employee base, expanded our customer reach, and rapidly advanced our product capabilities.
Disruptive innovation: Our founders saw that traditional security didn't work for the cloud, so they set out to carve a new path. We're relentless pioneers who invented agentless technology and continue to be the most comprehensive and innovative cloud security company.
Well-capitalized: With a valuation of $1.8 billion, we are a cybersecurity unicorn dominating the cloud security space. We're backed by an impressive team of investors such as Capital G, ICONIQ, GGV, and SVCI, a syndicate of CISOs who invest their own money after conducting their due diligence.
Respectful and transparent culture: Our executives pride themselves on being accessible to everyone and believe in sharing knowledge with the employees. Each employee has a place in shaping the future of our industry.
About the role:
As a Senior Software Engineer on the Cloud Platforms & Orchestration team, you'll be part of one of our most strategic engineering teams, tasked with building the core orchestration infrastructure that powers our entire platform.
This includes, but is not limited to, processes orchestration and management, high-volume daily scan handling, and performance optimization.
Key responsibilities also involve ensuring data integrity, developing new scanning features, and overseeing the customer experience regarding scan visibility, onboarding experience, and cloud environment management.
Youll work on large-scale distributed systems, own critical components of the cloud security data pipeline, and drive architectural decisions.
We're looking for experienced engineers who are eager to make a foundational impact!
Our Stack: Python, Django, Postgres, Redis, Kafka, Terraform, and many AWS services such as SQS, DynamoDB, CloudFormation, EC2, and more.
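For illustration only (not the team's actual code), a small Python worker sketches the kind of queue-driven processing that high-volume scan handling with SQS implies: long-poll the queue, process each message, and delete it only once handling succeeds so that failures are retried. The queue URL and handler are hypothetical, and boto3 with AWS credentials is assumed.

# Hedged sketch: long-poll an SQS queue and acknowledge work only after it succeeds.
# Assumes boto3 and AWS credentials; the queue URL is a placeholder.
import json
import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/scan-requests"  # hypothetical
sqs = boto3.client("sqs")

def handle_scan(request: dict) -> None:
    # Placeholder for the real scan orchestration logic.
    print("starting scan for", request.get("target"))

while True:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        handle_scan(json.loads(msg["Body"]))
        # Delete only after successful handling so failures are retried by SQS.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])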
On a typical day you'll:
Write clean, concise code that is stable, extensible, and unit-tested appropriately.
Write production-ready code that meets design specifications, anticipates edge cases, and accounts for scalability.
Diagnose complex issues, evaluate, recommend and execute the best solution.
Implement new requirements within our Agile delivery methodology while following our established architectural principles.
Lead initiatives end to end from design and planning to implementation and deployment, while aligning cross-functional teams and ensuring technical excellence.
Test software to ensure proper and efficient execution and adherence to business and technical requirements
Provide input into the architecture and design of the product, collaborating with the team in solving problems the right way.
Develop expertise in AWS, Azure, GCP, and other cloud providers and technologies.
Requirements:
Bachelor's degree in Computer Science, Engineering, or relevant experience.
5+ years of professional software development experience.
Proven experience building data-intensive systems at scale.
Experience in working with micro-service architecture & cloud-native services.
Solid understanding of software design principles, concurrency, synchronization, memory management, data structures, algorithms, etc.
Experience with distributed data processing tools like SQS, Kafka.
Hands-on experience with databases such as Postgres, Redis (Advantage).
Experience with Python (Advantage).
Experience with IaC tools like Terraform, CloudFormation (Advantage).
This position is open to all candidates.
 
7 days ago
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a highly skilled and motivated Senior II Software Engineer to join the Operational Experience engineering team. The team is part of the Customer Experience group, which is responsible for the platform, tools, and customer-facing experiences that power how our customers interact with our ecosystem. This is a high-impact, hands-on role in which you'll be working closely with product managers, designers, customer-facing teams, and engineering partners across the company.

You will operate at the intersection of backend engineering, data-intensive systems, platform development, and customer experience. The ideal candidate brings strong expertise in Node.js and TypeScript, along with deep experience working with large-scale data stores, event-driven pipelines, data models, and high-throughput infrastructure. You will work closely with cross-functional partners to design and implement robust backend services, data-access patterns, and operational workflows that power the portal and internal tools. As we invest heavily in Agentic AI, you will also play a central role in shaping and implementing AI-driven capabilities across the platform. While the role is primarily backend, you will occasionally contribute across the full stack when it supports end-to-end delivery.

If you enjoy owning complex problems end to end, improving systems at scale, and building experiences that bring real value to customers, we would love to meet you.

What you'll be doing:
Drive technical direction and architecture within the OX team and across the broader CX organization. You will proactively identify opportunities to improve performance, resilience, cost, scalability, and developer experience, primarily in backend systems but with influence across the stack.
Lead the development of AI-driven and Agentic AI capabilities. Define how LLMs integrate into our platform, build AI-powered workflows, and establish strong engineering patterns for safe and reliable adoption.
Own and evolve the data foundations behind the portal. Optimize pipelines, improve data quality and freshness, and design resilient data-access patterns across Snowflake, Elasticsearch, Kafka, Redis, MySQL, and related systems (a minimal sketch of one such pattern follows this list).
Work closely with product, design, customer-facing teams, and partner engineering groups. Turn ambiguous problems into clear execution plans and ensure alignment with customer and business goals.
Shape shared standards and platform best practices. Guide other teams on backend services, data integration patterns, portal development approaches, and AI-enabled workflows.
Mentor and elevate engineers across the CX group. Promote engineering excellence, share knowledge openly, and help teams adopt effective modern development practices.
Own delivery of high-impact initiatives. Contribute hands-on when needed, remove blockers, maintain execution momentum, and drive projects from concept to production.
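As a small, hedged illustration of one such data-access pattern (a cache-aside read), the sketch below is written in Python even though the portal backend described here is Node.js/TypeScript; the key format, TTL, and loader function are assumptions, not this platform's actual code.

# Hedged sketch of a cache-aside read: try Redis first, fall back to the source
# database, then populate the cache with a TTL. Function and key names are hypothetical.
import json
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_account_summary(account_id: str, load_from_db) -> dict:
    key = f"account:summary:{account_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)               # cache hit
    summary = load_from_db(account_id)          # cache miss: query the source store
    cache.setex(key, 300, json.dumps(summary))  # keep it fresh for 5 minutes
    return summary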
Requirements:
What you'll need:
6+ years of experience as a software engineer with strong expertise in backend development using Node.js and TypeScript, with the ability to work across the stack when needed.
Experience building customer-facing products and working closely with product managers, designers, and customer-facing stakeholders.
Strong familiarity with cloud-native environments. AWS experience is a significant advantage.
Hands-on experience with distributed systems, event-driven architectures, and datastores such as Redis, Kafka, SQS, Elasticsearch, MySQL, and Snowflake.
Demonstrated impact in senior engineering roles. You have led complex technical initiatives, influenced product decisions, and helped drive architecture across teams.
Deep systems thinking with the ability to design and scale robust, performant, and maintainable services.
Excellent communication and collaboration skills. You can discuss architecture with engineers, roadmap with product managers, and explain tradeoffs to non-technical stakeholders.
This position is open to all candidates.
 
01/02/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Data Engineer
What you'll do:
Lead the design and development of scalable and efficient data lake solutions that account for high-volume data coming from a large number of sources, both pre-determined and custom.
Utilize advanced data modeling techniques to create robust data structures supporting reporting and analytics needs.
Implement ETL/ELT processes for extracting, transforming, and loading data from various sources into a data lake that serves our users (a minimal orchestration sketch follows this list).
Identify and address performance bottlenecks within our data warehouse, optimize queries and processes, and enhance data retrieval efficiency.
Collaborate with cross-functional teams (product, analytics, and R&D) to enhance our data solutions.
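As a loose sketch of how such an ELT flow might be orchestrated (Airflow and dbt appear in the requirements below), a minimal Airflow DAG could chain extract, transform, and load steps. Airflow 2.x is assumed and the task bodies are placeholders, not the team's actual logic.

# Hedged sketch: a minimal daily ELT DAG. Assumes Airflow 2.x; task logic is a placeholder.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from sources into the lake")

def transform():
    print("run transformations / dbt models")

def load():
    print("publish modeled tables for analytics")

with DAG(
    dag_id="daily_elt",                # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # 'schedule_interval' on older Airflow 2.x releases
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load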
Who you'll work with:
You'll be joining a collaborative and dynamic team of talented and experienced developers where creativity and innovation thrive.
You'll closely collaborate with our dedicated Product Managers and Designers, working hand in hand to bring our developer portal product to life.
Additionally, you will have the opportunity to work closely with our customers and engage with our product community. Your insights and interactions with them will play an important role to ensure we deliver the best product possible.
Together, we'll continue to empower platform engineers and developers worldwide, providing them with the tools they need to create seamless and robust developer portals. Join us in our mission to revolutionize the developer experience!
Requirements:
5+ years of experience in a Data Engineering role
Expertise in building scalable pipelines and ETL/ELT processes, with proven experience with data modeling
Expert-level proficiency in SQL and experience with large-scale datasets
Strong experience with Snowflake
Strong experience with cloud data platforms and storage solutions such as AWS S3, or Redshift
Hands-on experience with ETL/ELT tools and orchestration frameworks such as Apache Airflow and dbt
Experience with Python and software development
Strong analytical and storytelling capabilities, with a proven ability to translate data into actionable insights for business users
Collaborative mindset with experience working cross-functionally with data engineers and product managers
Excellent communication and documentation skills, including the ability to write clear data definitions, dashboard guides, and metric logic
Advantages:
Experience in Node.js + TypeScript
Experience with streaming data technologies such as Kafka or Kinesis
Familiarity with containerization tools such as Docker and Kubernetes
Knowledge of data governance and data security practices.
This position is open to all candidates.
 
21/01/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Your Mission
As a Senior Data Engineer, your mission is to build the scalable, reliable data foundation that empowers us to make data-driven decisions. You will serve as a bridge between complex business needs and technical implementation, translating raw data into high-value assets. You will own the entire data lifecycle, from ingestion to insight, ensuring that our analytics infrastructure scales as fast as our business.

Key Responsibilities:
Strategic Data Modeling: Translate complex business requirements into efficient, scalable data models and schemas. You will design the logic that turns raw events into actionable business intelligence.
Pipeline Architecture: Design, implement, and maintain resilient data pipelines that serve multiple business domains. You will ensure data flows reliably, securely, and with low latency across our ecosystem.
End-to-End Ownership: Own the data development lifecycle completely-from architectural design and testing to deployment, maintenance, and observability.
Cross-Functional Partnership: Partner closely with Data Analysts, Data Scientists, and Software Engineers to deliver end-to-end data solutions.
Requirements:
What You Bring:
Your Mindset:
Data as a Product: You treat data pipelines and tables with the same rigor as production APIs; reliability, versioning, and uptime matter to you.
Business Acumen: You don't just move data; you understand the business questions behind the query and design solutions that provide answers.
Builder's Spirit: You work independently to balance functional needs with non-functional requirements (scale, cost, performance).
Your Experience & Qualifications:
Must Haves:
6+ years of experience as a Data Engineer, BI Developer, or similar role.
Modern Data Stack: Strong hands-on experience with DBT, Snowflake, Databricks, and orchestration tools like Airflow.
SQL & Modeling: Strong proficiency in SQL and deep understanding of data warehousing concepts (Star schema, Snowflake schema).
Data Modeling: Proven experience in data modeling and business logic design for complex domains, building models that are efficient and maintainable.
Modern Workflow: Proven experience leveraging AI assistants to accelerate data engineering tasks.
Bachelor's degree in Computer Science, Industrial Engineering, Mathematics, or an equivalent analytical discipline.
Preferred / Bonus:
Cloud Data Warehouses: Experience with BigQuery or Redshift.
Coding Skills: Proficiency in Python for data processing and automation.
Big Data Tech: Familiarity with Spark, Kubernetes, Docker.
BI Integration: Experience serving data to BI tools such as Looker, Tableau, or Superset.
This position is open to all candidates.
 
28/01/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Your Career
In a world where remote work is the new norm, organizations' perimeters are much more loosely defined, and cloud-native apps rapidly replace data centers, a new approach is needed to provide connectivity, compliance, and security for all. As a Senior Backend Engineer you will design and build the distributed backend services that are the backbone of our platform. You will need to think broadly about all system components and consider the trade-offs of every design decision you make. You will work with the latest technologies and development methodologies to achieve the best outcomes. This is a unique opportunity to join very early, take charge of the new product architecture, and build it from scratch.
Your Impact
Be responsible for the complete software development life cycle, including requirement analysis, design, development, deployment, and support
Design, implement, and test critical components of the product, taking into account complex considerations of multiple platforms, performance, supportability, maintainability, and much more
Write clean, testable, readable, scalable, and maintainable code that performs well for thousands of customers
Develop a solid understanding of advanced cloud computing and security concepts and be able to explain them to others
Research new technologies and their implications for connectivity and security, then adapt them for use in the company's products
Write design documents, software development guidelines, and best practices.
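For illustration only: the requirements below favor Golang, but a minimal Python/FastAPI sketch conveys the shape of a small, stateless REST service component that can scale horizontally behind an orchestrator. Endpoint paths, models, and the health-check convention are assumptions, not this product's actual API.

# Hedged sketch: a small, stateless REST service suitable for horizontal scaling.
# Assumes FastAPI and uvicorn are installed; all names are illustrative only.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="connectivity-service")

class ConnectionRequest(BaseModel):
    tenant_id: str
    target: str

@app.get("/healthz")
def healthz() -> dict:
    # Liveness probe endpoint for Kubernetes-style orchestration.
    return {"status": "ok"}

@app.post("/v1/connections")
def create_connection(req: ConnectionRequest) -> dict:
    # Placeholder: validate, persist, and hand off to the policy/security layer.
    return {"tenant_id": req.tenant_id, "target": req.target, "state": "pending"}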
Requirements:
Your Experience
3+ years of experience building complex, high-scale SaaS solutions - preferably experienced in Golang
Passion for software engineering and coding - energetic and eager to create and outperform
Experience developing cloud distributed applications and cloud infrastructures
Strong computer science fundamentals
Proven record of designing and implementing scalable REST APIs, services, and data pipelines
Hands-on experience using SQL/NoSQL databases
Understanding of microservices-based deployments, with the ability to introduce monitoring/tracing of application logs (e.g. Splunk) from inception
Familiarity with one or more cloud platforms, such as AWS, Azure, GCP, Kubernetes, and their technologies (Lambda functions, SNS/SQS, etc.)
Experience with Kubernetes/Docker - an advantage.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Data Engineer I - GenAI Foundation Models
21679
The Content Intelligence team is at the forefront of Generative AI innovation, driving solutions for travel-related chatbots, text generation and summarization applications, Q&A systems, and free-text search. Beyond this, the team is building a cutting-edge platform that processes millions of images and textual inputs daily, enriching them with ML capabilities. These enriched datasets power downstream applications, helping personalize the customer experience, for example by selecting and displaying the most relevant images and reviews as customers plan and book their next vacation.
Role Description:
As a Senior Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects: ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.
You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.
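As a hedged sketch of corpus preparation for foundation-model training (not this team's actual pipeline), a PySpark job could read raw text from S3, drop near-empty fragments, and remove exact duplicates by content hash before writing a clean Parquet dataset. The bucket paths and length threshold are assumptions.

# Hedged sketch: basic cleanup and exact-duplicate removal for a text training corpus.
# Assumes PySpark with S3 access configured; paths and thresholds are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("corpus-prep").getOrCreate()

# Each line of the input files becomes one row; assume one document per line here.
docs = spark.read.text("s3a://example-corpus/raw/*.txt")

cleaned = (
    docs.withColumn("value", F.trim(F.col("value")))
    .filter(F.length("value") > 200)                        # drop near-empty fragments
    .withColumn("doc_hash", F.sha2(F.col("value"), 256))    # content hash for exact dedup
    .dropDuplicates(["doc_hash"])
)

cleaned.select("value").write.mode("overwrite").parquet("s3a://example-corpus/clean/")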
Key Job Responsibilities and Duties:
Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
Dealing with massive textual sources to train GenAI foundation models.
Solving issues with data and data pipelines, prioritizing based on customer impact.
End-to-end ownership of data quality in our core datasets and data pipelines.
Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
Providing tools that improve Data Quality company-wide, specifically for ML scientists.
Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
Acting as an intermediary on problems for both technical and non-technical audiences.
Promoting and driving impactful and innovative engineering solutions.
Advancing technical, behavioral, and interpersonal competence via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation.
Collaborating with multidisciplinary teams: working with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions; providing technical guidance and mentorship to junior team members.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 6 years of experience as a Data Engineer or a similar role, with a consistent record of successfully delivering ML/Data solutions.
You have built production data pipelines in the cloud, setting up data lake and serverless solutions; you have hands-on experience with schema design and data modeling, and with working alongside ML scientists and ML engineers to provide production-level ML solutions.
You have experience designing systems end to end and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.)
Strong programming skills in languages such as Python and Java.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar frameworks.
Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems.
Experience with Data Warehousing and ETL/ELT pipelines
Experience in data processing for large-scale language models like GPT, BERT, or similar architectures - an advantage.
Proficiency in data manipulation, analysis, and visualization using tools like NumPy and pandas.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Backend Software Engineer.
As a Backend Software Engineer, your job responsibilities will include:
Build new and exciting components in an ever-growing and evolving market technology to provide scale and efficiency.
Develop high-quality, production-ready code that can be used by millions of users of our cloud platform.
Make design decisions on the basis of performance, scalability, and future expansion.
Work in a Hybrid Engineering model and contribute to all phases of SDLC including design, implementation, code reviews, automation, and testing of the features.
Build efficient components/algorithms on a microservice multi-tenant SaaS cloud environment
Code review, mentoring junior engineers, and providing technical guidance to the team (depending on the seniority level)
JR316760
Requirements:
5+ years of development experience as a software engineer.
Deep knowledge of object-oriented and scripting languages: Java, Python, Scala, C#, Go, Node.js, and C++.
Strong SQL skills and experience with relational and non-relational databases (e.g. Postgres, Trino, Redshift, Mongo).
Experience developing SaaS products over public cloud infrastructure - AWS/Azure/GCP.
Proven experience designing and developing distributed systems at scale.
Proficiency in queues, locks, scheduling, event-driven architecture, and workload distribution, along with a deep understanding of relational and non-relational databases.
A deep understanding of software development best practices and demonstrated leadership skills.
Degree or equivalent relevant experience required. Experience will be evaluated based on the core competencies for the role (e.g. extracurricular leadership roles, military experience, volunteer roles, work experience, etc.)
Desired Skills:
Technical expertise in Generative AI, particularly with RAG systems and Agentic workflows that use large language models.
Experience with Big-Data/ML and S3
Hands-on experience with Streaming technologies like Kafka
Experience with Elastic Search
Experience with Terraform, Kubernetes, Docker
Experience working in a high-paced and rapidly growing multinational organization
This position is open to all candidates.
 
Show more...
הגשת מועמדותהגש מועמדות
עדכון קורות החיים לפני שליחה
עדכון קורות החיים לפני שליחה
8520363
סגור
שירות זה פתוח ללקוחות VIP בלבד