Location: Herzliya
Job Type: Full Time
We are searching for an innovative and experienced Data Engineer who will join our reference and alternative data team within our data group.
As a Data Engineer, you will:
Be a part of a cross functional team of data and backend engineers.
Be responsible for ingesting financial data and providing it over numerous APIs in close collaboration with algorithmic teams and other partners.
Lead the architecture, planning, design, and development of mission-critical and diverse data pipelines across both public cloud and on-premises solutions.
Requirements:
At least 5 years of experience working as a Data Engineer
At least 5 years of experience in Python development, with an emphasis on data-analysis tools such as NumPy, pandas, Polars, and Jupyter notebooks.
Hands-on experience working with AWS data processing tools and concepts.
Proven understanding of designing, developing, and optimizing complex solutions.
Proven experience with the following technologies: Neo4j, MongoDB, Redis, Snowflake.
Experience with Docker, Linux, CI/CD tools and concepts, Kubernetes.
Experience with data pipelining tools such as Airflow, Kubeflow or similar.
BSc/MSc in Computer Science, Engineering, Mathematics, or Statistics.
Advantages:
Hands-on experience with the Databricks platform.
Experience working on large-scale, complex on-premises systems.
Hands-on experience with lower-level programming languages such as C++ or Rust.
Familiarity with capital markets and basic economics knowledge.
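For illustration only, the kind of pandas-based analysis this posting emphasizes might look like the following minimal sketch; the tickers, prices, and column names are invented, not taken from the posting.

```python
import pandas as pd

# Hypothetical daily closing prices for two tickers (illustrative data only).
prices = pd.DataFrame({
    "ticker": ["AAA", "AAA", "AAA", "BBB", "BBB", "BBB"],
    "close":  [100.0, 102.0, 101.0, 50.0, 55.0, 44.0],
})

# Per-ticker simple returns via groupby + pct_change;
# the first row of each ticker has no prior close, so it is NaN.
prices["ret"] = prices.groupby("ticker")["close"].pct_change()
print(prices)
```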
This position is open to all candidates.
 
Job ID: 8547667
15/02/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a skilled and motivated Data Engineer with expertise in Elasticsearch, cloud technologies, and Kafka. As a data engineer, you will be responsible for designing, building and maintaining scalable and efficient data pipelines that will support our organization's data processing needs.
The role will entail:
Design and develop data platforms based on Elasticsearch, Databricks, and Kafka
Build and maintain data pipelines that are efficient, reliable and scalable
Collaborate with cross-functional teams to identify data requirements and design solutions that meet those requirements
Write efficient and optimized code that can handle large volumes of data
Implement data quality checks to ensure accuracy and completeness of the data
Troubleshoot and resolve data pipeline issues in a timely manner.
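A data-quality check of the kind described above can be sketched in plain Python; the field names and rules here are hypothetical, not from the posting.

```python
# Minimal record-level data-quality check (field names are invented).
REQUIRED_FIELDS = {"id", "timestamp", "value"}

def validate(record: dict) -> list[str]:
    """Return a list of quality violations for one record."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    if "value" in record and not isinstance(record["value"], (int, float)):
        errors.append("value is not numeric")
    return errors

good = {"id": 1, "timestamp": "2026-02-15T00:00:00Z", "value": 3.5}
bad = {"id": 2, "value": "n/a"}
print(validate(good))  # []
print(validate(bad))
```

In a real pipeline, checks like this would typically run as a validation stage between ingestion and loading, with violations routed to monitoring rather than printed.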
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field
3+ years of experience in data engineering
Expertise in Elasticsearch, cloud technologies (such as AWS, Azure, or GCP), Kafka and Databricks
Proficiency in programming languages such as Python, Java, or Scala
Experience with distributed systems, data warehousing and ETL processes
Experience with container environments such as AKS/EKS/OpenShift is a plus
A high security clearance is a plus.
This position is open to all candidates.
 
Job ID: 8546147
15/02/2026
Location: Petah Tikva
Job Type: Full Time
We are looking for an experienced, opinionated, and highly technical Staff Data Engineer to define the long-term direction of our Data Platform.
This role is ideal for a senior data leader who combines deep hands-on expertise with strong architectural judgment, enjoys mentoring others, and takes full ownership of complex, high-scale data systems.
You will drive platform architecture, set engineering standards, and lead the design and delivery of scalable batch and streaming data solutions on AWS.
Key Responsibilities:
Technical Leadership & Architecture:
Own and evolve the overall data platform architecture: scalability, reliability, security, and maintainability.
Lead the long-term strategic direction of the platform, balancing performance, cost, and operational excellence.
Introduce and drive adoption of modern data paradigms: lakehouse (Iceberg), event-driven pipelines, schema-aware processing.
Data Lake & Architecture Ownership:
Design, model, and evolve the Data Lake architecture, including:
Storage layout and data organization
Data formats and table design (e.g., Iceberg)
Batch and streaming ingestion patterns
Schema governance and lifecycle policies
Define and promote best practices for data modeling, partitioning, and data quality.
Ensure the Data Lake supports analytics, ML workloads, and operational systems at scale.
Platform Engineering:
Design and build high-scale ETL/ELT pipelines leveraging Apache Spark (EMR/Glue) and AWS-native services.
Optimize production-grade pipelines using S3, Athena, Kinesis, Lambda, Step Functions, and EKS.
Lead the rollout of modern patterns such as lakehouse architectures and event-driven data pipelines.
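As a small illustration of the event-driven, Lambda-based ingestion pattern named above, a handler reacting to the standard S3 event notification shape might look like this. The bucket name, key, and stubbed dispatch step are hypothetical.

```python
import json

def handler(event, context):
    """AWS Lambda handler for an S3-triggered ingestion step (sketch)."""
    results = []
    for rec in event.get("Records", []):
        # Standard S3 event notification shape: Records[].s3.bucket.name / object.key
        bucket = rec["s3"]["bucket"]["name"]
        key = rec["s3"]["object"]["key"]
        # A real pipeline would dispatch to Spark/Glue or enqueue downstream work here.
        results.append(f"s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps(results)}

# Simulated invocation with an invented bucket and key.
event = {"Records": [{"s3": {"bucket": {"name": "data-lake"},
                             "object": {"key": "raw/2026/02/15/part-0.parquet"}}}]}
print(handler(event, None))
```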
Security & Governance:
Ensure alignment with AWS security best practices: IAM, encryption, auditing, and compliance frameworks.
Partner with security and governance teams to support regulated and sensitive data environments.
Mentorship & Collaboration:
Serve as a technical mentor to data engineers; elevate team capabilities.
Lead architecture reviews and cross-team design discussions.
Work closely with Data Science, ML Engineering, Backend, and Product teams to deliver end‑to‑end data solutions.
Requirements:
Technical Expertise:
Advanced experience with Apache Spark (EMR, Glue, PySpark).
Deep expertise in AWS data ecosystem: S3, EMR, Glue, Athena, Lambda, Step Functions, Kinesis.
Strong understanding of Data Lake and Lakehouse architectures.
Experience building scalable batch and streaming pipelines.
Hands-on experience with Infrastructure as Code (Terraform / CDK / SAM).
Python as a primary programming language (TypeScript is a plus).
Leadership Mindset:
Opinionated yet pragmatic; able to defend architectural trade-offs.
Strategic thinker capable of translating long-term vision into actionable roadmaps.
Strong end‑to‑end ownership mentality, from design to production operations.
Passionate about automation, simplicity, and scalable engineering.
Excellent communicator capable of explaining complex decisions to diverse stakeholders.
Nice to Have:
AWS certifications (Solutions Architect, Data Engineer).
Experience supporting ML pipelines or AI-driven analytics.
Familiarity with data governance, data mesh, or self‑service data platforms.
Experience working in regulated, security‑sensitive, or law‑enforcement domains.
This position is open to all candidates.
 
Job ID: 8545999
15/02/2026
Location: Petah Tikva
Job Type: Full Time
We are seeking a Senior Backend & Data Engineer to join our SaaS Data Platform team.
This role offers a unique opportunity to design and build large-scale, high-performance data platforms and backend services that power our cloud-based products.
You will own features end to end, from architecture and design through development and production deployment, while working closely with Data Science, Machine Learning, DevOps, and Product teams.
What You'll Do:
Design, develop, and maintain scalable, secure data platforms and backend services on AWS.
Build batch and streaming ETL/ELT pipelines using Spark, Glue, Athena, Iceberg, Lambda, and EKS.
Develop backend components and data-processing workflows in a cloud-native environment.
Optimize performance, reliability, and observability of data pipelines and backend services.
Collaborate with ML, backend, DevOps, and product teams to deliver data-powered solutions.
Drive best practices, code quality, and technical excellence within the team.
Ensure security, compliance, and auditability using AWS best practices (IAM, encryption, auditing).
Tech Stack:
AWS Services: S3, Lambda, Glue, Step Functions, Kinesis, Athena, EMR, Airflow, Iceberg, EKS, SNS/SQS, EventBridge
Languages: Python (Node.js/TypeScript a plus)
Data & Processing: batch & streaming pipelines, distributed computing, serverless architectures, big data workflows
Tooling: CI/CD, GitHub, IaC (Terraform/CDK/SAM), containerized environments, Kubernetes
Observability: CloudWatch, Splunk, Grafana, Datadog
Requirements:
8+ years of experience in Data Engineering and/or Backend Development in AWS-based, cloud-native environments
Strong hands-on experience writing Spark jobs (PySpark) and running workloads on EMR and/or Glue
Proven ability to design and implement scalable backend services and data pipelines
Deep understanding of data modeling, data quality, pipeline optimization, and distributed systems
Experience with Infrastructure as Code and automated deployment of data infrastructure
Strong debugging, testing, and performance-tuning skills in agile environments
High level of ownership, curiosity, and problem-solving mindset.
Nice to Have:
AWS certifications (Solutions Architect, Data Engineer)
Experience with ML pipelines or AI-driven analytics
Familiarity with data governance, self-service data platforms, or data mesh architectures
Experience with PostgreSQL, DynamoDB, MongoDB
Experience building or consuming high-scale APIs
Background in multi-threaded or distributed system development
Domain experience in cybersecurity, law enforcement, or other regulated industries.
This position is open to all candidates.
 
Job ID: 8545956
15/02/2026
Job Type: Full Time
We are at a pivotal stage in building and scaling our data domain, and we are looking for a Data Engineer to join our growing BI team. This role goes beyond building pipelines. You will help shape our data platform as a shared product - supporting analytics, reporting, and decision-making across key company data domains such as Product, Sales, HR, and others. Your work will directly influence how stakeholders interact with data today and how the platform evolves in the years ahead.
What You'll Be Doing
Architect & Own: Lead the design and development of scalable data warehouse and BI solutions. You will make early-stage architectural decisions and own their long-term impact.
Infrastructure as a Product: Build core data infrastructure and developer experiences that others rely on, ensuring high availability and system reliability.
End-to-End ELT/ETL: Solve complex integration problems by sourcing data from structured and unstructured sources using Rivery, Python, and optimal ETL patterns.
Data Quality & Governance: Implement frameworks for schema evolution, anomaly detection, and data freshness. You will determine security models based on privacy requirements and evolve governance processes.
Strategic Collaboration: Partner with Engineers, Product Managers, and Data Analysts to conceptualize data needs and represent key insights in a meaningful way.
Optimization: Assist in owning production processes, optimizing complex code through advanced algorithmic concepts to manage operational cost-benefit tradeoffs.
Requirements:
Experience: 5+ years of experience in Data Engineering, Infrastructure, or Platform Engineering (ideally in organizations operating at a meaningful scale).
Technical Mastery: 5+ years of hands-on experience with Python and SQL. Deep proficiency in data modeling (Star/Snowflake schema) and DWH methodologies.
Cloud & Tools: Proven experience with Snowflake and AWS. Familiarity with Rivery or similar orchestration tools (such as dbt) is a major advantage.
Production-First Mindset: Track record of leading data initiatives end-to-end from design and building to shipping and operating production flows.
Analytical Rigor: Ability to triage issues, resolve data quality problems, and design systems that handle system complexity with ease.
Education: Bachelor's degree in Computer Science, Computer Engineering, or a relevant technical field.
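The star-schema modeling this posting calls for can be sketched with a tiny in-memory database; the table names, columns, and values below are illustrative, not from the posting.

```python
import sqlite3

# Minimal star schema: one fact table keyed to two dimension tables.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, day TEXT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales (
    date_key INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    amount REAL
);
INSERT INTO dim_date VALUES (1, '2026-02-15');
INSERT INTO dim_product VALUES (10, 'widget');
INSERT INTO fact_sales VALUES (1, 10, 99.5), (1, 10, 0.5);
""")

# Typical BI query: aggregate the fact table, labeled via a dimension join.
total = cur.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p USING (product_key)
    GROUP BY p.name
""").fetchone()
print(total)  # ('widget', 100.0)
```

The same shape scales up in a warehouse such as Snowflake: facts stay narrow and additive, while descriptive attributes live in the dimensions.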
This position is open to all candidates.
 
Job ID: 8545923
15/02/2026
Location: Petah Tikva
Job Type: Full Time
Required Data Engineer
Position Overview:
We are assembling an elite, small-scale team of innovators committed to a transformative mission: advancing generative AI from conceptual breakthrough to tangible product reality. As a Senior Data Engineer, you will be the critical data backbone of our innovation engine, transforming raw data into the fuel that powers groundbreaking GenAI solutions, driving our digital intelligence capabilities to unprecedented heights.
Your Strategic Role:
You are not just a data engineer - you are a strategic enabler of GenAI innovation. Your primary mission is to:
Prepare, structure, and optimize data for cutting-edge GenAI project exploration
Design data infrastructures that support rapid GenAI prototype development
Uncover unique data insights that can spark transformative AI project ideas
Create flexible, robust data pipelines that accelerate GenAI research and development
What Sets This Role Apart:
Data as the Foundation of AI Innovation
You'll be working at the intersection of advanced data engineering and generative AI
Your data solutions will directly enable the team's ability to experiment with and develop novel AI concepts
Every data pipeline you design has the potential to unlock a breakthrough GenAI project
Exploration and Innovation
Conduct deep data exploration to identify potential GenAI application areas
Work closely with AI researchers to understand data requirements for cutting-edge GenAI projects.
Requirements:
Data Engineering Expertise:
Advanced skills in designing data architectures that support GenAI research
Ability to work with diverse, complex datasets across multiple domains
Expertise in preparing and transforming data for AI model training
Proficiency in creating scalable, flexible data infrastructure
Technical Capabilities:
Deep understanding of data requirements for machine learning and generative AI
Expertise in cloud-based data platforms
Advanced skills in data integration, transformation, and pipeline development
Ability to develop automated data processing solutions optimized for AI research
Research and Innovation Skills:
Proven ability to derive strategic insights from complex datasets
Creative approach to data preparation and feature engineering
Capacity to identify unique data opportunities for GenAI projects
Strong experimental mindset with rigorous analytical capabilities
Education & Experience:
Degree in Computer Science, Data Science, or related field
5+ years of progressive data engineering experience
Demonstrated expertise in:
Cloud platforms (AWS, Google Cloud, Azure)
Big Data technologies
Advanced SQL and NoSQL database systems
Data pipeline development for AI/ML applications
Performance optimization techniques
Technical Skill Requirements:
Expert-level SQL and database management
Proficiency in Python, with strong data processing capabilities
Experience in data warehousing and ETL processes
Advanced knowledge of data modeling techniques
Understanding of machine learning data preparation techniques
Experience integrating with BigQuery - advantage.
This position is open to all candidates.
 
Job ID: 8545864
15/02/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
Using the newest technologies, we're working on solving a huge problem all enterprises face today: governing all of their employees' access to third-party vendors (GitHub, SendGrid, Atlassian, and thousands more!) and making sure there is no leftover or unwanted access to any of the organization's SaaS and AI assets. The SaaS and AI security field is complex and challenging, so we're looking for super-talented people who are not afraid of technical challenges and breaking down barriers to achieve good solutions.
The job
As a Senior Data Engineer, you will have a leading role in developing our data platform, creating and extending our data infrastructure to allow research, development and BI across the company. You will work with multiple stakeholders (architects, analysts, data scientists, and more) to help enrich our data with additional insights.
Responsibilities
Implement robust cloud-based data infrastructure and pipelines as part of our Data Lakehouse infrastructure.
Collaborate with analysts, data scientists and other stakeholders to develop new products and features.
Ensure data integrity by extending our monitoring infrastructure.
Take part in challenging data migration and remodeling efforts.
Contribute to a culture of learning and knowledge-sharing within the team.
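A monitoring check of the kind mentioned above (ensuring data integrity) often starts with data freshness; a minimal sketch, with hypothetical thresholds and no real infrastructure, could look like this.

```python
from datetime import datetime, timedelta, timezone

def is_stale(last_loaded: datetime, max_age: timedelta) -> bool:
    """True if a dataset's last successful load is older than the allowed age."""
    return datetime.now(timezone.utc) - last_loaded > max_age

# Simulated load timestamps (invented): one recent, one two days old.
fresh = datetime.now(timezone.utc) - timedelta(minutes=5)
stale = datetime.now(timezone.utc) - timedelta(days=2)

threshold = timedelta(hours=1)  # hypothetical SLA per table
print(is_stale(fresh, threshold), is_stale(stale, threshold))
```

A real monitoring layer would read `last_loaded` from pipeline metadata and alert instead of printing.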
Requirements:
5+ years of hands-on experience in building scalable data infrastructure, in particular data lake or data warehouse architectures.
Extensive experience with cloud-based data platforms like AWS, Azure, or GCP, and tools such as Spark, Kafka, Athena, Airflow, DBT, AWS Glue.
Strong programming skills in Python and significant SQL experience.
Independent learner with a "can do" attitude and a strong sense of ownership.
Experience with containerization technologies - advantage.
This position is open to all candidates.
 
Job ID: 8545311
13/02/2026
Job Type: Full Time
Welcome to Chargeflow
Chargeflow is at the forefront of fintech + AI innovation, backed by leading venture capital firms. Our mission is to build a fraud-free global commerce ecosystem by leveraging the newest technology, freeing online businesses to focus on their core ideas and growth. We are building the future, and we need you to help shape it.
Who We're Looking For - The Dream Maker
We are seeking an experienced Senior Data Platform Engineer to design and scale the robust, cost-efficient infrastructure powering our groundbreaking fraud prevention solution. In this role, you will architect distributed systems and cloud-native technologies to safeguard our clients' revenue while driving technical initiatives that align with business objectives and operational efficiency. Our ultimate goal is to equip our clients with resilient safeguards against chargebacks, empowering them to protect their revenue and optimize their profitability. Join us on this thrilling mission to redefine the battle against fraud.
Your Arena
Infrastructure & FinOps: Design scalable, robust backend services while owning cloud cost management to ensure maximum resource efficiency.
High-Performance Engineering: Architect distributed systems and real-time pipelines capable of processing millions of daily transactions.
Operational Excellence: Champion Infrastructure-as-Code (IaC), security, and observability best practices across the R&D organization.
Leadership: Lead technical initiatives, mentor engineers, and drive cross-functional collaboration to solve complex infrastructure challenges.
Requirements:
What It Takes - Must Haves:
Experience: 5+ years of experience in data platform engineering, backend engineering, or infrastructure engineering.
Language Proficiency: Strong, specific proficiency in Python and software engineering principles.
Cloud Native: Extensive experience with AWS, GCP, or Azure and cloud-native architectures.
Databases: Deep knowledge of both relational (e.g., PostgreSQL) and NoSQL databases, including performance optimization, cost tuning, and scaling strategies.
Architecture: Strong experience designing and implementing RESTful APIs, microservices architectures, and event-driven systems.
Containerization & IaC: Experience with containerization technologies (Docker, Kubernetes) and Infrastructure-as-Code (e.g., Terraform, CloudFormation).
System Design: Strong understanding of distributed systems principles, concurrency, and scalability patterns.
Nice-to-Haves:
Strong Advantage: Apache Iceberg (Lakehouse/S3/Glue), Apache Spark (optimization), message queues (Kafka/Kinesis), graph databases (experience with schema design, cluster setup, and ongoing management of engines like Amazon Neptune or Neo4j).
Tech Stack: Orchestration (Temporal/Dagster/Airflow), modern data stack (dbt/DuckDB), streaming (Flink/Kafka Streams), observability (Datadog/Grafana).
Skills: FinOps (Cost Explorer/Spot Instances), CI/CD and DevOps, data governance (GDPR), Pydantic, and mentorship/leadership experience.
Our Story
Chargeflow is a leading force in fintech innovation, tackling the pervasive issue of chargeback fraud that undermines online businesses. Born from a deep passion for technology and a commitment to excel in eCommerce and fintech, we've developed an AI-driven solution aimed at combating the frustrations of credit card disputes. Our diverse expertise in fintech, eCommerce, and technology positions us as a beacon for merchants facing unjust chargebacks, supported by a unique success-based approach.
Backed by $49M led by Viola Growth, OpenView, Sequoia Capital and other top tier global investors, Chargeflow has embarked on a product-led growth journey. Today, we represent a tight-knit community of passionate individuals and entrepreneurs, united in our mission to revolutionize eCommerce and fight against chargeback fraud, marking us as pioneers in protecting online business revenues.
This position is open to all candidates.
 
Job ID: 8476565