21/12/2025
Location: Herzliya
Job Type: Full Time
We are now looking for a Data Engineer to join our team and play a key role in building and optimizing large-scale Big Data systems in production environments.

Key Responsibilities:
Design, implement, and maintain Big Data pipelines in production.
Work extensively with Apache Spark (2.x and above), focusing on complex joins, shuffle optimization, and performance improvements at scale.
Integrate Spark with relational databases, NoSQL systems, cloud storage, and streaming platforms.
Contribute to system architecture and ensure scalability, reliability, and efficiency in data processing workflows.
Requirements:
Proven hands-on experience as a Data Engineer in production Big Data environments.
Hands-on experience in Python development is required.
Expertise in Apache Spark, including advanced performance optimization and troubleshooting.
Practical experience with complex joins, shuffle optimization, and large-scale performance improvements.
Familiarity with relational and NoSQL databases, cloud data storage, and streaming platforms.
Strong understanding of distributed computing principles and Big Data architecture patterns.
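The "complex joins, shuffle optimization" work this posting emphasizes often comes down to handling skewed join keys. As an illustrative sketch only (not this employer's code), here is the key-salting idea in plain Python; in Spark the same trick is applied to DataFrame join keys so a hot key's rows spread across several shuffle partitions:

```python
import random

NUM_SALTS = 4  # assumed number of sub-partitions per hot key

def salt_key(key: str) -> str:
    """Spread a skewed key across NUM_SALTS sub-keys on the large side."""
    return f"{key}#{random.randrange(NUM_SALTS)}"

def explode_key(key: str) -> list[str]:
    """Replicate the small side so every salted variant finds a match."""
    return [f"{key}#{i}" for i in range(NUM_SALTS)]

# Large, skewed side: many rows share the key "IL".
large = [("IL", v) for v in range(1000)] + [("US", 1), ("US", 2)]
# Small dimension side.
small = {"IL": "Israel", "US": "United States"}

# Replicated lookup table for the small side: one entry per salted variant.
lookup = {sk: name for key, name in small.items() for sk in explode_key(key)}

# Salted join: each large row picks a random sub-key, so the work for the
# hot key "IL" is spread over NUM_SALTS partitions instead of one.
joined = [(lookup[salt_key(k)], v) for k, v in large]
```

In a real Spark job the trade-off is extra rows on the small side in exchange for evenly sized shuffle partitions.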
This position is open to all candidates.
 
Job ID: 8466276
21/12/2025
Job Type: Full Time
We're looking for a Senior Data Scientist to join a group that specializes in AI Security and Safety within Networking Cybersecurity. As a Senior Data Scientist in Networking Cybersecurity, you'll have the opportunity to take an active part in the research and development of our company's world-class networking and data center products. This role involves creative problem solving alongside security and engineering teams, and is key to the continued success of AI networking security.
What you'll be doing:
Leading e2e development of AI solutions for security and safety, including implementing and improving agents, models and algorithms.
Collaborating closely with software and hardware engineers on new and diverse AI-driven solutions that include deep learning, vision, LLMs, VLMs, agents, time series and classic ML.
Optimizing and fine-tuning agents and models for performance, scalability, and resource utilization, considering factors such as latency, efficiency, and cost.
Measuring and benchmarking performance to drive improvements.
Creating efficient flywheels for feedback and improvement.
Participating in developing and reviewing code, design documents, use case reviews, and test plan reviews.
Requirements:
MS/PhD with expertise in Computer Science, Computer Engineering, Electrical Engineering or related field with a focus on Deep Learning or Machine Learning.
4+ years of experience in deep learning and machine learning in a production environment. Experience with developing agents and agentic workflows.
Excellent programming skills in Python with software design fundamentals.
Hands-on experience with deep learning development frameworks and libraries (e.g. TensorFlow, PyTorch).
Experience with large-scale production systems and pipelines, with a track record of developing production-grade models.
Strong algorithm development experience, with knowledge of inference optimization techniques such as model fine tuning, distillation, quantization, pruning.
Background with algorithms including zero/few-shot learning, self-supervised and unsupervised learning, synthetic data creation.
Experience with VLMs, LLMs, agents, RAG and MCP.
You are proactive, take full ownership of your deliverables, have a can-do approach, and are excited to learn, explore and apply your skills and creativity to some of the most challenging and rewarding problems in the field.
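Of the inference-optimization techniques listed above (fine-tuning, distillation, quantization, pruning), the easiest to show in a few lines is symmetric int8 weight quantization. This is a generic illustrative sketch, not this company's implementation:

```python
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Symmetric linear quantization: map floats onto the int8 range [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from the quantized values."""
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Reconstruction error is bounded by half a quantization step (scale / 2).
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Production frameworks add per-channel scales, calibration, and activation quantization on top of this basic scheme.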
Ways to stand out from the crowd:
Strong software development experience
Familiarity with GPU based technologies like CUDA, CuDNN and TensorRT.
Experience with tools for data processing and storage (e.g. Apache Spark, Hadoop, SQL databases, NoSQL databases).
Security and networking background, with knowledge of security protocols, network architectures, firewalls, intrusion detection systems, and other relevant security and networking concepts.
This position is open to all candidates.
 
Job ID: 8465402
21/12/2025
Location: More than one
Job Type: Full Time
We are looking for an expert Data Engineer to build and evolve the data backbone for our R&D telemetry and performance analytics ecosystem. Responsibilities include processing large quantities of raw data from live systems at the cluster level: hardware, communication units, software, and efficiency indicators. You'll be part of a fast-paced R&D organization, where system behavior, schemas, and requirements evolve constantly. Your mission is to develop flexible, reliable, and scalable data-handling pipelines that can adapt to rapid change and deliver clean, trusted data for engineers and researchers.
What you'll be doing:
Build flexible data ingestion and transformation frameworks that can easily handle evolving schemas and changing data contracts
Develop and maintain ETL/ELT workflows for refining, enriching, and classifying raw data into analytics-ready form
Collaborate with R&D, hardware, DevOps, ML engineers, data scientists and performance analysts to ensure accurate data collection from embedded systems, firmware, and performance tools
Automate schema detection, versioning, and validation to ensure smooth evolution of data structures over time
Maintain data quality and reliability standards, including tagging, metadata management, and lineage tracking
Enable self-service analytics by providing curated datasets, APIs, and Databricks notebooks.
Requirements:
B.Sc. or M.Sc. in Computer Science, Computer Engineering, or a related field
5+ years of experience in data engineering, ideally in telemetry, streaming, or performance analytics domains
Confirmed experience with Databricks and Apache Spark (PySpark or Scala)
Understanding of streaming processes and their applications (e.g., Apache Kafka for ingestion, schema registry, event processing)
Proficiency in Python and SQL for data transformation and automation
Shown knowledge in schema evolution, data versioning, and data validation frameworks (e.g., Delta Lake, Great Expectations, Iceberg, or similar)
Experience working with cloud platforms (AWS, GCP, or Azure); AWS preferred
Familiarity with data orchestration tools (Airflow, Prefect, or Dagster)
Experience handling time-series, telemetry, or real-time data from distributed systems
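Frameworks like Great Expectations or Delta Lake enforce checks of roughly the shape below. This is a hand-rolled, stripped-down sketch of schema validation (column names and the `validate` helper are made up for illustration, not any framework's API):

```python
# Expected schema: column name -> Python type. A new field arriving in the
# data but absent here is flagged, which supports controlled schema evolution.
SCHEMA = {"device_id": str, "temp_c": float, "ts": int}

def validate(record: dict) -> list[str]:
    """Return a list of human-readable schema violations for one record."""
    errors = []
    for col, typ in SCHEMA.items():
        if col not in record:
            errors.append(f"missing column: {col}")
        elif not isinstance(record[col], typ):
            errors.append(f"{col}: expected {typ.__name__}, "
                          f"got {type(record[col]).__name__}")
    for col in record.keys() - SCHEMA.keys():
        errors.append(f"unexpected column: {col}")
    return errors

good = {"device_id": "gpu-07", "temp_c": 71.5, "ts": 1700000000}
bad = {"device_id": "gpu-07", "temp_c": "hot", "extra": 1}
```

Real validation frameworks add versioned expectation suites, statistical checks (ranges, null rates), and lineage metadata on top of this kind of per-record typing check.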
Ways to stand out from the crowd:
Exposure to hardware, firmware, or embedded telemetry environments
Knowledge of real-time analytics frameworks (Spark Structured Streaming, Flink, Kafka Streams)
Understanding of system performance metrics (latency, throughput, resource utilization)
Experience with data cataloging or governance tools (DataHub, Collibra, Alation)
Familiarity with CI/CD for data pipelines and infrastructure-as-code practices.
This position is open to all candidates.
 
Job ID: 8465345
19/12/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a brilliant Marketing Data Analyst to join our data department. In this position, you will work alongside the various marketing teams responsible for acquiring new users. You'll use your analytical, technical, and business expertise to track and measure marketing efforts, uncover trends, spot opportunities, and support key decision-making processes. You will be required to answer complex questions defined by executive management and various business units, develop insightful recommendations, and initiate deep-dive analyses. We're looking for someone with a passion for numbers and the marketing domain, and a proactive mindset that brings creative ideas to solve tough challenges and spot new opportunities. You will leverage AI tools, automation, and advanced analytics to streamline workflows, uncover insights, and support data-driven marketing decisions.
What am I going to do:
* Conduct deep-dive analysis to answer complex business questions and deliver comprehensive recommendations.
* Build and maintain the team's reporting set and dashboards.
* Develop marketing measurement methodologies and data science models.
* Provide ROI-based recommendations to optimize marketing spend across channels.
* Identify opportunities for automation and smarter ways of working through data and AI tools.
* Manage multiple tasks in a fast-paced working environment.
* Collaborate closely with stakeholders across the marketing department.
Requirements:
* At least 5 years of experience as a data analyst (Online industry advantage), out of which at least 2-3 years of experience as a Marketing data analyst.
* Extensive knowledge in the marketing domain.
* Vast knowledge of SQL - Must.
* BA/B.Sc. in industrial/information systems engineering, Computer Science, statistics or equivalent
* Experience with reporting platforms - Must (Tableau advantage)
* Experience in working with Big Data tools (BigQuery advantage)
* Exceptional analytical and problem-solving skills with a proactive and creative mindset.
* Experience leveraging AI tools, automation, or advanced analytics in marketing workflows is a strong advantage.
* Proven ability to work independently, take initiative, and learn on your own.
* Tech-savvy and curious, always looking for ways to improve processes and uncover insights.
* Ability to work effectively both independently and as part of a collaborative team.
* Strong communication skills to present findings effectively to both technical and non-technical audiences
At our company, we're not about checklists. If you don't meet 100% of the requirements for this role but still feel passionate about the position and think you have the right skills and qualifications to excel at it, we want to hear from you.
This position is open to all candidates.
 
Job ID: 8398021
19/12/2025
Job Type: Full Time
Welcome to Chargeflow
Chargeflow is at the forefront of fintech + AI innovation, backed by leading venture capital firms. Our mission is to build a fraud-free global commerce ecosystem by leveraging the newest technology, freeing online businesses to focus on their core ideas and growth. We are building the future, and we need you to help shape it.
Who We're Looking For - The Dream Maker
We're in search of an experienced and skilled Senior Data Engineer to join our growing data team. As part of our data team, you'll be at the forefront of crafting a groundbreaking solution that leverages cutting-edge technology to combat fraud. The ideal candidate will have a strong background in designing and implementing large-scale data solutions, with the potential to grow into a leadership role. This position requires a deep understanding of modern data architectures, cloud technologies, and the ability to drive technical initiatives that align with business objectives. Our ultimate goal is to equip our clients with resilient safeguards against chargebacks, empowering them to safeguard their revenue and optimize their profitability. Join us on this thrilling mission to redefine the battle against fraud.
Your Arena
* Design, develop, and maintain scalable, robust data pipelines and ETL processes
* Architect and implement complex data models across various storage solutions
* Collaborate with R&D teams, data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality solutions
* Ensure data quality, consistency, security, and compliance across all data systems
* Play a key role in defining and implementing data strategies that drive business value
* Contribute to the continuous improvement of our data architecture and processes
* Champion and implement data engineering best practices across the R&D organization, serving as a technical expert and go-to resource for data-related questions and challenges
* Participate in and sometimes lead code reviews to maintain high coding standards
* Troubleshoot and resolve complex data-related issues in production environments
* Evaluate and recommend new technologies and methodologies to improve our data infrastructure
Requirements:
What It Takes - Must-haves:
* 5+ years of experience in data engineering, with strong proficiency in Python and software engineering principles - Must
* Extensive experience with GraphDB - MUST
* Extensive experience with AWS, GCP, Azure and cloud-native architectures - Must
* Deep knowledge of both relational (e.g., PostgreSQL) and NoSQL databases - Must
* Designing and implementing data warehouses and data lakes - Must
* Strong understanding of data modeling techniques - Must
* Expertise in data manipulation libraries (e.g., Pandas) and big data processing frameworks - Must
* Experience with data validation tools such as Pydantic & Great Expectations - Must
* Proficiency in writing and maintaining unit tests (e.g., Pytest) and integration tests - Must
Nice-to-Haves:
* Apache Iceberg - Experience building, managing and maintaining Iceberg lakehouse architecture with S3 storage and AWS Glue catalog - Strong Advantage
* Apache Spark - Proficiency in optimizing Spark jobs, understanding partitioning strategies, and leveraging core framework capabilities for large-scale data processing - Strong Advantage
* Modern data stack tools - DBT, DuckDB, Dagster or any other Data orchestration tool (e.g., Apache Airflow, Prefect) - Advantage
* Designing and developing backend systems, including- RESTful API design and implementation, microservices architecture, event-driven systems, RabbitMQ, Apache Kafka - Advantage
* Containerization technologies- Docker, Kubernetes, and IaC (e.g., Terraform) - Advantage
* Stream processing technologies (e.g., Apache Kafka, Apache Flink) - Advantage
* Understanding of compliance requirements (e.g., GDPR, CCPA) - Advantage
This position is open to all candidates.
 
Job ID: 8397445
18/12/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a senior NLP Data Scientist to lead our marketing-language predictive modeling. You'll explore complex datasets, build and fine-tune advanced NLP/LLM models, and partner closely with engineering and product to integrate your work into our AI content platform.
Responsibilities:
Define and own the technical strategy and roadmap for marketing language predictive modeling
Analyze, process, and derive insights from our ever-growing data resources
Solve challenging prediction problems by applying the latest ML/NLP techniques to build and fine-tune highly performant LLMs
Collaborate with engineers and product managers to ship models end-to-end, from research to production
Monitor and iterate on models using offline and online metrics
Requirements:
MSc or PhD in Computer Science or a related field
5+ years of hands-on experience developing NLP/LLM algorithms and systems
Strong proficiency in Python and core data/ML libraries, such as NumPy, pandas, scikit-learn, PyTorch and/or TensorFlow.
Comfortable working in a fast-paced, dynamic work environment
Excellent written and verbal communication skills in English
Previous experience in AdTech - advantage
This position is open to all candidates.
 
Job ID: 8464154
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a talented and motivated Senior Data Scientist to join our AI team. In this role, you will lead the development of our best-in-class conversational heart health agent. You will work on highly complicated challenges, spanning from deep research to full-scale production. Your work will leverage cutting-edge Generative AI and agentic architectures to help solve one of the world's most significant and urgent clinical issues: cardiovascular disease. Your contribution will directly impact healthcare practices and enhance the well-being of individuals globally.
Responsibilities:
You will take ownership of designing and building a conversational heart health agent, utilizing advanced LLM techniques, RAG (Retrieval-Augmented Generation), and flow engineering.
You will architect and implement complex agentic and multi-agentic graphs using tools like LangChain and LangGraph to handle intricate clinical logic and user interactions.
You will not just research; you will write production-grade code. You will be responsible for optimizing, deploying, and maintaining these systems in a live environment.
You will implement robust monitoring and debugging workflows for agentic products (using tools like LangFuse or LangSmith) to ensure safety, accuracy, and performance.
You will work closely with product managers and clinical experts to translate clinical research into digital insights and conversational flows that will be used on a daily basis.
You will innovate and make new ideas happen fast, in an agile way, keeping up with the rapidly evolving landscape of GenAI.
Requirements:
5+ years of hands-on experience in developing machine learning and statistical models using Python (Required).
MSc in Computer Science, Electrical Engineering, Statistics, Applied Math or other related fields (Required).
Deep experience with Generative AI, specifically LLMs, Prompt Engineering, RAG, and flow engineering (Required).
Proven experience building agentic and multi-agentic graphs using LangChain or LangGraph (Required).
Experience with monitoring and debugging agentic products in production using tools such as LangFuse or LangSmith (Required).
Strong production experience with the proven ability to transfer research ideas into a scalable, production-grade system.
Experience working with PyTorch / TensorFlow and other standard DL tools (Required).
Expertise in data mining algorithms and statistical modeling techniques such as clustering, classification, regression, decision trees, neural nets, support vector machines, genetic algorithms, and anomaly detection.
Comfortable working in a dynamic group with several ongoing concurrent projects, both in the research phase and production phase.
Advantages:
PhD in Computer Science, Electrical Engineering, Statistics, Applied Math or other related fields.
Experience in the healthcare industry and working with clinical data / EMR.
This position is open to all candidates.
 
Job ID: 8463169
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a visionary Director of Algorithm to join our leadership team and report to the VP of R&D. This is a high-impact strategic role for a leader who combines deep expertise in machine learning and advanced AI with a proven track record of turning innovation into measurable business results.

As Director of AI, you will lead and expand a group of algorithm engineers embedded in Product pods, play a pivotal role in roadmap planning, and balance cutting-edge research with practical productization. You will drive cross-organizational initiatives that elevate our products and operations, defend our competitive moat, and unlock new opportunities for growth and innovation.

Key Responsibilities

Team & Talent Leadership

Manage, mentor, and inspire a team of algorithm engineers, fostering technical excellence and career growth.
Scale the AI team through hiring, skill development, and cultivating a strong data science culture.
Promote cross-functional collaboration, execution, knowledge sharing, and experimentation.
Attract senior AI talent to build strong teams.
AI Roadmap & Innovation

Execute the AI roadmap, aligned with the company strategy and product vision.
Introduce state-of-the-art techniques to solve complex challenges.
Drive initiatives that push the boundaries of what's possible, turning AI into a measurable business differentiator.
Technical Leadership

Personally lead the resolution of the most complex AI/ML challenges.
Architect and guide the implementation of production-ready ML systems, ensuring robustness, accuracy, and low latency.
Maintain a strong applied research perspective while delivering practical, business-impacting solutions.

Defending & Expanding Our Moat

Keep us ahead of industry shifts by monitoring emerging technologies and evaluating competitive threats.
Build defensible AI assets - unique data pipelines, proprietary models, domain-specific enhancements - that are hard to replicate.
Shape intellectual property (publications, patents, methodologies) that strengthens our positioning.
Requirements:
10+ years of professional experience in AI/ML and data science with at least 5 years in leadership roles.
Advanced degree (MSc/PhD) in Computer Science, Mathematics, or related field.
Proven success leading data science / AI teams in production environments.
Strong grounding in ML/DL, model architectures, and data pipelines.
Track record of translating research into scalable production systems with measurable business impact.
Practical trade-off mindset - build vs. buy, open-source vs. proprietary, etc.
Excellent communication skills - capable of influencing executives, mentoring engineers, and educating non-technical stakeholders.
Passion for continuous learning, innovation, and knowledge sharing.
This position is open to all candidates.
 
Job ID: 8462996
18/12/2025
Job Type: Full Time
We're in search of an experienced and skilled Senior Data Engineer to join our growing data team. As part of our data team, you'll be at the forefront of crafting a groundbreaking solution that leverages cutting-edge technology to combat fraud. The ideal candidate will have a strong background in designing and implementing large-scale data solutions, with the potential to grow into a leadership role. This position requires a deep understanding of modern data architectures, cloud technologies, and the ability to drive technical initiatives that align with business objectives.
Our ultimate goal is to equip our clients with resilient safeguards against chargebacks, empowering them to safeguard their revenue and optimize their profitability. Join us on this thrilling mission to redefine the battle against fraud.
Your Arena
Design, develop, and maintain scalable, robust data pipelines and ETL processes
Architect and implement complex data models across various storage solutions
Collaborate with R&D teams, data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality solutions
Ensure data quality, consistency, security, and compliance across all data systems
Play a key role in defining and implementing data strategies that drive business value
Contribute to the continuous improvement of our data architecture and processes
Champion and implement data engineering best practices across the R&D organization, serving as a technical expert and go-to resource for data-related questions and challenges
Participate in and sometimes lead code reviews to maintain high coding standards
Troubleshoot and resolve complex data-related issues in production environments
Evaluate and recommend new technologies and methodologies to improve our data infrastructure.
Requirements:
What It Takes - Must-haves:
5+ years of experience in data engineering, with specific, strong proficiency in Python & software engineering principles - Must
Extensive experience with AWS, GCP, Azure and cloud-native architectures - Must
Deep knowledge of both relational (e.g., PostgreSQL) and NoSQL databases - Must
Designing and implementing data warehouses and data lakes - Must
Strong understanding of data modeling techniques - Must
Expertise in data manipulation libraries (e.g., Pandas) and big data processing frameworks - Must
Experience with data validation tools such as Pydantic & Great Expectations - Must
Proficiency in writing and maintaining unit tests (e.g., Pytest) and integration tests - Must
Nice-to-Haves:
Apache Iceberg - Experience building, managing and maintaining Iceberg lakehouse architecture with S3 storage and AWS Glue catalog - Strong Advantage
Apache Spark - Proficiency in optimizing Spark jobs, understanding partitioning strategies, and leveraging core framework capabilities for large-scale data processing - Strong Advantage
Modern data stack tools - DBT, DuckDB, Dagster or any other Data orchestration tool (e.g., Apache Airflow, Prefect) - Advantage
Designing and developing backend systems, including- RESTful API design and implementation, microservices architecture, event-driven systems, RabbitMQ, Apache Kafka - Advantage
Containerization technologies- Docker, Kubernetes, and IaC (e.g., Terraform) - Advantage
Stream processing technologies (e.g., Apache Kafka, Apache Flink) - Advantage
Understanding of compliance requirements (e.g., GDPR, CCPA) - Advantage
Experience mentoring junior engineers or leading small project teams
Excellent communication skills with the ability to explain complex technical concepts to various audiences
Demonstrated ability to work independently and lead technical initiatives
Relevant certifications in cloud platforms or data technologies.
This position is open to all candidates.
 
Job ID: 8462760
17/12/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
Design, implement, and maintain robust data pipelines and ETL/ELT processes on GCP (BigQuery, Dataflow, Pub/Sub, etc.).
Build, orchestrate, and monitor workflows using Apache Airflow / Cloud Composer.
Develop scalable data models to support analytics, reporting, and operational workloads.
Apply software engineering best practices to data engineering: modular design, code reuse, testing, and version control.
Manage GCP resources (BigQuery reservations, Cloud Composer/Airflow DAGs, Cloud Storage, Dataplex, IAM).
Optimize data storage, query performance, and cost through partitioning, clustering, caching, and monitoring.
Collaborate with DevOps/DataOps to ensure data infrastructure is secure, reliable, and compliant.
Partner with analysts and data scientists to understand requirements and translate them into efficient data solutions.
Mentor junior engineers, provide code reviews, and promote engineering best practices.
Act as a subject matter expert for GCP data engineering tools and services.
Define and enforce standards for metadata, cataloging, and data documentation.
Implement monitoring and alerting for pipeline health, data freshness, and data quality.
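Partitioning is what enables the cost and performance optimizations listed above: a query with a date filter scans only the partitions it needs. A minimal illustrative sketch of date-partitioned storage and partition pruning in plain Python (BigQuery does this natively for partitioned tables; the row shapes here are invented):

```python
from collections import defaultdict
from datetime import date

# Partition rows by event date, mimicking a date-partitioned table.
partitions: dict[date, list[dict]] = defaultdict(list)

def insert(row: dict) -> None:
    partitions[row["event_date"]].append(row)

def query(start: date, end: date) -> list[dict]:
    """Scan only partitions inside the date filter (partition pruning)."""
    return [row
            for day, rows in partitions.items()
            if start <= day <= end
            for row in rows]

insert({"event_date": date(2025, 12, 1), "clicks": 10})
insert({"event_date": date(2025, 12, 2), "clicks": 7})
insert({"event_date": date(2025, 11, 30), "clicks": 3})

# Only the two December partitions are touched; November is pruned.
december = query(date(2025, 12, 1), date(2025, 12, 31))
```

Clustering plays the complementary role within a partition, ordering rows so range filters on the cluster columns skip blocks of data.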
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
6+ years of professional experience in data engineering or similar roles, with 3+ years of hands-on work in a cloud environment, preferably on GCP.
Strong proficiency with BigQuery, Dataflow (Apache Beam), Pub/Sub, and Cloud Composer (Airflow).
Expert-level Python development skills, including object-oriented programming (OOP), testing, and code optimization.
Strong data modeling skills (dimensional modeling, star/snowflake schemas, normalized/denormalized designs).
Solid SQL expertise and experience with data warehousing concepts.
Familiarity with CI/CD, Terraform/Infrastructure as Code, and modern data observability tools.
Exposure to AI tools and methodologies (e.g., Vertex AI).
Strong problem-solving and analytical skills.
Ability to communicate complex technical concepts to non-technical stakeholders.
Experience working in agile, cross-functional teams.
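The star schema named in the requirements above joins one central fact table to small dimension tables via surrogate keys. A toy illustrative sketch with plain dicts (table and column names are invented for the example):

```python
# Dimension tables: surrogate key -> descriptive attributes.
dim_product = {1: {"name": "widget"}, 2: {"name": "gadget"}}
dim_region = {10: {"region": "EMEA"}, 20: {"region": "APAC"}}

# Fact table: one row per sale, holding foreign keys plus measures.
fact_sales = [
    {"product_id": 1, "region_id": 10, "amount": 100.0},
    {"product_id": 2, "region_id": 10, "amount": 250.0},
    {"product_id": 1, "region_id": 20, "amount": 75.0},
]

# Denormalized reporting view: resolve each foreign key against its
# dimension, exactly what a star-schema SELECT ... JOIN produces.
report = [
    {"product": dim_product[f["product_id"]]["name"],
     "region": dim_region[f["region_id"]]["region"],
     "amount": f["amount"]}
    for f in fact_sales
]
```

A snowflake schema differs only in that the dimensions themselves are further normalized into sub-dimensions.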
This position is open to all candidates.
 
Job ID: 8462182
17/12/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
Design, implement, and improve ML and AI models to drive business outcomes across multiple domains, such as recommendation systems, image recognition, and chatbots.
Own the end-to-end ML lifecycle: data preprocessing, feature engineering, model training, validation, and deployment.
Design evaluation pipelines to demonstrate performance, scalability, and consistency.
Leverage generative AI and LLMs to enhance existing workflows and explore new product opportunities.
Collaborate with Product, Engineering, and Analytics teams to align modeling efforts with business needs.
Clearly communicate complex findings and model insights to stakeholders.
Requirements:
BSc or higher in Computer Science, Mathematics, Statistics, or related fields.
4+ years of hands-on experience as a Data Scientist, ideally within mobile, gaming, or social network industries.
Proven experience with AI/ML frameworks and toolkits (Scikit-learn, TensorFlow, PyTorch, LangChain, etc.)
Familiarity with MLOps best practices, model versioning, experiment tracking, and continuous deployment.
Strong knowledge of machine learning techniques: Classification, regression, segmentation, reranking, model interpretability
Solid background in data analysis and statistics; ability to design experiments and interpret results.
Experience working in cloud environments, especially Google Cloud Platform (GCP) BigQuery, GCS, Vertex AI (a plus).
Comfortable working in fast-paced, production-critical environments with a sense of ownership and accountability.
This position is open to all candidates.
 
8462181
סגור
שירות זה פתוח ללקוחות VIP בלבד
סגור
דיווח על תוכן לא הולם או מפלה
מה השם שלך?
תיאור
שליחה
סגור
v נשלח
תודה על שיתוף הפעולה
מודים לך שלקחת חלק בשיפור התוכן שלנו :)
Location: Tel Aviv-Yafo
Job Type: Full Time
This role has been designed as Hybrid with an expectation that you will work on average 2 days per week from an HPE office.
Job Description:
We are looking for a highly skilled Senior Data Engineer with strong architectural expertise to design and evolve our next-generation data platform. You will define the technical vision, build scalable and reliable data systems, and guide the long-term architecture that powers analytics, operational decision-making, and data-driven products across the organization.
This role is both strategic and hands-on. You will evaluate modern data technologies, define engineering best practices, and lead the implementation of robust, high-performance data solutions, including the design, build, and lifecycle management of data pipelines that support batch, streaming, and near-real-time workloads.
What You'll Do
Architecture & Strategy
Own the architecture of our data platform, ensuring scalability, performance, reliability, and security.
Define standards and best practices for data modeling, transformation, orchestration, governance, and lifecycle management.
Evaluate and integrate modern data technologies and frameworks that align with our long-term platform strategy.
Collaborate with engineering and product leadership to shape the technical roadmap.
Engineering & Delivery
Design, build, and manage scalable, resilient data pipelines for batch, streaming, and event-driven workloads.
Develop clean, high-quality data models and schemas to support analytics, BI, operational systems, and ML workflows.
Implement data quality, lineage, observability, and automated testing frameworks.
Build ingestion patterns for APIs, event streams, files, and third-party data sources.
Optimize compute, storage, and transformation layers for performance and cost efficiency.
Leadership & Collaboration
Serve as a senior technical leader and mentor within the data engineering team.
Lead architecture reviews, design discussions, and cross-team engineering initiatives.
Work closely with analysts, data scientists, software engineers, and product owners to define and deliver data solutions.
Communicate architectural decisions and trade-offs to technical and non-technical stakeholders.
Requirements:
6–10+ years of experience in Data Engineering, with demonstrated architectural ownership.
Expert-level experience with Snowflake (mandatory), including performance optimization, data modeling, security, and ecosystem components.
Expert proficiency in SQL and strong Python skills for pipeline development and automation.
Experience with modern orchestration tools (Airflow, Dagster, Prefect, or equivalent).
Strong understanding of ELT/ETL patterns, distributed processing, and data lifecycle management.
Familiarity with streaming/event technologies (Kafka, Kinesis, Pub/Sub, etc.).
Experience implementing data quality, observability, and lineage solutions.
Solid understanding of cloud infrastructure (AWS, GCP, or Azure).
Strong background in DataOps practices: CI/CD, testing, version control, automation.
Proven leadership in driving architectural direction and mentoring engineering teams.
Nice to Have
Experience with data governance or metadata management tools.
Hands-on experience with DBT, including modeling, testing, documentation, and advanced features.
Exposure to machine learning pipelines, feature stores, or MLOps.
Experience with Terraform, CloudFormation, or other IaC tools.
Background designing systems for high scale, security, or regulated environments.
This position is open to all candidates.
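The ELT pattern named in the requirements can be illustrated with a minimal stdlib sketch: raw records are loaded into a staging table first, then transformed with SQL inside the database. Table and column names here are hypothetical; a production pipeline would target Snowflake and be scheduled by an orchestrator such as Airflow:

```python
import sqlite3

# Extract: raw records as they arrive from a source system.
raw_events = [
    ("2025-12-17T09:00:00", "signup", "us"),
    ("2025-12-17T09:05:00", "signup", "il"),
    ("2025-12-17T09:10:00", "purchase", "il"),
]

conn = sqlite3.connect(":memory:")
# Load: land the data untransformed in a staging table (the "EL" of ELT).
conn.execute("CREATE TABLE stg_events (ts TEXT, event TEXT, country TEXT)")
conn.executemany("INSERT INTO stg_events VALUES (?, ?, ?)", raw_events)

# Transform: build the analytics model with SQL in the warehouse (the "T").
conn.execute("""
    CREATE TABLE fct_events_by_country AS
    SELECT country, event, COUNT(*) AS n
    FROM stg_events
    GROUP BY country, event
""")
rows = conn.execute(
    "SELECT country, event, n FROM fct_events_by_country ORDER BY country, event"
).fetchall()
print(rows)
```

Doing the transformation inside the warehouse (rather than in application code before loading) is what distinguishes ELT from classic ETL, and it is the model tools like dbt build on.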
 
Job ID: 8461496
17/12/2025
Location: Merkaz
Job Type: Full Time
Senior Data Scientist
Join a team building ML-driven products that outperform the investment market
We're looking for an experienced Senior Data Scientist to join a strong Data Science team and make a real impact by designing and deploying advanced Machine Learning solutions in a production environment.
What You'll Do
Design, research, develop, and implement predictive ML models for complex business problems
Serve as a focal point for classical ML algorithms, LLM implementation, and statistical methodologies
Build and maintain scalable data pipelines and ML infrastructure
Collaborate closely with backend, Data Engineering, DevOps, frontend teams, and investment portfolio managers
Stay up to date with advancements in MLOps, time-series prediction, and Gen-AI
Requirements
B.Sc. in Machine Learning, Computer Science, Software Engineering, Statistics, or a related field
5+ years of hands-on experience as a Data Scientist in a startup environment
Proven experience with time-series prediction and tabular data, including ML model productization
Strong coding skills in Python and SQL (efficient, scalable ML code)
Experience deploying ML models to production (scalability, reliability, Real-Time processing)
Strong communication skills and ability to present actionable insights
Proactive, innovative, and independent mindset
Passion for continuous learning and adaptability
Preferred Qualifications
M.Sc. or Ph.D. in a quantitative field
Experience fine-tuning and deploying LLMs (Hugging Face, vLLM, SageMaker)
Experience with AWS, Airflow, Docker, Kubernetes, MLflow
This position is open to all candidates.
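The time-series prediction requirement above can be sketched with a naive recursive forecaster in plain Python (purely illustrative; real work would use statistical or ML libraries and proper backtesting):

```python
def moving_average_forecast(series, window=3, horizon=2):
    """Forecast the next `horizon` points as the mean of the last `window`
    observations, feeding each prediction back in (recursive forecasting)."""
    history = list(series)
    preds = []
    for _ in range(horizon):
        nxt = sum(history[-window:]) / window
        preds.append(nxt)
        history.append(nxt)
    return preds

def mae(actual, predicted):
    """Mean absolute error -- a simple backtest metric for forecasts."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

series = [10.0, 12.0, 11.0, 13.0, 12.0, 14.0]
print(moving_average_forecast(series))  # -> [13.0, 13.0]
```

Baselines like this matter in practice: a candidate ML model only earns its way into production if its backtest error (e.g. MAE) beats such a naive forecast on held-out history.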
 
Show more...
הגשת מועמדותהגש מועמדות
עדכון קורות החיים לפני שליחה
עדכון קורות החיים לפני שליחה
8461047
סגור
שירות זה פתוח ללקוחות VIP בלבד
משרות שנמחקו