Say hello to your next role

For the first time in Israel: AI-based recommendations that improve your chances of finding a job

Data Engineer

24/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Data Engineer to join our Data Labs (DL) department, which specializes in professional services for our super-premium customers. This role reports to our Datalabs Data Science Team Manager in R&D.
Why is this role so important at our company?
We are a data-focused company, and our unique AI and machine learning capabilities are at the center of our business.
As part of this role, you will create and support a complex data model pipeline that helps analyze the petabytes of data we receive from various sources, and research and develop new features and capabilities for our product solutions.
As a data engineer on the Datalabs team, you will work on the very core of the company. Part of your role will be to create processes that turn raw data into usable metrics, and to leverage AI models and statistical algorithms to support out-of-the-box requests from customers who want custom data labs. The Datalabs department's business-oriented nature also means you will support a team of analysts and data scientists who interact directly with customers. Together with them, you will translate the voice of these customers into best-in-class data labs.
So, what will you be doing all day?
Build and maintain our big-data pipelines
Take a major part in designing and implementing complex high-scale systems using a large variety of technologies
Be part of a team of smart and motivated engineers and data scientists, collaborating on the planning, development, and maintenance of our products
Implement solutions in the AWS cloud environment, and work in Databricks with PySpark.
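The core of the role described above is turning raw data into usable metrics. As a rough illustration only (plain Python standing in for the PySpark/Databricks pipelines the posting mentions, with invented field names), that kind of aggregation step might look like:

```python
from collections import defaultdict

def events_to_metrics(events):
    """Aggregate raw event records into a per-(source, day) metric.

    `events` is an iterable of dicts with 'source', 'day', and 'value'
    keys -- an invented stand-in for rows arriving from ingestion.
    """
    totals = defaultdict(lambda: {"count": 0, "sum": 0.0})
    for e in events:
        key = (e["source"], e["day"])
        totals[key]["count"] += 1
        totals[key]["sum"] += e["value"]
    # Derive the usable metric: average value per source per day.
    return {k: v["sum"] / v["count"] for k, v in totals.items()}

raw = [
    {"source": "api", "day": "2026-03-24", "value": 2.0},
    {"source": "api", "day": "2026-03-24", "value": 4.0},
    {"source": "batch", "day": "2026-03-24", "value": 10.0},
]
print(events_to_metrics(raw))
```

In a real Databricks pipeline the same group-and-aggregate shape would be expressed as a PySpark `groupBy(...).agg(...)` over petabyte-scale tables; the logic is the same, only the engine changes.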
Requirements:
This is the perfect job for someone who:

Holds a BSc degree in Computer Science or has equivalent practical experience.
Loves building robust, fault-tolerant, and scalable systems and products.
Is a go-getter and a team player with a sense of ownership.
Has 3+ years of server-side software development experience in one or more general-purpose programming languages (C#, Go, Python, etc.).
Has experience building large-scale web APIs; experience with microservices architecture, AWS, and databases (Redis, PostgreSQL, Firebolt) is an advantage.
Is familiar with Big Data technologies; familiarity with Spark, Databricks, and Airflow is a big advantage.
Has worked in a cloud environment such as AWS or GCP and is familiar with its different services.
Is familiar with ML pipelines and applications.
Is familiar with LLM tools and frameworks.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
Our Data & Analytics team believes there's a better way to make data useful than just creating endless dashboards. We focus on in-depth analysis and building scalable, trustworthy data solutions that help every team make faster, smarter decisions. From analytics and business intelligence to data pipelines and predictive models, we turn raw information into real impact. If you're passionate about finding radical new ways to leverage data, you'll fit right in.
On your day to day:
On a day-to-day basis, we transform raw data into clean, structured models using tools like SQL, Python, and dbt. We build and maintain modern BI platforms, develop reporting systems, and design AI-driven analytics that surface valuable insights quickly and reliably. Our team is hands-on with building reusable metrics, defining source-of-truth data models, and ensuring consistency through a strong semantic layer. Whether we're shipping a new dashboard, debugging a dbt model, or refining how a metric is defined across the business, our focus is always on clarity, scalability, and enabling smarter, faster decisions.
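The "source-of-truth" idea described here, defining each metric once and computing it everywhere through that single definition, can be sketched in plain Python (the metric names and row shape are invented for illustration; in practice this layer would live in dbt/SQL as a semantic layer):

```python
# A toy "semantic layer": each metric is defined exactly once, and every
# report computes it through the registry rather than re-implementing
# the formula. Metric names and row fields are illustrative.
METRICS = {
    "conversion_rate": lambda rows: (
        sum(r["converted"] for r in rows) / len(rows) if rows else 0.0
    ),
    "revenue": lambda rows: sum(r["amount"] for r in rows),
}

def compute(metric_name, rows):
    # Every consumer goes through this one definition -- the
    # "source of truth" -- so dashboards can't drift apart.
    return METRICS[metric_name](rows)

rows = [
    {"converted": 1, "amount": 30.0},
    {"converted": 0, "amount": 0.0},
    {"converted": 1, "amount": 15.0},
]
print(compute("conversion_rate", rows))  # 2 of 3 rows converted
print(compute("revenue", rows))
```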
Requirements:
Strong proficiency in SQL with hands-on experience building data pipelines (4+ years)
Experience with modeling data using dbt or similar tools.
Proficiency in Python
Solid grasp of software engineering best practices, including query optimization, version control (e.g. Git), code reviews, and documentation.
Analytical mindset with strong problem-solving skills, the ability to manage multiple priorities independently and a proactive approach to improving data processes and tools.
Comfortable in a fast-paced, cross-functional environment; able to collaborate with teams across Product, BizOps, and Marketing.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Backend Engineer to join our MLOps team and help build the infrastructure that powers cutting-edge AI models.
In this role, you'll manage the end-to-end MLOps lifecycle, designing event-driven systems that handle massive video data and moving compute-intensive generative models from research to production.
You'll collaborate closely with AI researchers and video-processing teams to ensure our AI services are scalable, reliable, and performant.
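A minimal sketch of the event-driven pattern this posting describes, with an `asyncio.Queue` standing in for a Kafka topic and a trivial handler standing in for the compute-intensive model step (all names are invented for illustration):

```python
import asyncio

async def producer(queue, jobs):
    # Publish "video job" events onto the queue (a stand-in for a topic).
    for job in jobs:
        await queue.put(job)
    await queue.put(None)  # sentinel: no more events

async def consumer(queue, results):
    # Consume events one by one; in a real system this step would invoke
    # the generative model and publish a result event downstream.
    while True:
        job = await queue.get()
        if job is None:
            break
        results.append({"job_id": job["job_id"], "status": "done"})

async def main():
    queue = asyncio.Queue()
    results = []
    await asyncio.gather(
        producer(queue, [{"job_id": 1}, {"job_id": 2}]),
        consumer(queue, results),
    )
    return results

print(asyncio.run(main()))
```

The real architecture would add durability, retries, and consumer groups (Kafka's job), but the decoupling of producers from consumers is the same idea.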
Requirements:
6+ years of production-grade Python development experience.
Strong background in distributed systems: you've built and debugged complex, event-driven architectures (e.g., Kafka, microservices).
Expertise in Data Engineering at scale: Experience building massive data pipelines and architecting Data Lakes (S3) with compute layers like Athena for large-scale analysis.
Deep understanding of the MLOps lifecycle: Experience taking models from training to deployment, including versioning and performance monitoring.
Experience with containerized environments, microservices, and Kubernetes.
Experience with workflow management frameworks (Temporal, Airflow) and asynchronous programming.
Experience with cloud platforms (AWS preferred) and model-serving frameworks (Triton, VLLM/SGLang, Ray Serve).
A love for exploring new tech and the drive to implement modern frameworks that move the needle.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a GTM Engineer.
On your day to day:
You'll be our first GTM Engineer, building the bridge between data, automation, and GTM excellence. Your day will include designing and deploying AI-powered agents, connecting tools like HubSpot, Snowflake, and Gong, and turning fragmented data into automated insights, workflows, and sales intelligence.
You'll create real impact by enabling our reps to move faster, engage smarter, and retain more customers, with the help of scalable tech and smart automation.
Requirements:
4+ years in Business Operations
Hands-on experience integrating tools like HubSpot, Snowflake, Gong, and GenAI platforms (Cursor, Claude, OpenAI, Gemini, n8n).
Ability to scope and deploy AI agents (e.g., research bots, churn predictors, onboarding copilots).
Strong technical proficiency.
Strategic mindset with ability to translate GTM processes into clear, automated systems.
Ability to lead AI initiatives and projects within GTM operations
This position is open to all candidates.
 
Location: Herzliya
Job Type: Full Time
We're looking for a senior-level Data Platform Solutions Engineer who is both data-savvy and customer-focused. In this role you'll help our customers run large-scale data and AI workloads, tackling technical challenges across modern data environments. You'll blend deep troubleshooting expertise with tool-building and solution design, while partnering closely with our teams and customers to ensure successful deployment and operation of our data-platform offerings, both on-premises and SaaS.
You will:
Assist customers with complex data‑lake and AI workloads on our data platform.
Diagnose and remediate high‑severity, multi‑layer issues spanning infrastructure, Kubernetes, data engines, storage, and networking.
Create and refine tools, automation, and solutions that boost environment stability and operational efficiency.
Deliver architectural guidance and configuration best practices to enhance performance, scalability, and resiliency.
Serve as the technical liaison among Customer Support, Product Management, and R&D, turning customer insights into product enhancements.
Take the first step towards your dream career
Job ID: R285903.
Requirements:
Essential Requirements:
Data & Platform Engineering: 4+ years of experience working with data platforms, developing Python-based tools, automation, or data pipelines for managing complex datasets and workflows.
Databases: Strong hands-on experience with relational databases (SQL) and NoSQL technologies such as MongoDB, including performance tuning.
Cloud-Native & Kubernetes: Experience deploying and operating containerized workloads on Kubernetes in hybrid or multi-cloud environments.
Data Pipelines: Experience building, maintaining, or supporting scalable ETL/ELT pipelines and data processing solutions.
Observability & Troubleshooting: Experience with monitoring and log platforms (ELK, Splunk) and troubleshooting complex production environments, preferably in customer-facing contexts.
Desirable Requirements:
Experience with Starburst or Trino (highly preferred).
Strong verbal and written English communication skills for global collaboration.
This position is open to all candidates.
 
23/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a Senior Backend Engineer - Data Platform to join our expanding team and play a crucial role in designing, building, and maintaining robust and scalable data pipelines and infrastructure. In this role, you will directly enable data-driven decision-making and support the development and deployment of AI/ML products that power Health.

You'll collaborate closely with engineering, product, and data science teams to ensure our data systems are high-quality, resilient, and scalable as we grow. As a Senior Backend Engineer on our Data Platform team, you will drive efforts to deliver reliable, efficient, and consistent data services across the organization. You will also help enable the rapid development and deployment of advanced features, insights, and AI-driven capabilities that improve outcomes for clinicians and clients.

Who are you?
You are a seasoned backend or data engineer with experience working on production-grade ML/AI-powered products. You thrive in fast-paced, high-ownership environments and are passionate about building scalable and reliable systems. You understand the unique requirements of delivering AI/ML features in production, and you are comfortable working with modern technologies in the LLM/RAG ecosystem.
You pride yourself on delivering high-quality solutions quickly, without sacrificing design or reliability. You're known for your responsiveness, collaborative spirit, and service-oriented mindset, especially when you're on-call and the stakes are high.
How will you contribute?
Design, implement, and maintain scalable and reliable data pipelines and backend systems supporting both operational and analytical needs, with a focus on ML/AI product enablement.
Ensure data processing is optimized for speed, efficiency, and fault tolerance, enabling seamless integration with AI/ML workflows and reliable performance across all our Health products.
Monitor and improve uptime, reliability, and observability of our data infrastructure and pipelines.
Build and maintain systems to ensure data quality, consistency, and usability across the organization, enabling advanced analytics and AI solutions.
Work closely with product and engineering teams to deliver new features rapidly and with a high standard of technical excellence.
Drive innovation in how we build, measure, and optimize data features, backend services, and AI product integrations.
Participate in on-call rotations with a service-oriented approach and fast responsiveness.
Lead scalability efforts to support increasing data volumes, expanding AI/ML initiatives, and new product launches.
Requirements:
What qualifications and skills will help you to be successful?
At least 5 years of experience with Python in backend or data engineering roles, designing and operating large-scale data pipelines, backend services, and data infrastructure in production environments.
Hands-on experience working on ML/AI-powered products in production, with strong understanding of requirements for integrating data platforms with AI features.
Familiarity with modern LLM (Large Language Model) and RAG (Retrieval-Augmented Generation) technologies, and experience supporting their deployment or integration.
Familiar with or have worked with these technologies (or alternatives):
Data Processing & Streaming: Apache Spark, DBT, Airflow, Airbyte, Kafka
API Development: FastAPI, micro-service architecture, SFTP
Data Storage: Data Lakehouse architectures, Apache Iceberg, Vector Databases, RDS
ML/AI: ML/LLM libraries and frameworks (such as Gemini, Hugging Face, etc.)
Cloud Infrastructure: AWS stack (S3, Firehose, Lambda, Athena, etc.), Kubernetes (K8s)
Demonstrated ability to optimize performance and ensure high availability, scalability, and reliability of backend/data systems.
Strong foundation in best practices for data quality, governance, security, and observability.
This position is open to all candidates.
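Since this posting emphasizes LLM and RAG (Retrieval-Augmented Generation) technologies, here is a toy sketch of the retrieval step in a RAG pipeline: hand-written vectors stand in for an embedding model, and a plain dict stands in for a vector database (document names and vectors are invented for illustration):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Pre-"embedded" documents; a vector database would hold these at scale.
DOCS = {
    "billing_faq": [0.9, 0.1, 0.0],
    "clinical_notes_guide": [0.1, 0.9, 0.2],
}

def retrieve(query_vec, docs=DOCS):
    # Return the name of the document closest to the query embedding;
    # the retrieved text would then be fed to the LLM as context.
    return max(docs, key=lambda name: cosine(query_vec, docs[name]))

print(retrieve([0.2, 0.8, 0.1]))  # closest to clinical_notes_guide
```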
 
20/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Staff Software Data Engineer to join our Engineering team and lead the evolution of our next-generation data platform. In this high-impact role, you will operate as a player-coach: you will be the technical visionary responsible for designing the ecosystem, while remaining deeply hands-on to implement scalable, secure, and intelligent solutions that power everything from operational reporting to advanced GenAI applications.

You will bridge the gap between complex business requirements and technical execution, advocating for a data-first culture. This role offers a clear growth path: while it currently starts as an individual contributor position, it has the potential to evolve into a leadership role.

Why join us?

We are the AI-powered platform for finance automation, elevating how finance teams operate in the global economy. We empower our customers to scale faster and smarter by removing the complexities of doing global business and accelerating their finance operations efficiency. Our platform provides a comprehensive suite of finance automation solutions designed for mid-market businesses across accounts payable, global payouts, procurement, employee expenses, corporate cards, supplier management, tax compliance, and treasury. We partner with leading financial institutions such as Citi, Wells Fargo, J.P. Morgan, and Visa, enabling over 5,000 global companies to efficiently and securely pay millions of suppliers and payees across 200+ countries and territories, in 120 currencies.

At our company, we pride ourselves on our collaborative culture, the quality of our product, and the capabilities of our people. We are passionate about the work we do and keen to get the job done. We offer competitive benefits, a flexible workplace, career coaching, and an environment where diverse individuals can thrive and make an impact. Our culture ensures everyone checks their egos at the door and stands ready to reach for success together.

Founded in Israel in 2010, our company is a global business headquartered in the San Francisco Bay Area (Foster City) with offices in Tel Aviv, Plano, Toronto, Vancouver, London, Amsterdam, Tbilisi and Medellin.
About the Role

Architecture & Hands-on Execution: Design and actively build a comprehensive data platform. You will not just oversee infrastructure; you will write the core code and build tools that support diverse workloads-from operational reporting to complex analytical queries.
Strategic & Technical Delivery: Partner with product managers to translate business objectives into technical strategies, then lead the engineering effort to deliver them.
Technology Evaluation: Continuously evaluate, prototype, and select best-in-class technologies to future-proof our data stack.
Technical Leadership & Mentorship: Act as a primary advocate for platform adoption. You will foster a community of practice around data engineering, mentoring senior and mid-level engineers to elevate the team's technical bar.
Governance & Quality: Implement and automate robust frameworks for Data Discovery, Quality, and Governance, ensuring solutions are trustworthy and compliant with financial regulations.
Requirements:
We are looking for a highly motivated Staff Engineer with a strong sense of ownership, eager to tackle technical challenges in a high-throughput data processing environment.

Experience: 8+ years of hands-on experience in Data Engineering and Architecture, with a track record of building and shipping platforms at scale.
Experience with modern big data platforms such as Snowflake, Databricks, or similar technologies.
Hands-on experience with data infrastructure (orchestration, scalability, reliability, and cloud architecture).
Data Movement & Integration: Deep understanding of data movement strategies, including high-frequency batching, CDC, and real-time event streaming.
This position is open to all candidates.
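The CDC (change data capture) movement strategy named in the requirements can be sketched minimally: a stream of insert/update/delete change events is folded into a target table kept as a dict keyed by primary key (the event shape is invented for illustration; real CDC tooling like Debezium emits richer envelopes):

```python
def apply_cdc(table, events):
    """Fold a CDC event stream into a keyed target table.

    Each event carries an op ('insert', 'update', 'delete'), a primary
    key, and the new row (None on delete). Shape is illustrative only.
    """
    for ev in events:
        if ev["op"] in ("insert", "update"):
            table[ev["key"]] = ev["row"]   # upsert the latest row image
        elif ev["op"] == "delete":
            table.pop(ev["key"], None)     # tombstone: drop the key
    return table

state = {}
apply_cdc(state, [
    {"op": "insert", "key": 1, "row": {"status": "new"}},
    {"op": "update", "key": 1, "row": {"status": "paid"}},
    {"op": "insert", "key": 2, "row": {"status": "new"}},
    {"op": "delete", "key": 2, "row": None},
])
print(state)  # {1: {'status': 'paid'}}
```

Because apply order matters, production CDC pipelines also track offsets/log positions so replays converge to the same final state.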
 
19/03/2026
Location: More than one
Job Type: Full Time
We are looking for an expert Data Engineer to build and evolve the data backbone for our R&D telemetry and performance analytics ecosystem. Responsibilities include processing large quantities of raw data from live systems at the cluster level: hardware, communication units, software, and efficiency indicators. You'll be part of a fast-paced R&D organization, where system behavior, schemas, and requirements evolve constantly. Your mission is to develop flexible, reliable, and scalable data handling pipelines that can adapt to rapid change and deliver clean, trusted data for engineers and researchers.

What you'll be doing:

Build flexible data ingestion and transformation frameworks that can easily handle evolving schemas and changing data contracts.

Develop and maintain ETL/ELT workflows for refining, enriching, and classifying raw data into analytics-ready form.

Collaborate with R&D, hardware, DevOps, ML engineers, data scientists and performance analysts to ensure accurate data collection from embedded systems, firmware, and performance tools.

Automate schema detection, versioning, and validation to ensure smooth evolution of data structures over time.

Maintain data quality and reliability standards, including tagging, metadata management, and lineage tracking.

Enable self-service analytics by providing curated datasets, APIs, and Databricks notebooks.
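The schema detection and validation responsibility above can be sketched as a minimal versioned-schema check in Python (field names and schema versions are invented; real pipelines would use a schema registry or a framework like Great Expectations or Delta Lake's schema enforcement):

```python
# Versioned schemas for evolving telemetry records: each record is
# checked against its declared version, and unknown fields are flagged
# rather than silently dropped. All names here are illustrative.
SCHEMAS = {
    1: {"device_id": str, "latency_ms": float},
    2: {"device_id": str, "latency_ms": float, "fw_version": str},
}

def validate(record, version):
    """Return a list of validation errors (empty means the record passes)."""
    schema = SCHEMAS[version]
    errors = []
    for field, ftype in schema.items():
        if field not in record:
            errors.append(f"missing: {field}")
        elif not isinstance(record[field], ftype):
            errors.append(f"bad type: {field}")
    # Surface fields the schema doesn't know about -- a signal that
    # the producing system evolved ahead of the declared version.
    errors += [f"unknown: {f}" for f in record if f not in schema]
    return errors

print(validate({"device_id": "d1", "latency_ms": 3.2}, 1))  # passes
print(validate({"device_id": "d1", "latency_ms": 3.2}, 2))  # missing fw_version
```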
Requirements:
What we need to see:

B.Sc. or M.Sc. in Computer Science, Computer Engineering, or a related field.

5+ years of experience in data engineering, ideally in telemetry, streaming, or performance analytics domains.

Confirmed experience with Databricks and Apache Spark (PySpark or Scala).

Understanding of streaming processes and their applications (e.g., Apache Kafka for ingestion, schema registry, event processing).

Proficiency in Python and SQL for data transformation and automation.

Shown knowledge in schema evolution, data versioning, and data validation frameworks (e.g., Delta Lake, Great Expectations, Iceberg, or similar).

Experience working with cloud platforms (AWS, GCP, or Azure) - AWS preferred.

Familiarity with data orchestration tools (Airflow, Prefect, or Dagster).

Experience handling time-series, telemetry, or real-time data from distributed systems.

Ways to stand out from the crowd:

Exposure to hardware, firmware, or embedded telemetry environments.

Knowledge of real-time analytics frameworks (Spark Structured Streaming, Flink, Kafka Streams).

Understanding of system performance metrics (latency, throughput, resource utilization).

Experience with data cataloging or governance tools (DataHub, Collibra, Alation).

Familiarity with CI/CD for data pipelines and infrastructure-as-code practices.
This position is open to all candidates.
 
17/03/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are a data-driven company, and the Data Engineering team is the bridge between raw data and impactful business insights. You will lead a team of talented engineers who architect, build, and maintain the data infrastructure of our company.
Your mission is to ensure our data is reliable, secure, and ready for everything from standard BI reporting to cutting-edge Generative AI applications. You'll act as a key partner to our business teams, translating high-level needs into technical reality and driving a massive impact on our bottom line. In addition, you'll spend time mentoring your team as well as tackling high-level architecture decisions, pushing technical strategy, and contributing to the codebase.
What am I going to do?
Lead a team that designs and scales real-time data pipelines processing millions of events hourly.
Utilize cutting-edge technologies to build scalable and high-performance data infrastructure.
Influence and drive architecture and data strategy decisions.
Work closely with other stakeholders such as Data Developers, Analysts, Data Science and R&D.
Hands-on development of data infrastructure tools (~50% of the time).
Lead and mentor the team talent.
Finops and governance of data infrastructure domains.
Requirements:
5+ years of hands-on experience in data engineering, with at least 2 years in a team leadership role.
Strong programming skills in Python.
Experience with relevant technologies such as BigQuery, Airflow/Prefect, DBT, Kafka, Athena, BI Tools.
Proven track record of design and implementation of highly scalable distributed data pipelines.
Deep understanding of data modeling, ETL and real-time analytics.
Strong all-around (360°) collaboration and excellent communication skills.
MSc./BSc. in Computer Sciences from a top university.
This position is open to all candidates.
 
17/03/2026
Location: Central Israel
Job Type: Full Time
We are looking for an experienced Data Engineer with strong analytical skills to join us and lead the construction of the data infrastructure that enables large-scale development, training, and inference of intelligent models.
Requirements:
The role requires a deep understanding of the business and system-level picture, and the ability to lead end-to-end data processes in an advanced technology environment.
As part of the role, you will take full responsibility for building data layers, creating panels for training and inference, developing ETL/ELT processes, and connecting to various organizational data sources, with a deep understanding of model and business needs.
What the role includes:
Building data infrastructure and panels for ML models, including data layers for training and inference.
Creating dynamic real-time panels, enriching features, and creating features tailored to the models.
Working with organizational data sources and collecting data efficiently from various systems.
Developing and improving ETL/ELT processes (in Snowflake and DBT) while monitoring and ensuring data quality.
Collaborating with Data Scientists, analysts, and MLOps teams on organization-wide data solutions.
This position is open to all candidates.
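The "panels for training and inference" responsibility, one feature-building path shared by both, can be sketched in Python (feature names and the record shape are invented; the point illustrated is that training and serving call the same function, avoiding training/serving skew):

```python
def build_features(record):
    """Turn a raw entity record into a model-ready feature row.

    Both the training pipeline and the online inference path call this
    one function, so the two can never drift apart. Features invented.
    """
    return {
        "amount": record["amount"],
        "is_weekend": record["day_of_week"] in (5, 6),
        "amount_log_bucket": 0 if record["amount"] < 100 else 1,
    }

train_row = build_features({"amount": 250.0, "day_of_week": 6})
serve_row = build_features({"amount": 250.0, "day_of_week": 6})
assert train_row == serve_row  # identical rows: no training/serving skew
print(train_row)
```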
 