Data Engineer

Location: Tel Aviv-Yafo
Job Type: Full Time
As a Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects: ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.
You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.
Key Job Responsibilities and Duties:
Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
Dealing with massive textual sources to train GenAI foundation models (see the sketch after this list).
Solving issues with data and data pipelines, prioritizing based on customer impact.
End-to-end ownership of data quality in our core datasets and data pipelines.
Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
Providing tools that improve Data Quality company-wide, specifically for ML scientists.
Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
Acting as an intermediary for problems, communicating with both technical and non-technical audiences.
Promoting and driving impactful and innovative engineering solutions.
Advancing technical, behavioral, and interpersonal competence via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation.
Collaborating with multidisciplinary teams: working with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions, and providing technical guidance and mentorship to junior team members.
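To make the pipeline bullets above more concrete, here is a minimal, purely illustrative PySpark sketch of a batch job that cleans and deduplicates a large text corpus before model training; the paths, column names, and thresholds are hypothetical and not taken from the posting.

```python
from pyspark.sql import SparkSession, functions as F

# Minimal sketch of a batch pipeline over a large text corpus.
# Paths and thresholds are hypothetical placeholders.
spark = SparkSession.builder.appName("text-corpus-prep").getOrCreate()

raw = spark.read.text("s3://example-bucket/raw-corpus/")  # one document per line (assumption)

cleaned = (
    raw.withColumnRenamed("value", "doc")
       .withColumn("doc", F.trim(F.col("doc")))
       .filter(F.length("doc") > 50)          # drop trivially short documents
       .dropDuplicates(["doc"])               # naive exact-match deduplication
)

# Write partitioned Parquet for downstream training jobs.
cleaned.write.mode("overwrite").parquet("s3://example-bucket/clean-corpus/")
```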
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 3 years of experience as a Data Engineer or in a similar role, with a consistent record of successfully delivering ML/Data solutions.
You have built production data pipelines in the cloud, setting up data lake and serverless solutions; you have hands-on experience with schema design and data modeling, and with working alongside ML scientists and ML engineers to provide production-level ML solutions.
You have experience designing systems end to end (E2E) and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.)
Strong programming skills in languages such as Python and Java.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.
Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems.
Experience with Data Warehousing and ETL/ELT pipelines
Experience in data processing for large-scale language models like GPT, BERT, or similar architectures - an advantage.
Proficiency in data manipulation, analysis, and visualization using tools like NumPy, pandas, and matplotlib - an advantage.
Experience with experimental design, A/B testing, and evaluation metrics for ML models - an advantage.
Experience of working on products that impact a large customer base - an advantage.
Excellent communication in English, written and spoken.
This position is open to all candidates.
 
Job ID: 8430193

Location: Tel Aviv-Yafo
Job Type: Full Time
Required Backend Data Engineer
As part of the role you will have the opportunity to:
Own end-to-end development of data infrastructure features, including scalable data processing, database interaction, and integration with CI/CD.
Take part in expanding our core data platform solutions, building new pipelines from scratch that ingest and process data at scale.
Work across a rich stack of technologies, from Apache Kafka and Apache Storm to NoSQL and relational databases (see the sketch after this list).
Analyze and optimize performance, scalability, and stability of our product environments.
Work closely with the data-science team to implement production-grade pipelines based on AI research.
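Since the stack above names Apache Kafka, the following is a minimal, illustrative consumer sketch in Python (the topic name, broker address, and JSON payload shape are assumptions; a production pipeline would more likely run inside a stream-processing framework such as Storm or Spark).

```python
import json
from kafka import KafkaConsumer  # kafka-python

# Hypothetical topic and broker; real pipelines would add error handling,
# batching, and writes to a downstream store instead of printing.
consumer = KafkaConsumer(
    "events",                                  # assumed topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="example-data-pipeline",
)

for message in consumer:
    event = message.value
    # Placeholder transform: enrichment/validation would happen here.
    print(event.get("type"), message.offset)
```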
Requirements:
4+ years of experience as a Data Engineer with backend development
Proficiency in Java and Spring - Must
Hands-on experience developing and maintaining distributed data processing pipelines such as Apache Storm, Kafka, Spark, or Airflow
Familiarity with design principles such as Data Modelling, Distributed Processing, Streaming vs. Batch processing
Proven experience in leading design and system architecture of complex features
Experience with database optimization tasks such as sharding, rollups, and optimal indexing
Familiarity with cloud platforms
Willingness to work in a fast-paced, high-growth start-up environment and the ability to switch between DevOps, programming, and debugging tasks
Self-management skills, the ability to work well both independently and as part of a team, and a sense of ownership and urgency
Good communication skills in English.
This position is open to all candidates.
 
Job ID: 8427431

23/11/2025
Location: Ramat Hasharon
Job Type: Full Time
A defense organization is looking for an experienced Data Engineer for a technology role, joining one of our teams as a consultant in a senior developer capacity.
The role includes developing data pipelines and APIs, as well as professional guidance and leadership of development processes.
Full-time position, located in Glilot.
Please send your CV by email.
Requirements:
3 years of development experience, with a focus on data, pipelines, and building scalable architecture - required
Experience with C# / Python - required
Good familiarity with SQL (preference for Oracle SQL) - required
Experience with Kafka or a comparable message queue - required
Knowledge of Kubernetes - required. This position is open to all candidates.
 
Job ID: 8425332

Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an experienced and visionary Data Platform Engineer to help design, build and scale our BI platform from the ground up.

In this role, you will be responsible for building the foundations of our data analytics platform enabling scalable data pipelines and robust data modeling to support real-time and batch analytics, ML models and business insights that serve both business intelligence and product needs.

You will be part of the R&D team, collaborating closely with engineers, analysts, and product managers to deliver a modern data architecture that supports internal dashboards and future-facing operational analytics.

If you enjoy architecting from scratch, turning raw data into powerful insights, and owning the full data lifecycle, this role is for you!

Responsibilities
Take full ownership of the design and implementation of a scalable and efficient BI data infrastructure, ensuring high performance, reliability and security.

Lead the design and architecture of the data platform from integration to transformation, modeling, storage, and access.

Build and maintain ETL/ELT pipelines, batch and real-time, to support analytics, reporting, and product integrations.

Establish and enforce best practices for data quality, lineage, observability, and governance to ensure accuracy and consistency.

Integrate modern tools and frameworks such as Airflow, dbt, Databricks, Power BI, and streaming platforms (see the sketch after this list).

Collaborate cross-functionally with product, engineering, and analytics teams to translate business needs into data infrastructure.

Promote a data-driven culture: be an advocate for data-driven decision-making across the company by empowering stakeholders with reliable, self-service data access.
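
As a rough illustration of the Airflow-plus-dbt orchestration mentioned above, a daily batch DAG might look like the sketch below; the DAG id, schedule, scripts, and dbt project path are placeholders, not details from the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical daily batch: load raw data, then run dbt transformations.
with DAG(
    dag_id="daily_bi_pipeline",            # assumed name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract_raw_data",
        bash_command="python /opt/pipelines/extract.py",  # placeholder script
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/bi_project",  # placeholder path
    )
    extract >> transform
```
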
Requirements:
5+ years of hands-on experience in data engineering and in building data products for analytics and business intelligence.

Proven track record of designing and implementing large-scale data platforms or ETL architectures from the ground up.

Strong hands-on experience with ETL tools and data warehouse/lakehouse products (Airflow, Airbyte, dbt, Databricks).

Experience supporting both batch pipelines and real-time streaming architectures (e.g., Kafka, Spark Streaming).

Proficiency in Python, SQL, and cloud data engineering environments (AWS, Azure, or GCP).

Familiarity with data visualization tools like Power BI, Looker, or similar.

BSc in Computer Science or a related field from a leading university
This position is open to all candidates.
 
Job ID: 8423261

Location: Herzliya
Job Type: Full Time
We are seeking a seasoned and strategic Head of BI to lead our Business Intelligence group.
In this pivotal role, you will be responsible for overseeing the full BI stack, from data infrastructure and pipelines to analytics and reporting, while managing a team of data engineers and analysts.
You will drive the design and execution of our data strategy, ensure delivery of trusted insights across the organization, and lead the team responsible for building and maintaining our data platform and analytics capabilities. This is a hands-on leadership role requiring a balance of technical depth, managerial experience, and strategic thinking.
Responsibilities
What You'll Be Doing:
Lead, mentor, and scale a cross-functional team of BI Analysts and Data Engineers.
Own and evolve our data warehouse architecture, ensuring integrity, scalability, and performance (Snowflake preferred).
Oversee the design and development of ETL/ELT pipelines and semantic data models using tools such as dbt, Python, and Airflow.
Collaborate with senior stakeholders to align business goals with data priorities, translating requirements into scalable solutions.
Define and implement best practices across the entire BI lifecycle: data ingestion, transformation, modeling, visualization, and governance.
Deliver robust, actionable dashboards and insights that drive key business decisions and KPIs.
Requirements:
8+ years of experience in Business Intelligence, Data Engineering, or Analytics roles, including 4+ years in a leadership capacity.
Proven ability to manage hybrid teams of engineers and analysts with a track record of high-impact project delivery.
Deep technical expertise in SQL, dbt, Python, and modern data platforms (Snowflake, BigQuery, Redshift).
Hands-on experience building data pipelines, warehouse architecture, and analytical models at scale.
Strong stakeholder engagement skills with the ability to translate business needs into data products.
Effective communicator with a strong sense of ownership, clarity, and strategic vision.
BA/BSc in Computer Science, Engineering, Industrial Engineering, or a related field
Nice to have:
Experience with BI tools like Looker, Power BI, Tableau, or Metabase.
Exposure to AI/ML-driven analytics or predictive modeling workflows.
This position is open to all candidates.
 
Job ID: 8423240

Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an experienced Data Engineer to join our Data Warehouse team in TLV.

You will play a pivotal role in the Data Platform organization, leading the design, development, and maintenance of our data warehouse. In your day-to-day, you'll work on data models and backend BI solutions that empower stakeholders across the company and contribute to informed decision-making processes, all while leveraging your extensive experience in business intelligence.

This is an excellent opportunity to be part of establishing a state-of-the-art data stack, implementing cutting-edge technologies in a cloud environment.

In this role you'll:
Lead the design and development of scalable and efficient data warehouse and BI solutions that align with organizational goals and requirements

Utilize advanced data modeling techniques to create robust data structures supporting reporting and analytics needs

Implement ETL/ELT processes to assist in the extraction, transformation, and loading of data from various sources into the semantic layer

Develop processes to enforce schema evaluation, cover anomaly detection, and monitor data completeness and freshness (see the sketch after this list)

Identify and address performance bottlenecks within our data warehouse, optimize queries and processes, and enhance data retrieval efficiency

Implement best practices for data warehouse and database performance tuning

Conduct thorough testing of data applications and implement robust validation processes

Collaborate with Data Infra Engineers, Developers, ML Platform Engineers, Data Scientists, Analysts, and Product Managers
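
To illustrate the completeness-and-freshness bullet above, here is a small sketch of a freshness check written against a generic DB-API connection; the table, column, and threshold are hypothetical, and a real implementation would run against the warehouse (e.g., Snowflake) rather than the in-memory SQLite used here for the demo.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

def check_freshness(conn, table: str, loaded_at_col: str, max_lag: timedelta) -> bool:
    """Return True if the newest row in `table` is recent enough."""
    cur = conn.execute(f"SELECT MAX({loaded_at_col}) FROM {table}")
    latest = cur.fetchone()[0]
    if latest is None:
        return False  # an empty table counts as stale
    latest_ts = datetime.fromisoformat(latest).replace(tzinfo=timezone.utc)
    return datetime.now(timezone.utc) - latest_ts <= max_lag

# Tiny demo: an in-memory SQLite table stands in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, loaded_at TEXT)")
conn.execute("INSERT INTO orders VALUES (1, ?)", (datetime.now(timezone.utc).isoformat(),))
print(check_freshness(conn, "orders", "loaded_at", max_lag=timedelta(hours=6)))  # True
```
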
Requirements:
3+ years of experience as a BI Engineer or Data Engineer

Proficiency in data modeling, ELT development, and DWH methodologies

SQL expertise and experience working with Snowflake or similar technologies

Prior experience working with DBT

Experience with Python and software development, an advantage

Excellent communication and collaboration skills

Ability to work in an office environment a minimum of 3 days a week
This position is open to all candidates.
 
Job ID: 8421158

Location: Tel Aviv-Yafo
Job Type: Full Time
As part of the Data Infrastructure group, you'll help build Lemonade's data platform for our growing stack of products, customers, and microservices.

We ingest our data from our operational DBs, telematics devices, and more, working with several data types (both structured and unstructured). Our challenge is to provide tools and infrastructure that empower other teams, leveraging data-mesh concepts.

In this role you'll:
Help build Lemonade's data platform, designing and implementing data solutions for all application requirements in a distributed microservices environment

Build data-platform ingestion layers using streaming ETLs and Change Data Capture (see the sketch after this list)

Implement pipelines and scheduling infrastructures

Ensure compliance, data-quality monitoring, and data governance on the data platform

Implement large-scale batch and streaming pipelines with data processing frameworks

Collaborate with other Data Engineers, Developers, BI Engineers, ML Engineers, Data Scientists, Analysts and Product managers

Share knowledge with other team members and promote engineering standards
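
For the streaming-ingestion bullet above, the sketch below shows the general shape of a Spark Structured Streaming job that reads change events from Kafka and lands them as Parquet. The broker, topic, paths, and the assumption that CDC events arrive as JSON strings are illustrative only (running it would also require Spark's Kafka connector package).

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cdc-ingestion-sketch").getOrCreate()

# Read raw change-data-capture events from a hypothetical Kafka topic.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "cdc.public.users")            # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; keep the JSON payload as a string here.
payloads = events.select(
    F.col("key").cast("string").alias("key"),
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"),
)

# Land micro-batches in the data lake; the checkpoint makes the stream restartable.
query = (
    payloads.writeStream.format("parquet")
    .option("path", "s3://example-lake/raw/users/")               # placeholder path
    .option("checkpointLocation", "s3://example-lake/checkpoints/users/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```
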
Requirements:
5+ years of prior experience as a data engineer or data infra engineer

B.S. in Computer Science or equivalent field of study

Knowledge of databases (SQL, NoSQL)

Proven success in building large-scale data infrastructure such as Change Data Capture, leveraging open-source solutions such as Airflow and DBT, building large-scale streaming pipelines, and building customer data platforms

Experience with Python, Pulumi/Terraform, Apache Spark, Snowflake, AWS, K8s, Kafka

Ability to work in an office environment a minimum of 3 days a week

Enthusiasm about learning and adapting to the exciting world of AI; a commitment to exploring this field is a fundamental part of our culture
This position is open to all candidates.
 
Job ID: 8420906

Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an experienced and passionate Staff Data Engineer to join our Data Platform group in TLV as a Tech Lead. As the Group's Tech Lead, you'll shape and implement the technical vision and architecture while staying hands-on across three specialized teams: Data Engineering Infra, Machine Learning Platform, and Data Warehouse Engineering, forming the backbone of Lemonade's data ecosystem.

The group's mission is to build a state-of-the-art Data Platform that drives Lemonade toward becoming the most precise and efficient insurance company on the planet. By embracing Data Mesh principles, we create tools that empower teams to own their data while leveraging a robust, self-serve data infrastructure. This approach enables Data Scientists, Analysts, Backend Engineers, and other stakeholders to seamlessly access, analyze, and innovate with reliable, well-modeled, and queryable data, at scale.

In this role you'll:
Technically lead the group by shaping the architecture, guiding design decisions, and ensuring the technical excellence of the Data Platform's three teams

Design and implement data solutions that address both applicative needs and data analysis requirements, creating scalable and efficient access to actionable insights

Drive initiatives in Data Engineering Infra, including building robust ingestion layers, managing streaming ETLs, and guaranteeing data quality, compliance, and platform performance

Develop and maintain the Data Warehouse, integrating data from various sources for optimized querying, analysis, and persistence, supporting informed decision-making

Leverage data modeling and transformations to structure, cleanse, and integrate data, enabling efficient retrieval and strategic insights

Build and enhance the Machine Learning Platform, delivering infrastructure and tools that streamline the work of Data Scientists, enabling them to focus on developing models while benefiting from automation for production deployment, maintenance, and improvements. Support cutting-edge use cases like feature stores, real-time models, point-in-time (PIT) data retrieval, and telematics-based solutions (see the sketch after this list)

Collaborate closely with other Staff Engineers across Lemonade to align on cross-organizational initiatives and technical strategies

Work seamlessly with Data Engineers, Data Scientists, Analysts, Backend Engineers, and Product Managers to deliver impactful solutions

Share knowledge, mentor team members, and champion engineering standards and technical excellence across the organization
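
The point-in-time (PIT) retrieval mentioned above boils down to an as-of join: each training example should only see the latest feature value known at its own timestamp. A minimal pandas sketch of the idea, with hypothetical entity and column names:

```python
import pandas as pd

# Feature values with the time at which they became known.
features = pd.DataFrame({
    "user_id": [1, 1, 2],
    "event_time": pd.to_datetime(["2024-01-01", "2024-01-10", "2024-01-05"]),
    "claims_count": [0, 1, 3],
})

# Label rows (e.g., model training examples) with their own timestamps.
labels = pd.DataFrame({
    "user_id": [1, 2],
    "label_time": pd.to_datetime(["2024-01-07", "2024-01-06"]),
    "churned": [0, 1],
})

# As-of join: for each label, take the most recent feature row at or before label_time.
pit = pd.merge_asof(
    labels.sort_values("label_time"),
    features.sort_values("event_time"),
    left_on="label_time",
    right_on="event_time",
    by="user_id",
    direction="backward",
)
print(pit[["user_id", "label_time", "claims_count", "churned"]])
```
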
Requirements:
8+ years of experience in data-related roles such as Data Engineer, Data Infrastructure Engineer, BI Engineer, or Machine Learning Platform Engineer, with significant experience in at least two of these areas

A B.Sc. in Computer Science or a related technical field (or equivalent experience)

Extensive expertise in designing and implementing Data Lakes and Data Warehouses, including strong skills in data modeling and building scalable storage solutions

Proven experience in building large-scale data infrastructures, including both batch processing and streaming pipelines

A deep understanding of Machine Learning infrastructure, including tools and frameworks that enable Data Scientists to efficiently develop, deploy, and maintain models in production, an advantage

Proficiency in Python, Pulumi/Terraform, Apache Spark, AWS, Kubernetes (K8s), and Kafka for building scalable, reliable, and high-performing data solutions

Strong knowledge of databases, including SQL (schema design, query optimization) and NoSQL, with a solid understanding of their use cases

Ability to work in an office environment a minimum of 3 days a week

Enthusiasm about learning and adapting to the exciting world of AI; a commitment to exploring this field is a fundamental part of our culture
This position is open to all candidates.
 
Job ID: 8420751

Location: Tel Aviv-Yafo
Job Type: Full Time
We're seeking our first Data Engineer to join the Revenue Operations team. This is a high-impact role where you'll build the foundations of our data infrastructure: connecting the dots between systems, designing and maintaining our data warehouse, and creating reliable pipelines that bring together all revenue-related data. You'll work directly with the Director of Revenue Operations and partner closely with Sales, Finance, and Customer Success.
This is a chance to shape the role from the ground up and create a scalable data backbone that powers smarter decisions across the company.
Role Overview
As the Data Engineer, you will own the design, implementation, and evolution of our data infrastructure. You'll connect core business systems (CRM, finance platforms, billing systems) into a central warehouse, ensure data quality, and make insights accessible to leadership and revenue teams. Your success will be measured by the accuracy, reliability, and usability of the data foundation you build.
Key Responsibilities
Data Infrastructure & Warehousing
Design, build, and maintain a scalable data warehouse for revenue-related data.
Build ETL/ELT pipelines that integrate data from HubSpot, Netsuite, billing platforms, ACP, and other business tools.
Develop a clear data schema and documentation that can scale as we grow.
Cross-Functional Collaboration
Work closely with Sales, Finance, and Customer Success to understand their reporting and forecasting needs.
Translate business requirements into data models that support dashboards, forecasting, and customer health metrics.
Act as the go-to partner for data-related questions across revenue teams.
Scalability & Optimization
Continuously monitor and optimize pipeline performance and warehouse scalability.
Ensure the infrastructure can handle increased data volume and complexity as the company grows.
Establish and enforce best practices for data quality, accuracy, and security.
Evaluate and implement new tools, frameworks, or architectures that improve automation, speed, and reliability.
Build reusable data models and modular pipelines to shorten development time and reduce maintenance.
Requirements:
4-6 years of experience as a Data Engineer or in a similar role (preferably in SaaS, Fintech, or fast-growing B2B companies).
Strong expertise in SQL and data modeling; comfort working with large datasets.
Hands-on experience building and maintaining ETL/ELT pipelines (using tools such as Fivetran, dbt, Airflow, or similar).
Experience designing and managing cloud-based data warehouses (Snowflake, BigQuery, Redshift, or similar).
Familiarity with CRM (HubSpot), ERP/finance systems (Netsuite), and billing platforms.
Strong understanding of revenue operations metrics (ARR, MRR, churn, LTV, CAC, etc.).
Ability to translate messy business requirements into clean, reliable data structures.
Solid communication skills: able to explain technical concepts to non-technical stakeholders.
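As a small, purely illustrative companion to the revenue-metrics requirement above, the snippet below computes MRR, ARR, and a simple logo-churn rate from a toy subscriptions table; the field names and the churn definition are assumptions, since real definitions vary by company.

```python
# Toy subscription records: one row per active customer-month.
subscriptions = [
    {"customer": "acme",   "month": "2024-01", "mrr": 1000},
    {"customer": "acme",   "month": "2024-02", "mrr": 1000},
    {"customer": "globex", "month": "2024-01", "mrr": 500},
    # globex has no 2024-02 row, i.e. it churned under this simple definition.
]

def customers_in(month):
    return {row["customer"] for row in subscriptions if row["month"] == month}

def mrr_in(month):
    return sum(row["mrr"] for row in subscriptions if row["month"] == month)

mrr_feb = mrr_in("2024-02")
arr_feb = mrr_feb * 12  # ARR as annualized MRR

jan, feb = customers_in("2024-01"), customers_in("2024-02")
churn_rate = len(jan - feb) / len(jan)  # logo churn: lost customers / starting customers

print(f"MRR={mrr_feb}, ARR={arr_feb}, monthly churn={churn_rate:.0%}")
```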
What Sets You Apart
You've been the first data hire before and know how to build from scratch (not a must).
Strong business acumen with a focus on revenue operations.
A builder mindset: you like solving messy data problems and making systems talk.
Comfortable working across teams and translating business needs into data solutions.
This position is open to all candidates.
 
Job ID: 8419332

Location: Tel Aviv-Yafo
Job Type: Full Time
As a Data Engineer at our company, you will shape the future of the people-facing and business-facing products we build across our entire family of applications. Your technical skills and analytical mindset will be utilized in designing and building some of the world's most extensive data sets, helping to craft experiences for billions of people and hundreds of millions of businesses worldwide. In this role, you will collaborate with software engineering, data science, and product management teams to design and build scalable data solutions across our company to optimize growth, strategy, and user experience for our 3 billion plus users, as well as our internal employee community. You will be at the forefront of identifying and solving some of the most interesting data challenges at a scale few companies can match. By joining our company, you will become part of a world-class data engineering community dedicated to skill development and career growth in data engineering and beyond.
Data Engineering: You will guide teams by building optimal data artifacts (including datasets and visualizations) to address key questions. You will refine our systems, design logging solutions, and create scalable data models. Ensuring data security and quality, and with a focus on efficiency, you will suggest architecture and development approaches and data management standards to address complex analytical problems.
Product leadership: You will use data to shape product development, identify new opportunities, and tackle upcoming challenges. You'll ensure our products add value for users and businesses by prioritizing projects and driving innovative solutions to respond to challenges or opportunities.
Communication and influence: You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations. You will build credibility through structure and clarity, and be a trusted strategic partner.
Data Engineer, Product Analytics Responsibilities
Collaborate with engineers, product managers, and data scientists to understand data needs, representing key data insights in a meaningful way
Design, build, and launch collections of sophisticated data models and visualizations that support multiple use cases across different products or domains
Define and manage Service Level Agreements for all data sets in allocated areas of ownership
Solve challenging data integration problems, utilizing optimal Extract, Transform, Load (ETL) patterns, frameworks, query techniques, sourcing from structured and unstructured data sources
Improve logging
Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts
Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts
Influence product and cross-functional teams to identify data opportunities to drive impact.
Requirements:
Minimum Qualifications
Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent
2+ years of experience where the primary responsibility involves working with data. This could include roles such as data analyst, data scientist, data engineer, or similar positions
2+ years of experience with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala or others.)
Preferred Qualifications
Master's or Ph.D degree in a STEM field.
This position is open to all candidates.
 
Job ID: 8419273

Job Type: Full Time
Required Data Infrastructure Engineer
What You'll Do:
Design, implement, and enhance robust and scalable infrastructure that enables efficient deployment, monitoring, and management of machine learning models in production. In this role, you will bridge the gap between research and production environments, streamline data and feature pipelines, optimize model serving, and ensure governance and reproducibility across our ML lifecycle.
Responsibilities:
Decouple data prep from model training to accelerate experimentation and deployment
Build efficient data workflows with versioning, lineage, and optimized resource use (e.g., Snowflake, Dask, Airflow)
Develop reproducible training pipelines with MLflow, supporting GPU and distributed training (see the sketch after this list)
Automate and standardize model deployment with pre-deployment testing (E2E, dark mode)
Maintain a model repository with traceability, governance, and consistent metadata
Monitor model performance, detect drift, and trigger alerts across the ML lifecycle
Enable model comparison with A/B testing and continuous validation
Support infrastructure for deploying LLMs, embeddings, and advanced ML use cases
Manage a unified feature store with history, drift detection, and centralized feature/label tracking
Establish a single source of truth for features across research and production.
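For the reproducible-training bullet above, a minimal MLflow tracking sketch might look like the following; the experiment name, parameters, and the scikit-learn stand-in model are placeholders rather than details from the posting.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical experiment; a real pipeline would point the tracking URI at a
# shared MLflow server instead of the local ./mlruns directory.
mlflow.set_experiment("demo-training-pipeline")

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)                                    # reproducibility: inputs
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")                     # versioned model artifact
```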
Requirements:
3+ years of experience as an MLOps, ML Infrastructure, or Software Engineer in ML-driven environments, preferably with PyTorch.
Strong proficiency in Python, SQL (leveraging platforms like Snowflake and RDS), and distributed computing frameworks (e.g., Dask, Spark) for processing large-scale data in formats like Parquet.
Hands-on experience with feature stores, key-value stores like Redis, MLflow (or similar tools), Kubernetes, Docker, cloud infrastructure (AWS, specifically S3 and EC2), and orchestration tools (Airflow).
Proven ability to build and maintain scalable and version-controlled data pipelines, including real-time streaming with tools like Kafka.
Experience in designing and deploying robust ML serving infrastructures with CI/CD automation.
Familiarity with monitoring tools and practices for ML systems, including drift detection and model performance evaluation.
Nice to Have
Experience with GPU optimization frameworks and distributed training.
Familiarity with advanced ML deployments, including NLP and embedding models.
Knowledge of data versioning tools (e.g., DVC) and infrastructure-as-code practices.
Prior experience implementing structured A/B testing or dark mode deployments for ML models.
This position is open to all candidates.
 
Job ID: 8419058

Location: Tel Aviv-Yafo
Job Type: Full Time
As a Data Engineer at our company, you will shape the future of the people-facing and business-facing products we build across our entire family of applications. Your technical skills and analytical mindset will be utilized in designing and building some of the world's most extensive data sets, helping to craft experiences for billions of people and hundreds of millions of businesses worldwide. In this role, you will collaborate with software engineering, data science, and product management teams to design and build scalable data solutions across our company to optimize growth, strategy, and user experience for our 3 billion plus users, as well as our internal employee community. You will be at the forefront of identifying and solving some of the most interesting data challenges at a scale few companies can match. By joining our company, you will become part of a world-class data engineering community dedicated to skill development and career growth in data engineering and beyond.
Data Engineering: You will guide teams by building optimal data artifacts (including datasets and visualizations) to address key questions. You will refine our systems, design logging solutions, and create scalable data models. Ensuring data security and quality, and with a focus on efficiency, you will suggest architecture and development approaches and data management standards to address complex analytical problems.
Product leadership: You will use data to shape product development, identify new opportunities, and tackle upcoming challenges. You'll ensure our products add value for users and businesses by prioritizing projects and driving innovative solutions to respond to challenges or opportunities.
Communication and influence: You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations. You will build credibility through structure and clarity, and be a trusted strategic partner.
Data Engineer, Product Analytics Responsibilities
Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems
Create and contribute to frameworks that improve the efficacy of logging data, while working with data infrastructure to triage issues and resolve
Collaborate with engineers, product managers, and data scientists to understand data needs, representing key data insights in a meaningful way
Define and manage Service Level Agreements for all data sets in allocated areas of ownership
Determine and implement the security model based on privacy requirements, confirm safeguards are followed, address data quality issues, and evolve governance processes within allocated areas of ownership
Design, build, and launch collections of sophisticated data models and visualizations that support multiple use cases across different products or domains
Solve our most challenging data integration problems, utilizing optimal Extract, Transform, Load (ETL) patterns, frameworks, query techniques, sourcing from structured and unstructured data sources
Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts
Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts
Influence product and cross-functional teams to identify data opportunities to drive impact
Mentor team members by giving/receiving actionable feedback
Requirements:
Minimum Qualifications
Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent
4+ years of experience where the primary responsibility involves working with data. This could include roles such as data analyst, data scientist, data engineer, or similar positions
4+ years of experience (or a minimum of 2+ years with a Ph.D.) with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala, etc.). This position is open to all candidates.
 
Job ID: 8418884
 
Exclusive listing
14/11/2025
Location: Multiple locations
Job Type: Full Time
A cyber company in central Israel is hiring a Data Engineer.
The role includes: working in a team, leading development projects for data services, supporting customers working with large volumes of data, working with advanced AI technologies, and more.
Requirements:
- 4 years of experience as a Data Engineer
- Experience working with Spark
- Development experience in Scala / Python
- Experience working with Linux, Docker, and Kubernetes
- Development experience in NodeJS, TypeScript - an advantage
This position is open to all candidates.
 
Job ID: 8410150

13/11/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
What You'll Do:
You'll be part of the group responsible for our company's main products. You will impact the work of developers in the group by designing, building, and maintaining the core infrastructure of our solution, and by leading the research and development of new technologies as well as maintaining code standards and practices.
What does the day-to-day of an Infrastructure Engineer at our company look like?
You will be working on our company's core B2B platform that serves tens of thousands of customers, serving hundreds of terabytes in production. Our backend engineers are responsible for the entire data lifecycle - from our endless data lakes, through choosing the right serving methods and databases, all the way to our API services.
Your role will include:
Design and implement scalable backend services and libraries that are reusable and maintainable, serving as the foundation for various applications across the company.
Build and maintain tools that streamline development workflows, enabling product teams to focus on delivering business value.
Define and promote best practices for code quality, performance, and reliability, ensuring healthy production environments and rapid development cycles.
Lead the adoption and integration of AI tools to assist in code generation, testing, documentation, and debugging, thereby accelerating development processes.
Perform proof-of-concepts (POCs) on emerging technologies, including AI agents and platforms, to assess their applicability and benefits to our development ecosystem.
Drive cross-team technical projects aimed at improving infrastructure scalability, reliability, and developer experience.
Analyze and resolve complex production issues, ensuring minimal downtime and optimal performance.
Contribute to the evolution of our system architecture, ensuring it supports rapid development and scaling needs.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
5+ years of experience in backend development, with a strong focus on infrastructure and platform engineering.
Proficiency in programming languages such as C#, Python, Java, or Go.
Experience building large-scale infrastructure applications or large-scale web applications.
Experience improving the stability of large-scale systems using monitoring, solving bottlenecks, and making appropriate changes.
High coding standards, the ability to work independently, and experience leading long-term tech tasks involving many teams and stakeholders.
Experience with cloud platforms (e.g., AWS, GCP, Azure) and container orchestration tools like Kubernetes.
Familiarity with CI/CD pipelines and infrastructure-as-code tools (e.g., Terraform, Ansible).
Demonstrated experience in integrating and leveraging AI tools to enhance development workflows.
This position is open to all candidates.
 
Job ID: 8413715

13/11/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Data Engineer to join our team and help advance our Apps solution. Our product is designed to provide detailed and accurate insights into Apps Analytics, such as traffic estimation, revenue analysis, and app characterization. The role involves constructing and maintaining scalable data pipelines, developing and integrating machine learning models, and ensuring data integrity and efficiency. You will work closely with a diverse team of scientists, engineers, and analysts, and collaborate with business and product stakeholders.
Key Responsibilities:
Develop and implement complex, innovative big data ML algorithms for new features, working in collaboration with data scientists and analysts.
Optimize and maintain end-to-end data pipelines using big data technologies to ensure efficiency and performance.
Monitor data pipelines to ensure data integrity and promptly troubleshoot any issues that arise.
Requirements:
Bachelor's degree in Computer Science or equivalent practical experience.
At least 3 years of experience in data engineering or related roles.
Experience with big data Machine Learning - a must.
Proficiency in Python - a must. Scala is a plus.
Experience with Big Data technologies including Spark, EMR and Airflow.
Experience with containerization/orchestration platforms such as Docker and Kubernetes.
Familiarity with distributed computing on the cloud (such as AWS or GCP).
Strong problem-solving skills and ability to learn new technologies quickly.
Being goal-driven and efficient.
Excellent communication skills and ability to work independently and in a team.
Why us?
Work on challenging algorithmic problems and solve real customer issues.
Collaborate with the best engineers and scientists in the industry.
Opportunity to experiment with and implement new technologies on large-scale data.
This position is open to all candidates.
 
Job ID: 8413711