Data Engineering Lead (Hands-On)
Posted 6 hours ago
Location: Netanya and Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Data Engineering Lead to join our Platform Group at our company. In this role, you will drive the development of scalable data pipelines and infrastructure that are crucial to our platform's success. You will collaborate across departments and innovate to ensure our data ecosystem is robust, secure, and optimized for growth.
As a Data Engineering Lead at our company, you will...
Architect and develop data pipelines: Lead the design and implementation of data pipelines that support our company's platform, ensuring high data quality, security, and governance. Introduce new tools and technologies to enhance data workflows and integration (a minimal pipeline sketch follows this list).
Develop a strategic roadmap that outlines key engineering solutions to support our platform's scalability and performance, aligned with our company's overall vision and objectives.
Collaborate across teams: Work closely with internal teams, including DevOps, BI, Product, and development groups, to ensure seamless data integration and drive data-driven decision-making across the organization.
Establish data guidelines and documentation: Define best practices for data generation, consumption, and management within the platform. Create thorough documentation for all data processes to facilitate clear communication and future maintenance.
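As a loose illustration of the pipeline work described in the first item above, here is a minimal PySpark batch sketch. It is not this company's actual stack; the bucket paths, column names, and partitioning are hypothetical.

```python
# Minimal PySpark batch ETL sketch -- illustrative only; paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_events_etl").getOrCreate()

# Extract: read one day of raw JSON events from a hypothetical landing bucket.
raw = spark.read.json("s3://example-raw-bucket/events/2025-06-01/")

# Transform: drop malformed rows, normalize timestamps, deduplicate by event id.
clean = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Load: write partitioned Parquet for downstream analytics and governance checks.
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated-bucket/events/"
)
```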
Requirements:
5+ years hands-on proven experience designing, building, and optimizing scalable and highly available data-intensive systems
In-depth understanding of big data engines and frameworks, such as Spark, and experience with ETL/ELT tools for robust data pipeline development
Proven ability to lead initiatives and drive technical agendas in a hands-on capacity, with potential for team building in the future
Knowledge of machine learning frameworks and strategic industry trends a plus
Strong business skills and strategic thinking, an innovative and growth mindset
Strong interpersonal skills to collaborate with internal teams and external partners effectively.
This position is open to all candidates.
 
Similar jobs that may interest you
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an experienced and passionate Staff Data Engineer to join our Data Platform group in TLV as a Tech Lead. As the group's Tech Lead, you'll shape and implement the technical vision and architecture while staying hands-on across three specialized teams: Data Engineering Infra, Machine Learning Platform, and Data Warehouse Engineering, forming the backbone of our company's data ecosystem.
The group's mission is to build a state-of-the-art Data Platform that drives our company toward becoming the most precise and efficient insurance company on the planet. By embracing Data Mesh principles, we create tools that empower teams to own their data while leveraging a robust, self-serve data infrastructure. This approach enables Data Scientists, Analysts, Backend Engineers, and other stakeholders to seamlessly access, analyze, and innovate with reliable, well-modeled, and queryable data, at scale.
In this role you'll:
Technically lead the group by shaping the architecture, guiding design decisions, and ensuring the technical excellence of the Data Platform's three teams
Design and implement data solutions that address both applicative needs and data analysis requirements, creating scalable and efficient access to actionable insights
Drive initiatives in Data Engineering Infra, including building robust ingestion layers, managing streaming ETLs, and guaranteeing data quality, compliance, and platform performance
Develop and maintain the Data Warehouse, integrating data from various sources for optimized querying, analysis, and persistence, supporting informed decision-making
Leverage data modeling and transformations to structure, cleanse, and integrate data, enabling efficient retrieval and strategic insights
Build and enhance the Machine Learning Platform, delivering infrastructure and tools that streamline the work of Data Scientists, enabling them to focus on developing models while benefiting from automation for production deployment, maintenance, and improvements. Support cutting-edge use cases like feature stores, real-time models, point-in-time (PIT) data retrieval, and telematics-based solutions (a point-in-time retrieval sketch follows this list)
Collaborate closely with other Staff Engineers across our company to align on cross-organizational initiatives and technical strategies
Work seamlessly with Data Engineers, Data Scientists, Analysts, Backend Engineers, and Product Managers to deliver impactful solutions
Share knowledge, mentor team members, and champion engineering standards and technical excellence across the organization.
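A minimal sketch of the point-in-time (PIT) feature retrieval mentioned in the list above: for each label, it keeps the most recent feature row observed at or before the label timestamp to avoid leakage. The table and column names (`ml.training_labels`, `ml.feature_snapshots`) are hypothetical, not this company's schema.

```python
# Point-in-time (PIT) feature retrieval sketch -- illustrative; schemas are hypothetical.
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.appName("pit_join_sketch").getOrCreate()

labels = spark.table("ml.training_labels")      # entity_id, label_ts, label
features = spark.table("ml.feature_snapshots")  # entity_id, feature_ts, feature_vector

# Keep only feature rows observed at or before each label's timestamp,
# then pick the most recent one per (entity_id, label_ts) to avoid leakage.
joined = labels.join(features, "entity_id").where(F.col("feature_ts") <= F.col("label_ts"))

w = Window.partitionBy("entity_id", "label_ts").orderBy(F.col("feature_ts").desc())
pit = (
    joined.withColumn("rn", F.row_number().over(w))
          .where(F.col("rn") == 1)
          .drop("rn")
)

pit.write.mode("overwrite").saveAsTable("ml.training_set_pit")
```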
Requirements:
8+ years of experience in data-related roles such as Data Engineer, Data Infrastructure Engineer, BI Engineer, or Machine Learning Platform Engineer, with significant experience in at least two of these areas
A B.Sc. in Computer Science or a related technical field (or equivalent experience)
Extensive expertise in designing and implementing Data Lakes and Data Warehouses, including strong skills in data modeling and building scalable storage solutions
Proven experience in building large-scale data infrastructures, including both batch processing and streaming pipelines
A deep understanding of Machine Learning infrastructure, including tools and frameworks that enable Data Scientists to efficiently develop, deploy, and maintain models in production, an advantage
Proficiency in Python, Pulumi/Terraform, Apache Spark, AWS, Kubernetes (K8s), and Kafka for building scalable, reliable, and high-performing data solutions
Strong knowledge of databases, including SQL (schema design, query optimization) and NoSQL, with a solid understanding of their use cases
Ability to work in an office environment a minimum of 3 days a week
Enthusiasm about learning and adapting to the exciting world of AI; a commitment to exploring this field is a fundamental part of our culture.
This position is open to all candidates.
 
29/06/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We seek a Director of Data to join us and lead our data group.
As our Director of Data, you will be a key member of our R&D leadership team. You will be responsible for developing and executing a data strategy that aligns with our business goals, overseeing data management, analytics, and validation, and ensuring data integrity at every stage of product development and production.
A day in the life and how you'll make an impact:
Define and execute a strategic data roadmap aligned with business objectives, fostering a data-driven culture and leading a high-performing team of data engineers, scientists, and analysts.
Establish robust data validation frameworks, ensuring product integrity and accuracy through all stages, from data acquisition to end-user delivery.
Build and optimize scalable data infrastructure and pipelines to support our data needs and ensure data security, compliance, and accessibility.
Collaborate with product and engineering teams to create and launch data-driven products, ensuring they are built on reliable data and designed to meet customer needs.
Guide the team in generating actionable insights to drive business decisions and product innovation in areas such as personalization, marketing, and customer success.
Implement data governance policies and maintain compliance with industry regulations and best practices.
Requirements:
10+ years of experience in data-related roles, with at least 5 years in a leadership position (ideally within a tech or AI-driven startup environment).
M.Sc. or PhD in Data Science/Computer Science/Engineering/Statistics, or a related field.
Extensive experience with cloud platforms (AWS, GCP, or Azure) and modern data warehouses (Snowflake, BigQuery, or Redshift).
Proficiency in data technologies, such as SQL, Python, R, Looker and big data tools (e.g., Hadoop, Spark).
Proven experience in leveraging data for product development, business intelligence, and operational optimization.
Strong track record of building and managing cross-functional data teams and influencing across all levels of an organization.
Excellent communication skills, with the ability to convey complex data insights in an accessible manner to non-technical stakeholders.
This position is open to all candidates.
 
17/06/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
As a Data Engineer Tech Lead, you will play a central role in shaping our data strategy and guiding the team in building and optimizing the analytics data pipelines that power business decisions across the organization. Beyond driving complex data initiatives, you'll be a hands-on mentor and technical advisor, supporting other data engineers, fostering a culture of collaboration, and ensuring the team's overall growth and success.
Key Responsibilities:
Lead and mentor a team of data engineers, providing technical guidance, support, and regular feedback to enable their growth.
Architect, design, and evolve robust, scalable data pipelines and ETL/ELT processes for diverse structured and unstructured data sources.
Serve as a technical advisor for data best practices across the organization, collaborating with data analysts, data scientists, engineers, and product managers.
Define and evolve analytics-focused data models that empower business stakeholders with timely and reliable data.
Drive initiatives to improve data quality, governance, and observability across the analytics stack.
Act as a bridge between technical and non-technical stakeholders, helping translate business needs into actionable data solutions.
Champion engineering excellence, including code reviews, documentation, and knowledge-sharing.
What your day might look like:
Leading the most complex data projects our company has to offer, hands-on (a minimal orchestration sketch follows this list).
Supporting other engineers by reviewing code, troubleshooting architecture, and encouraging best practices.
Leading the technical direction of new pipelines or data modeling initiatives.
Meeting with product managers and analysts to align on priorities and define the right data solutions.
Guiding your team through design discussions, trade-offs, and implementation decisions.
Helping the team deliver with high reliability, scalability, and speed.
Driving initiatives that improve the maintainability, observability, and impact of the data platform.
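To make the orchestration work above concrete, here is a minimal Airflow DAG sketch. The DAG id, schedule, and task callables are placeholders, not this team's actual pipelines.

```python
# Minimal Airflow DAG sketch -- illustrative; dag_id, schedule, and callables are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw data from source systems")


def transform():
    print("clean and model the data for analytics")


def load():
    print("publish analytics-ready tables")


with DAG(
    dag_id="analytics_daily_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Run the steps strictly in sequence.
    t_extract >> t_transform >> t_load
```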
Requirements:
7+ years of experience in data engineering or related roles, with a proven track record of technical leadership and team support.
Expertise in SQL and advanced proficiency in Python for data workflows.
Extensive experience designing and implementing scalable, analytics-ready data models and cloud data warehouses.
Hands-on expertise with data orchestration and transformation frameworks (e.g., Airflow, Luigi, AWS Glue, DBT).
Strong familiarity with cloud data services (e.g., AWS, GCP, Azure).
Demonstrated ability to lead and support teams, communicate effectively, and foster collaboration across functions.
Bonus Points:
Experience mentoring or managing engineers in a fast-paced environment.
Strong understanding of business metrics and how to translate data into strategic insights.
Familiarity with data governance and observability best practices.
Passion for building data culture and driving adoption of data tools across the company.
This position is open to all candidates.
 
19/06/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a talented and passionate Data Engineer to join our growing Data team. In this pivotal role, you will be instrumental in designing, building, and optimizing the critical data infrastructure that underpins our innovative creative intelligence platform. You will tackle complex data challenges, ensuring our systems are robust, scalable, and capable of delivering high-quality data to power our advanced AI models, customer-facing analytics, and internal business intelligence. This is an opportunity to make a significant impact on our product, contribute to a data-driven culture, and help solve fascinating problems at the intersection of data, AI, and marketing technology.
Key Responsibilities
Architect & Develop Data Pipelines: Design, implement, and maintain sophisticated, end-to-end data pipelines for ingesting, processing, validating, and transforming large-scale, diverse datasets.
Manage Data Orchestration: Implement and manage robust workflow orchestration for complex, multi-step data processes, ensuring reliability and visibility.
Advanced Data Transformation & Modeling: Develop and optimize complex data transformations using advanced SQL and other data manipulation techniques. Contribute to the design and implementation of effective data models for analytical and operational use.
Ensure Data Quality & Platform Reliability: Establish and improve processes for data quality assurance, monitoring, alerting, and performance optimization across the data platform. Proactively identify and resolve data integrity and pipeline issues (a small quality-check sketch follows this list).
Cross-Functional Collaboration: Partner closely with AI engineers, product managers, developers, customer success and other stakeholders to understand data needs, integrate data solutions, and deliver features that provide exceptional value.
Drive Data Platform Excellence: Contribute to the evolution of our data architecture, champion best practices in data engineering (e.g., DataOps principles), and evaluate emerging technologies to enhance platform capabilities, stability, and cost-effectiveness.
Foster a Culture of Learning & Impact: Actively share knowledge, contribute to team growth, and maintain a strong focus on how data engineering efforts translate into tangible product and business outcomes.
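A small, generic quality-check sketch in the spirit of the data quality responsibility above. The key columns and checks are hypothetical, and a real pipeline would publish these metrics to monitoring and alerting rather than print them.

```python
# Generic data-quality gate sketch -- illustrative; column names and checks are hypothetical.
import pandas as pd


def run_quality_checks(df: pd.DataFrame) -> dict:
    """Return simple data-quality metrics for a daily batch."""
    return {
        "row_count": len(df),
        "null_event_ids": int(df["event_id"].isna().sum()),
        "duplicate_event_ids": int(df["event_id"].duplicated().sum()),
        "future_timestamps": int((df["event_ts"] > pd.Timestamp.now(tz="UTC")).sum()),
    }


if __name__ == "__main__":
    # Tiny synthetic batch with one null id, one duplicate, and one future timestamp.
    batch = pd.DataFrame(
        {
            "event_id": [1, 2, 2, None],
            "event_ts": pd.to_datetime(
                ["2025-06-01", "2025-06-01", "2025-06-01", "2030-01-01"], utc=True
            ),
        }
    )
    results = run_quality_checks(batch)
    print(results)
    assert results["null_event_ids"] == 1 and results["duplicate_event_ids"] == 1
```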
Requirements:
3+ years of experience as a Data Engineer, building and managing complex data pipelines and data-intensive applications.
Solid understanding and application of software engineering principles and best practices. Proficiency in a relevant programming language (e.g., Python, Scala, Java) is highly desirable.
Deep expertise in writing, optimizing, and troubleshooting complex SQL queries for data transformation, aggregation, and analysis in relational and analytical database environments.
Hands-on experience with distributed data processing systems, cloud-based data platforms, data warehousing concepts, and workflow management tools.
Strong ability to diagnose complex technical issues, identify root causes, and develop effective, scalable solutions.
A genuine enthusiasm for tackling new data challenges, exploring innovative technologies, and continually expanding your skillset.
A keen interest in understanding how data powers product features and drives business value, with a focus on delivering results.
Excellent ability to communicate technical ideas clearly and work effectively within a multi-disciplinary team environment.
Advantages:
Familiarity with the marketing/advertising technology domain and associated datasets.
Experience with data related to creative assets, particularly video or image analysis.
Understanding of MLOps principles or experience supporting machine learning workflows.
This position is open to all candidates.
 
15/06/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a highly skilled and experienced BackEnd Team Lead to join our dynamic team in Tel Aviv. In this pivotal role, you will oversee the design, development, and optimization of our infrastructure, ensuring efficient data flow and quality to support data-driven decision-making across the organization. You will lead a team of backend developers and data engineers, collaborate with cross-functional teams, and drive initiatives to enhance our data collection and data pipeline capabilities.
Key Responsibilities:
Lead and oversee a team of BackEnd and Data Engineers, mentoring team members and fostering skill development and professional growth.
Establish and enforce data quality standards, implementing processes for data accuracy, reconciliation, and consistency.
Oversee the design and implementation of robust ETL processes to ensure accurate and timely data integration from various sources.
Collaborate across the business with various stakeholders, such as the CTO, Data Analysts and Product Managers to deliver impactful solutions
Continuously monitor and optimize system flows for performance, scalability, and cost-efficiency.
Requirements:
At least 7 years of experience in backend or data engineering, with a minimum of 3 years in a team lead role.
Experience with development, deployment, and automation on AWS: S3, SNS, Lambda, Batch, DynamoDB, RDS, etc. (a minimal Lambda sketch follows the requirements list)
Expertise in advanced SQL, including writing complex, efficient queries and building optimized datasets.
Proficient in Python and Python-based data tools.
Experience working with microservices on Kubernetes (K8s)
Proven track record of delivering large-scale, high-quality products
Independent and self-taught, with excellent problem-solving skills and a passion for learning cutting-edge technologies
Proven ability to lead and mentor a team, manage projects, and collaborate effectively with cross-functional teams.
Outstanding interpersonal skills
Advantage:
Experience in building data stream pipelines and ETL using Big Data frameworks such as Spark and Apache Airflow
Experience with Serverless frameworks
Experience with streaming technologies such as Amazon Kinesis and Apache Kafka.
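A minimal sketch of the kind of serverless glue named in the AWS requirement above (S3, SNS, Lambda). The bucket, topic ARN, and event payload are hypothetical.

```python
# Minimal AWS Lambda handler sketch -- illustrative; bucket, topic ARN, and payload are hypothetical.
import json
import os

import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")


def handler(event, context):
    """Persist an incoming record to S3 and notify downstream consumers via SNS."""
    record = event.get("record", {})
    key = f"ingested/{record.get('id', 'unknown')}.json"

    s3.put_object(
        Bucket=os.environ.get("TARGET_BUCKET", "example-bucket"),
        Key=key,
        Body=json.dumps(record).encode("utf-8"),
    )
    sns.publish(
        TopicArn=os.environ.get("NOTIFY_TOPIC_ARN", "arn:aws:sns:..."),
        Message=json.dumps({"status": "ingested", "key": key}),
    )
    return {"statusCode": 200, "body": key}
```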
This position is open to all candidates.
 
Posted 7 hours ago
Location: Tel Aviv-Yafo and Netanya
Job Type: Full Time
As a Big Data & GenAI Engineering Lead within our company's Data & AI Department, you will play a pivotal role in building the data and AI backbone that empowers product innovation and intelligent business decisions. You will lead the design and implementation of our company's next-generation lakehouse architecture, real-time data infrastructure, and GenAI-enriched solutions, helping drive automation, insights, and personalization at scale. In this role, you will architect and optimize our modern data platform while also integrating and operationalizing Generative AI models to support go-to-market use cases. This includes embedding LLMs and vector search into core data workflows, establishing secure and scalable RAG pipelines, and partnering cross-functionally to deliver impactful AI applications.
As a Big Data & GenAI Engineering Lead in our company, you will...
Design, lead, and evolve our company's petabyte-scale Lakehouse and modern data platform to meet performance, scalability, privacy, and extensibility goals.
Architect and implement GenAI-powered data solutions, including retrieval-augmented generation (RAG), semantic search, and LLM orchestration frameworks tailored to business and developer use cases.
Partner with product, engineering, and business stakeholders to identify and develop AI-first use cases, such as intelligent assistants, code insights, anomaly detection, and generative reporting.
Integrate open-source and commercial LLMs securely into data products using frameworks such as LangChain or similar, augmenting the platform's AI capabilities (a framework-agnostic retrieval sketch follows this list).
Collaborate closely with engineering teams to drive instrumentation, telemetry capture, and high-quality data pipelines that feed both analytics and GenAI applications.
Provide technical leadership and mentorship to a cross-functional team of data and ML engineers, ensuring adherence to best practices in data and AI engineering.
Lead tool evaluation, architectural PoCs, and decisions on foundational AI/ML tooling (e.g., vector databases, feature stores, orchestration platforms).
Foster platform adoption through enablement resources, shared assets, and developer-facing APIs and SDKs for accessing GenAI capabilities.
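A framework-agnostic sketch of the retrieval step in a RAG pipeline, as referenced above. The `embed` function is a stand-in for any embedding model, and the documents and prompt template are hypothetical; a production system would use a vector database and an LLM framework rather than in-memory cosine similarity.

```python
# Framework-agnostic RAG retrieval sketch -- illustrative only.
# `embed` is a placeholder for a real embedding model; documents and prompt are hypothetical.
import numpy as np


def embed(text: str) -> np.ndarray:
    """Placeholder embedding: a real system would call an embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(8)


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


documents = [
    "Quarterly churn dropped after the onboarding redesign.",
    "The ingestion pipeline writes curated events to the lakehouse hourly.",
    "Feature flags are managed by the platform team.",
]
doc_vectors = [embed(d) for d in documents]


def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by similarity to the question and keep the top k."""
    q = embed(question)
    scored = sorted(zip(documents, doc_vectors), key=lambda dv: cosine(q, dv[1]), reverse=True)
    return [doc for doc, _ in scored[:k]]


def build_prompt(question: str) -> str:
    """Assemble a grounded prompt from the retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(question))
    return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {question}"


print(build_prompt("How fresh is the lakehouse data?"))
```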
Requirements:
8+ years of experience in data engineering, software engineering, or MLOps, with hands-on leadership in designing modern data platforms and distributed systems.
Proven experience implementing GenAI applications or infrastructure (e.g., building RAG pipelines, vector search, or custom LLM integrations).
Deep understanding of big data technologies (Kafka, Spark, Iceberg, Presto, Airflow) and cloud-native data stacks (e.g., AWS, GCP, or Azure).
Proficiency in Python and experience with GenAI frameworks like LangChain, LlamaIndex, or similar.
Familiarity with modern ML toolchains and model lifecycle management (e.g., MLflow, SageMaker, Vertex AI).
Experience deploying scalable and secure AI solutions with proper attention to privacy, hallucination risk, cost management, and model drift.
Ability to operate in ambiguity, lead complex projects across functions, and translate abstract goals into deliverable solutions.
Excellent communication and collaboration skills, with a passion for pushing boundaries in both data and AI domains.
This position is open to all candidates.
 
04/06/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Data Engineer
What You'll Do:
Shape the Future of Data - Join our mission to build the foundational pipelines and tools that power measurement, insights, and decision-making across our product, analytics, and leadership teams.
Develop the Platform Infrastructure - Build the core infrastructure that powers our data ecosystem, including the Kafka event system, DDL management with Terraform, internal data APIs on top of Databricks, and custom admin tools (e.g., Django-based interfaces); a Kafka-to-lakehouse streaming sketch follows this section.
Build Real-time Analytical Applications - Develop internal web applications to provide real-time visibility into platform behavior, operational metrics, and business KPIs, integrating data engineering with user-facing insights.
Solve Meaningful Problems with the Right Tools - Tackle complex data challenges using modern technologies such as Spark, Kafka, Databricks, AWS, Airflow, and Python. Think creatively to make the hard things simple.
Own It End-to-End - Design, build, and scale our high-quality data platform by developing reliable and efficient data pipelines. Take ownership from concept to production and long-term maintenance.
Collaborate Cross-Functionally - Partner closely with backend engineers, data analysts, and data scientists to drive initiatives from both a platform and business perspective. Help translate ideas into robust data solutions.
Optimize for Analytics and Action - Design and deliver datasets in the right shape, location, and format to maximize usability and impact, whether that's through lakehouse tables, real-time streams, or analytics-optimized storage.
You will report to the Data Engineering Team Lead and help shape a culture of technical excellence, ownership, and impact.
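A minimal sketch of the Kafka-to-lakehouse path referenced above, using Spark Structured Streaming and a Delta table. The broker, topic, schema, checkpoint location, and table name are hypothetical, and the Kafka connector package is assumed to be available.

```python
# Kafka -> Delta streaming sketch (Spark Structured Streaming) -- illustrative;
# broker, topic, schema, and table names are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

# Read raw events from a hypothetical Kafka topic.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "platform-events")
    .load()
)

# Parse the JSON payload into typed columns.
events = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
       .select("e.*")
)

# Append the parsed events to a Delta table with checkpointing for exactly-once recovery.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/platform-events/")
    .outputMode("append")
    .toTable("analytics.platform_events")
)
query.awaitTermination()
```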
Requirements:
5+ years of hands-on experience as a Data Engineer, building and operating production-grade data systems.
3+ years of experience with Spark, SQL, Python, and orchestration tools like Airflow (or similar).
Degree in Computer Science, Engineering, or a related quantitative field.
Proven track record in designing and implementing high-scale ETL pipelines and real-time or batch data workflows.
Deep understanding of data lakehouse and warehouse architectures, dimensional modeling, and performance optimization.
Strong analytical thinking, debugging, and problem-solving skills in complex environments.
Familiarity with infrastructure as code, CI/CD pipelines, and building data-oriented microservices or APIs.
Enthusiasm for AI-driven developer tools such as Cursor.AI or GitHub Copilot.
Bonus points:
Hands-on experience with our stack: Databricks, Delta Lake, Kafka, Docker, Airflow, Terraform, and AWS.
Experience in building self-serve data platforms and improving developer experience across the organization.
This position is open to all candidates.
 
01/07/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are making the future of Mobility come to life starting today.
At our company we support the world's largest vehicle fleet operators and transportation providers to optimize existing operations and seamlessly launch new, dynamic business models - driving efficient operations and maximizing utilization.
At the heart of our platform lies the data infrastructure, driving advanced machine learning models and optimization algorithms. As the owner of data pipelines, you'll tackle diverse challenges spanning optimization, prediction, modeling, inference, transportation, and mapping.
As a Senior Data Engineer, you will play a key role in owning and scaling the backend data infrastructure that powers our platform, supporting real-time optimization, advanced analytics, and machine learning applications.
What You'll Do
Design, implement, and maintain robust, scalable data pipelines for batch and real-time processing using Spark and other modern tools.
Own the backend data infrastructure, including ingestion, transformation, validation, and orchestration of large-scale datasets.
Leverage Google Cloud Platform (GCP) services to architect and operate scalable, secure, and cost-effective data solutions across the pipeline lifecycle (a minimal loading sketch follows this list).
Develop and optimize ETL/ELT workflows across multiple environments to support internal applications, analytics, and machine learning workflows.
Build and maintain data marts and data models with a focus on performance, data quality, and long-term maintainability.
Collaborate with cross-functional teams including development teams, product managers, and external stakeholders to understand and translate data requirements into scalable solutions.
Help drive architectural decisions around distributed data processing, pipeline reliability, and scalability.
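A minimal GCP loading sketch of the kind referenced in the list above, using the BigQuery client library. The project, dataset, table, and Cloud Storage URI are hypothetical.

```python
# Minimal BigQuery batch-load sketch -- illustrative; project, dataset, and URI are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

# Load curated Parquet files from Cloud Storage into an analytics table.
load_job = client.load_table_from_uri(
    "gs://example-curated-bucket/trips/date=2025-06-01/*.parquet",
    "example-project.analytics.trips_daily",
    job_config=job_config,
)
load_job.result()  # Blocks until the load job completes.

table = client.get_table("example-project.analytics.trips_daily")
print(f"Loaded {table.num_rows} rows.")
```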
Requirements:
4+ years in backend data engineering or infrastructure-focused software development.
Proficient in Python, with experience building production-grade data services.
Solid understanding of SQL
Proven track record designing and operating scalable, low-latency data pipelines (batch and streaming).
Experience building and maintaining data platforms, including lakes, pipelines, and developer tooling.
Familiar with orchestration tools like Airflow, and modern CI/CD practices.
Comfortable working in cloud-native environments (AWS, GCP), including containerization (e.g., Docker, Kubernetes).
Bonus: Experience working with GCP
Bonus: Experience with data quality monitoring and alerting
Bonus: Strong hands-on experience with Spark for distributed data processing at scale.
Degree in Computer Science, Engineering, or related field.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Data Engineer for the Insight Team to join our Data Group and a new team responsible for developing innovative features based on multiple layers of data. These features will power recommendation systems, insights, and more. This role involves close collaboration with the core teams within the Data Group, working on diverse data pipelines that tackle challenges related to scale and algorithmic optimization, all aimed at enhancing the data experience for our customers.
Where does this role fit in our vision?
Every role at our company is designed with a clear purpose: to integrate collective efforts into our shared success, functioning as pieces of a collective brain. Data is everything; it's at the heart of everything we do. The Data Group is responsible for shaping the experience of hundreds of thousands of users who rely on our data daily.
The Insight Team monitors user behavior across our products, leveraging millions of signals and time-series entities to power a personalized recommendation and ranking system. This enables users to access more unique and tailored data, optimizing their experience while maintaining a strong focus on the key KPIs that drive the success of our Data Group.
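As a loose illustration of turning raw behavioral signals into ranking features of the kind described above (not the team's actual logic), here is a minimal PySpark aggregation sketch; the source table, event names, and 30-day window are assumptions.

```python
# Behavioral-signal aggregation sketch for ranking features -- illustrative;
# table, event names, and the 30-day window are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ranking_signals").getOrCreate()

events = spark.table("raw.user_events")  # user_id, item_id, event_type, event_ts

# Keep only the last 30 days of activity.
recent = events.where(F.col("event_ts") >= F.date_sub(F.current_date(), 30))

# Aggregate per user/item: counts of views and clicks, recency, and a simple rate.
signals = (
    recent.groupBy("user_id", "item_id")
          .agg(
              F.count(F.when(F.col("event_type") == "view", 1)).alias("views_30d"),
              F.count(F.when(F.col("event_type") == "click", 1)).alias("clicks_30d"),
              F.max("event_ts").alias("last_interaction_ts"),
          )
          .withColumn("ctr_30d", F.col("clicks_30d") / F.greatest(F.col("views_30d"), F.lit(1)))
)

signals.write.mode("overwrite").saveAsTable("features.user_item_signals_30d")
```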
What will you be responsible for?
Develop and implement robust, scalable data pipelines and integration solutions within our Databricks-based environment.
Develop models and implement algorithms, with a strong emphasis on delivering high-quality results.
Leverage technologies like Spark, Kafka, and Airflow to tackle complex data challenges and enhance business operations.
Design innovative data solutions that support millions of data points, ensuring high performance and reliability.
Requirements:
3+ years of experience in data engineering, building and optimizing scalable data pipelines.
5+ years of experience as a software developer, preferably in Python.
Algorithmic experience, including developing and optimizing machine learning models and implementing advanced data algorithms.
Experience working with cloud ecosystems, preferably AWS (S3, Glue, EMR, Redshift, Athena) or comparable cloud environments (Azure/GCP).
Expertise in extracting, ingesting, and transforming large datasets efficiently.
Deep knowledge of big data platforms, such as Spark, Databricks, Elasticsearch, and Kafka for real-time data streaming.
(Nice-to-have) Hands-on experience working with Vector Databases and embedding techniques, with a focus on search, recommendations, and personalization.
AI-savvy: comfortable working with AI tools and staying ahead of emerging trends.
This position is open to all candidates.
 
Posted 6 hours ago
Confidential company
Location: Tel Aviv-Yafo and Netanya
Job Type: Full Time
We're looking for a Senior Data Product Manager to lead the strategy and design of B2B data products that empower our company's sales, marketing, finance, and customer success teams with intelligent, insight-driven tools. This role is key to transforming commercial and customer data into trusted, high-quality, and meaningful Data & AI solutions that unlock business growth, optimize the customer journey, and enable AI-powered decision-making.
You will partner with business and data stakeholders to understand their data needs and design platforms and solutions that drive business value, leveraging cutting-edge Data & AI technologies. This is an opportunity to shape the future of data and AI at our company and make a lasting impact.
As a Senior Data Product Manager in our company, you will:
Own the strategy and roadmap for commercial-facing Data & AI products, delivering actionable insights across the customer journey to support demand generation, sales pipeline optimization, customer success, and account growth.
Translate business goals and analytical needs into clear data product and platform requirements that support our company's evolution into a truly data-driven organization.
Turn account data from across the B2B commercial funnel into intuitive tools, dashboards, and data services that help teams make smarter decisions and identify revenue opportunities.
Partner with AI and engineering teams to build GenAI applications that assist sales reps and CS managers with contextual, real-time recommendations, customer insights, and next-best-actions.
Define and drive the structure for core business entities such as accounts, products, and users, ensuring consistency and clarity across all data products through strong master data foundations.
Collaborate with data governance, privacy, and legal stakeholders to ensure that data products are compliant, secure, and high-quality, while building confidence in the data across the business.
Requirements:
5+ years of experience in product management, with at least 3 years focused on data products in a B2B SaaS or enterprise tech environment.
Previous background as a BI developer, data analyst, or data engineer.
Proven ability to design and deliver data products that serve commercial functions such as sales, marketing, customer success, or GTM operations.
Strong understanding of B2B funnels, account-based models, and customer journey data across the lifecycle.
Experience with data modeling, master data management, analysis, and structuring core business entities such as accounts, users, products, and usage.
Familiarity with GenAI/AI-driven product capabilities, including building or collaborating on intelligent assistants, recommendations, or automated insights.
Solid grasp of data governance, privacy, and compliance frameworks (e.g., GDPR, CCPA), with experience ensuring data products meet legal and ethical standards.
Ability to work with technical teams and understand modern data architectures (e.g., data lakes, analytics layers), even if not directly responsible for infrastructure.
Comfortable defining metrics, managing roadmaps, and prioritizing based on business value and impact.
This position is open to all candidates.
 