Data Engineer - RT Big Data Systems

Location: Merkaz
Job Type: Full Time
abra R&D is looking for a Data Engineer for RT Big Data Systems to join the team and design and deploy scalable, standardized, and maintainable data pipelines that enable efficient logging, error handling, and real-time data enrichment. The role requires strong ownership of both implementation and performance.
The role includes:
* Optimize Splunk queries and search performance using best practices
* Build and manage data ingestion pipelines from sources like Kafka, APIs, and log streams
* Standardize error structures (error codes, severity levels, categories)
* Create mappings between identifiers such as session ID, user ID, and service/module components
* Implement real-time data enrichment processes using APIs, databases, or lookups
* Set up alerting configurations with thresholds, modules, and logic-based routing
* Collaborate with developers, DevOps, and monitoring teams to unify logging conventions
* Document flows and ensure traceability across environments
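The error-standardization and enrichment tasks described above can be sketched in Python (the error-code families, severity table, and session-to-user lookup are invented for illustration; in Splunk itself this logic would typically live in SPL lookups):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical severity table keyed by code family; a real deployment would
# load this from a shared lookup so every service emits the same codes.
SEVERITY = {"E1xx": "critical", "E2xx": "warning", "E3xx": "info"}

@dataclass
class StandardError:
    code: str                        # e.g. "E104"
    severity: str                    # derived from the code family
    category: str
    session_id: str
    user_id: Optional[str] = None    # resolved via the session-to-user mapping

def classify(code: str) -> str:
    """Map an error code to a severity via its family prefix (E1xx, E2xx...)."""
    return SEVERITY.get(code[:2] + "xx", "unknown")

def enrich(raw: dict, session_to_user: dict) -> StandardError:
    """Standardize a raw error record and attach the user behind the session."""
    return StandardError(
        code=raw["code"],
        severity=classify(raw["code"]),
        category=raw.get("category", "uncategorized"),
        session_id=raw["session"],
        user_id=session_to_user.get(raw["session"]),
    )

rec = enrich({"code": "E104", "session": "s-42"}, {"s-42": "u-7"})
```

The same shape extends naturally to the alerting step: a router can key thresholds off `severity` and `category` instead of free-text messages.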
Requirements:
* Minimum 3 years of hands-on experience in Splunk – Mandatory
* Proficient in SPL, data parsing, dashboards, macros, and performance tuning – Mandatory
* Experience working with event-driven systems (e.g., Kafka, REST APIs) – Mandatory
* Deep understanding of structured/semi-structured data (JSON, XML, logs) – Mandatory
* Strong scripting ability with Python or Bash
* Familiar with CI/CD processes using tools like Git and Jenkins
* Experience with data modeling, enrichment logic, and system integration
* Advantage: familiarity with log schema standards (e.g., ECS, CIM)
* Ability to work independently and deliver production-ready, scalable solutions – Mandatory
This position is open to all candidates.
 
Job ID: 8304508
11/09/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Engineer.
What You'll Do:

Shape the Future of Data - Join our mission to build the foundational pipelines and tools that power measurement, insights, and decision-making across our product, analytics, and leadership teams.
Develop the Platform Infrastructure - Build the core infrastructure that powers our data ecosystem, including the Kafka event system, DDL management with Terraform, internal data APIs on top of Databricks, and custom admin tools (e.g. Django-based interfaces).
Build Real-time Analytical Applications - Develop internal web applications to provide real-time visibility into platform behavior, operational metrics, and business KPIs integrating data engineering with user-facing insights.
Solve Meaningful Problems with the Right Tools - Tackle complex data challenges using modern technologies such as Spark, Kafka, Databricks, AWS, Airflow, and Python. Think creatively to make the hard things simple.
Own It End-to-End - Design, build, and scale our high-quality data platform by developing reliable and efficient data pipelines. Take ownership from concept to production and long-term maintenance.
Collaborate Cross-Functionally - Partner closely with backend engineers, data analysts, and data scientists to drive initiatives from both a platform and business perspective. Help translate ideas into robust data solutions.
Optimize for Analytics and Action - Design and deliver datasets in the right shape, location, and format to maximize usability and impact - whether that's through lakehouse tables, real-time streams, or analytics-optimized storage.
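As a minimal illustration of delivering datasets in "the right shape, location, and format", here is a sketch of Hive-style date partitioning, the directory layout most lakehouse engines can prune at query time (the table name, event fields, and JSONL format are invented for the example):

```python
import json
from datetime import datetime
from pathlib import Path
from tempfile import TemporaryDirectory

def partition_path(root: Path, table: str, event_ts: datetime) -> Path:
    """Build a Hive-style dt= partition path for an event's date."""
    return root / table / f"dt={event_ts.date().isoformat()}" / "part-0.jsonl"

def write_event(root: Path, table: str, event: dict) -> Path:
    """Append an event to the partition file matching its timestamp."""
    ts = datetime.fromisoformat(event["ts"])
    path = partition_path(root, table, ts)
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a") as f:
        f.write(json.dumps(event) + "\n")
    return path

with TemporaryDirectory() as d:
    p = write_event(Path(d), "page_views", {"ts": "2025-09-11T10:00:00", "user": "u1"})
    written = p.read_text()
```

A query filtered on `dt` then only reads the matching directories rather than scanning the whole table.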
You will report to the Data Engineering Team Lead and help shape a culture of technical excellence, ownership, and impact.
Requirements:
5+ years of hands-on experience as a Data Engineer, building and operating production-grade data systems.
3+ years of experience with Spark, SQL, Python, and orchestration tools like Airflow (or similar).
Degree in Computer Science, Engineering, or a related quantitative field.
Proven track record in designing and implementing high-scale ETL pipelines and real-time or batch data workflows.
Deep understanding of data lakehouse and warehouse architectures, dimensional modeling, and performance optimization.
Strong analytical thinking, debugging, and problem-solving skills in complex environments.
Familiarity with infrastructure as code, CI/CD pipelines, and building data-oriented microservices or APIs.
Enthusiasm for AI-driven developer tools such as Cursor.AI or GitHub Copilot.
This position is open to all candidates.
 
Job ID: 8343346
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an experienced and passionate Staff Data Engineer to join our Data Platform group in TLV as a Tech Lead. As the group's Tech Lead, you'll shape and implement the technical vision and architecture while staying hands-on across three specialized teams: Data Engineering Infra, Machine Learning Platform, and Data Warehouse Engineering, forming the backbone of our data ecosystem.

The group's mission is to build a state-of-the-art Data Platform that drives us toward becoming the most precise and efficient insurance company on the planet. By embracing Data Mesh principles, we create tools that empower teams to own their data while leveraging a robust, self-serve data infrastructure. This approach enables Data Scientists, Analysts, Backend Engineers, and other stakeholders to seamlessly access, analyze, and innovate with reliable, well-modeled, and queryable data, at scale.

In this role you'll:

Technically lead the group by shaping the architecture, guiding design decisions, and ensuring the technical excellence of the Data Platform's three teams.

Design and implement data solutions that address both applicative needs and data analysis requirements, creating scalable and efficient access to actionable insights.

Drive initiatives in Data Engineering Infra, including building robust ingestion layers, managing streaming ETLs, and guaranteeing data quality, compliance, and platform performance.

Develop and maintain the Data Warehouse, integrating data from various sources for optimized querying, analysis, and persistence, supporting informed decision-making. Leverage data modeling and transformations to structure, cleanse, and integrate data, enabling efficient retrieval and strategic insights.

Build and enhance the Machine Learning Platform, delivering infrastructure and tools that streamline the work of Data Scientists, enabling them to focus on developing models while benefiting from automation for production deployment, maintenance, and improvements. Support cutting-edge use cases like feature stores, real-time models, point-in-time (PIT) data retrieval, and telematics-based solutions.
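Point-in-time (PIT) data retrieval, mentioned above, can be sketched with an in-memory feature history (the entity IDs, timestamps, and values are invented; a real feature store backs this with offline tables):

```python
import bisect

# Feature history as (timestamp, value) pairs per entity, sorted by timestamp.
history = {
    "driver-1": [(100, 0.2), (200, 0.5), (300, 0.9)],
}

def value_as_of(entity: str, ts: int):
    """Return the latest feature value with timestamp <= ts.

    This is the point-in-time-correct lookup: training examples labeled at
    time ts must never see feature values written after ts (no leakage).
    """
    rows = history.get(entity, [])
    # bisect_right with an infinite tiebreaker finds the first row after ts.
    i = bisect.bisect_right(rows, (ts, float("inf")))
    return rows[i - 1][1] if i else None
```

For example, a label generated at t=250 trains against the value written at t=200, not the "future" value from t=300.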

Collaborate closely with other Staff Engineers across the company to align on cross-organizational initiatives and technical strategies.

Work seamlessly with Data Engineers, Data Scientists, Analysts, Backend Engineers, and Product Managers to deliver impactful solutions.

Share knowledge, mentor team members, and champion engineering standards and technical excellence across the organization.
Requirements:
8+ years of experience in data-related roles such as Data Engineer, Data Infrastructure Engineer, BI Engineer, or Machine Learning Platform Engineer, with significant experience in at least two of these areas.

A B.Sc. in Computer Science or a related technical field (or equivalent experience).

Extensive expertise in designing and implementing Data Lakes and Data Warehouses, including strong skills in data modeling and building scalable storage solutions.

Proven experience in building large-scale data infrastructures, including both batch processing and streaming pipelines.

A deep understanding of Machine Learning infrastructure, including tools and frameworks that enable Data Scientists to efficiently develop, deploy, and maintain models in production, an advantage.

Proficiency in Python, Pulumi/Terraform, Apache Spark, AWS, Kubernetes (K8s), and Kafka for building scalable, reliable, and high-performing data solutions.

Strong knowledge of databases, including SQL (schema design, query optimization) and NoSQL, with a solid understanding of their use cases.

Ability to work in an office environment a minimum of 3 days a week.

Enthusiasm about learning and adapting to the exciting world of AI: a commitment to exploring this field is a fundamental part of our culture.
This position is open to all candidates.
 
Job ID: 8358644
07/09/2025
Confidential company
Location: Netanya
Job Type: Full Time
We are a fast-growing global medical device company, developing and manufacturing innovative drug delivery and infusion solutions across the continuum of care - from the hospital to the home. We are looking for an excellent Data Engineer to join the winning team!
Job Description
We are looking to hire a highly skilled and experienced professional to fill the role of Data Engineer. In this role, you will design, build, and maintain the data infrastructure that powers analytics, product innovation, and business decision-making across the organization. You will work closely with data scientists, data analysts, software engineers, product managers, and other teams to ensure reliable, secure, and scalable access to data. We are seeking a detail-oriented professional with strong problem-solving skills, a passion for working with complex real-world data, and the drive to contribute to meaningful innovations in healthcare. We are looking for someone who is self-driven, brings a can-do spirit, and thrives in a collaborative, fast-paced environment where teamwork and clear communication are essential.
Job Responsibilities:
* Design, develop, and maintain scalable ETL/ELT pipelines for ingesting, processing, and storing data from multiple sources.
* Build and optimize data warehouses, data lakes, and other storage solutions to support analytics and machine learning use cases.
* Collaborate with data scientists and analysts to ensure datasets are clean, structured, and accessible for advanced modeling and reporting.
* Implement data quality monitoring, validation frameworks, and automated workflows to ensure reliability and accuracy.
* Integrate data from disparate systems into unified views that support business intelligence and operational efficiency.
* Ensure compliance with healthcare data regulations and implement security measures to safeguard company data and systems.
* Research and adopt best practices in data engineering, cloud-native architectures, and modern data tooling.
* Support the deployment of data-driven applications into production environments.
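The data quality monitoring and validation frameworks mentioned above can be sketched as a small rule-based validator (the column names and rules are invented for illustration; a regulated deployment would persist failures for audit):

```python
# Each column maps to a predicate that must hold for a record to be clean.
RULES = {
    "patient_id": lambda v: isinstance(v, str) and len(v) > 0,
    "dose_ml": lambda v: isinstance(v, (int, float)) and 0 < v <= 1000,
}

def validate(record: dict) -> list:
    """Return the names of columns that fail their rule (empty list = clean)."""
    return [col for col, ok in RULES.items() if not ok(record.get(col))]

clean = validate({"patient_id": "p-1", "dose_ml": 12.5})
dirty = validate({"patient_id": "", "dose_ml": -4})
```

Running the validator inside the pipeline lets bad records be quarantined before they reach the warehouse rather than discovered in a report.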
Requirements:
Job Requirements:
Must Have:
* 2-4 years of hands-on experience in data engineering or a related field.
* Proficiency in Python, SQL, and data pipeline frameworks (e.g., Airflow, dbt, Azure data Factory).
* Experience with data warehouse technologies (e.g., Snowflake, BigQuery, Azure Synapse).
* Familiarity with cloud platforms (preferably Azure).
* Knowledge of data modeling, database design, and performance optimization.
* Experience building APIs or integrating data across different applications.
* Self-learning ability and a can-do attitude.
Nice to Have / Advantage:
* Knowledge and hands-on experience with Elasticsearch for data indexing, search, and analytics.
* Familiarity with distributed data processing (e.g., Spark).
* Familiarity with event-driven architectures and streaming technologies (e.g., Kafka / RabbitMQ).
* Experience with NoSQL databases and engines (e.g., MongoDB, Redis).
* Experience with containerization and orchestration (e.g., Docker, Kubernetes).
* Background in healthcare or regulated environments (e.g., HIPAA, MDR, FDA).
Education Requirements:
* Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field - an advantage.
Language Skills:
* Fluent English - written and verbal.
This position is open to all candidates.
 
Job ID: 8335614
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Data Engineer, Product Analytics
As a Data Engineer, you will shape the future of people-facing and business-facing products we build across our entire family of applications (Facebook, Instagram, Messenger, WhatsApp, Reality Labs, Threads). Your technical skills and analytical mindset will be utilized designing and building some of the world's most extensive data sets, helping to craft experiences for billions of people and hundreds of millions of businesses worldwide.
In this role, you will collaborate with software engineering, data science, and product management teams to design and build scalable data solutions that optimize growth, strategy, and user experience for our 3 billion plus users, as well as our internal employee community.
You will be at the forefront of identifying and solving some of the most interesting data challenges at a scale few companies can match. By joining us, you will become part of a world-class data engineering community dedicated to skill development and career growth in data engineering and beyond.
Data Engineering: You will guide teams by building optimal data artifacts (including datasets and visualizations) to address key questions. You will refine our systems, design logging solutions, and create scalable data models. Ensuring data security and quality, and with a focus on efficiency, you will suggest architecture and development approaches and data management standards to address complex analytical problems.
Product leadership: You will use data to shape product development, identify new opportunities, and tackle upcoming challenges. You'll ensure our products add value for users and businesses, by prioritizing projects, and driving innovative solutions to respond to challenges or opportunities.
Communication and influence: You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations. You will build credibility through structure and clarity, and be a trusted strategic partner.
Data Engineer, Product Analytics Responsibilities
Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems
Create and contribute to frameworks that improve the efficacy of logging data, while working with data infrastructure to triage issues and resolve
Collaborate with engineers, product managers, and data scientists to understand data needs, representing key data insights visually in a meaningful way
Define and manage Service Level Agreements for all data sets in allocated areas of ownership
Determine and implement the security model based on privacy requirements, confirm safeguards are followed, address data quality issues, and evolve governance processes within allocated areas of ownership
Design, build, and launch collections of sophisticated data models and visualizations that support multiple use cases across different products or domains
Solve our most challenging data integration problems, utilizing optimal Extract, Transform, Load (ETL) patterns, frameworks, query techniques, sourcing from structured and unstructured data sources
Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts
Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts
Influence product and cross-functional teams to identify data opportunities to drive impact.
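The SLA-management responsibility above can be illustrated with a toy freshness check (the dataset names, SLA windows, and landing times are invented; a production version would read landing times from pipeline metadata):

```python
from datetime import datetime, timedelta

# Hypothetical per-dataset SLAs: maximum allowed staleness of the newest data.
SLAS = {
    "daily_metrics": timedelta(hours=26),
    "realtime_events": timedelta(minutes=15),
}

def sla_breaches(last_landed: dict, now: datetime) -> list:
    """Return datasets whose newest data is older than their freshness SLA."""
    return [name for name, sla in SLAS.items()
            if now - last_landed[name] > sla]

now = datetime(2025, 9, 11, 12, 0)
breaches = sla_breaches(
    {"daily_metrics": datetime(2025, 9, 10, 12, 0),     # 24h old: within SLA
     "realtime_events": datetime(2025, 9, 11, 11, 0)},  # 60m old: breach
    now,
)
```

A scheduler would run this check periodically and page the owning team on any breach.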
Requirements:
Minimum Qualifications
Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent
7+ years of experience where the primary responsibility involves working with data. This could include roles such as data analyst, data scientist, data engineer, or similar positions
7+ years of experience with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala, or others).
This position is open to all candidates.
 
Job ID: 8352021
21/09/2025
Confidential company
Job Type: Full Time
We're in search of an experienced and skilled Senior Data Engineer to join our growing data team. As part of our data team, you'll be at the forefront of crafting a groundbreaking solution that leverages cutting-edge technology to combat fraud. The ideal candidate will have a strong background in designing and implementing large-scale data solutions, with the potential to grow into a leadership role. This position requires a deep understanding of modern data architectures, cloud technologies, and the ability to drive technical initiatives that align with business objectives.

Our ultimate goal is to equip our clients with resilient safeguards against chargebacks, empowering them to safeguard their revenue and optimize their profitability. Join us on this thrilling mission to redefine the battle against fraud.

Your Arena:
Design, develop, and maintain scalable, robust data pipelines and ETL processes.
Architect and implement complex data models across various storage solutions.
Collaborate with R&D teams, data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality solutions.
Ensure data quality, consistency, security, and compliance across all data systems.
Play a key role in defining and implementing data strategies that drive business value.
Contribute to the continuous improvement of our data architecture and processes.
Champion and implement data engineering best practices across the R&D organization, serving as a technical expert and go-to resource for data-related questions and challenges.
Participate in and sometimes lead code reviews to maintain high coding standards.
Troubleshoot and resolve complex data-related issues in production environments.
Evaluate and recommend new technologies and methodologies to improve our data infrastructure.
Requirements:
What It Takes - Must haves:
5+ years of experience in data engineering, with strong proficiency in Python and software engineering principles - Must.
Extensive experience with AWS, GCP, Azure and cloud-native architectures - Must.
Deep knowledge of both relational (e.g., PostgreSQL) and NoSQL databases - Must.
Designing and implementing data warehouses and data lakes - Must
Strong understanding of data modeling techniques - Must.
Expertise in data manipulation libraries (e.g., Pandas) and big data processing frameworks - Must.
Experience with data validation tools such as Pydantic & Great Expectations - Must.
Proficiency in writing and maintaining unit tests (e.g., Pytest) and integration tests - Must.

Advantages:
Apache Iceberg - Experience building, managing and maintaining Iceberg lakehouse architecture with S3 storage and AWS Glue catalog - Strong Advantage.
Apache Spark - Proficiency in optimizing Spark jobs, understanding partitioning strategies, and leveraging core framework capabilities for large-scale data processing - Strong Advantage.
Modern data stack tools - DBT, DuckDB, Dagster or any other Data orchestration tool (e.g., Apache Airflow, Prefect) - Advantage.
Designing and developing backend systems, including- RESTful API design and implementation, microservices architecture, event-driven systems, RabbitMQ, Apache Kafka - Advantage.
Containerization technologies- Docker, Kubernetes, and IaC (e.g., Terraform) - Advantage.
Stream processing technologies (e.g., Apache Kafka, Apache Flink) - Advantage.
Understanding of compliance requirements (e.g., GDPR, CCPA) - Advantage
Experience mentoring junior engineers or leading small project teams.
Excellent communication skills with the ability to explain complex technical concepts to various audiences.
Demonstrated ability to work independently and lead technical initiatives
Relevant certifications in cloud platforms or data technologies.
This position is open to all candidates.
 
Job ID: 8353703
09/09/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We are seeking a Data Engineer to join our dynamic data team. In this role, you will design, build, and maintain robust data systems and infrastructure that support data collection, processing, and analysis. Your expertise will be crucial in developing scalable data pipelines, ensuring data quality, and collaborating with cross-functional teams to deliver actionable insights.

Key Responsibilities:
Design, develop, and maintain scalable ETL processes for data transformation and integration.
Build and manage data pipelines to support analytics and operational needs.
Ensure data accuracy, integrity, and consistency across various sources and systems.
Collaborate with data scientists and analysts to support AI model deployment and data-driven decision-making.
Optimize data storage solutions, including data lakehouses and databases, to enhance performance and scalability.
Monitor and troubleshoot data workflows to maintain system reliability.
Stay updated with emerging technologies and best practices in data engineering.
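A toy version of the ETL responsibilities above: extract raw rows, normalize types, deduplicate, and load into a queryable store (an in-memory SQLite table stands in for the warehouse here; the table and column names are invented):

```python
import sqlite3

raw = [
    {"id": "1", "amount": "10.5", "country": "il"},
    {"id": "1", "amount": "10.5", "country": "il"},   # duplicate to drop
    {"id": "2", "amount": "3", "country": "IL"},
]

def transform(rows):
    """Cast string fields to proper types, normalize country codes, dedupe on id."""
    seen, out = set(), []
    for r in rows:
        if r["id"] in seen:
            continue
        seen.add(r["id"])
        out.append((int(r["id"]), float(r["amount"]), r["country"].upper()))
    return out

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE txns (id INTEGER PRIMARY KEY, amount REAL, country TEXT)")
con.executemany("INSERT INTO txns VALUES (?, ?, ?)", transform(raw))
total = con.execute("SELECT SUM(amount) FROM txns").fetchone()[0]
```

The same extract/transform/load split is what the pipeline frameworks listed in the requirements (Airflow, Glue) orchestrate at scale.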

Please note that this role is on a hybrid model of 4 days/week in our Tel-Aviv office.
Requirements:
Requirements:
3+ years of experience in data engineering or a related role within a production environment.
Proficiency in Python and SQL
Experience with both relational (e.g., PostgreSQL) and NoSQL databases (e.g., MongoDB, Elasticsearch).
Familiarity with AWS big data tools and frameworks such as Glue, EMR, Kinesis, etc.
Experience with containerization tools like Docker and Kubernetes.
Strong understanding of data warehousing concepts and data modeling.
Excellent problem-solving skills and attention to detail.
Strong communication skills, with the ability to work collaboratively in a team environment.

Preferred Qualifications:
Experience with machine learning model deployment and MLOps practices.
Knowledge of data visualization tools and techniques.
Practical experience democratizing the company's data to enhance decision making.
Bachelor's degree in Computer Science, Engineering, or a related field.
This position is open to all candidates.
 
Job ID: 8340045
Confidential company
Job Type: Full Time
Required Data Engineer
As part of our Data Engineering team, you will not only build scalable data platforms but also directly enable portfolio growth by supporting new funding capabilities, loan sales and securitization, and improving cost efficiency through automated and trusted data flows that evolve our accounting processes.
Responsibilities:
Design and build data solutions that support our core business goals, from enabling capital market transactions (loan sales and securitization) to providing reliable insights for reducing the cost of capital.
Develop advanced data pipelines and analytics to support finance, accounting, and product growth initiatives.
Create ELT processes and SQL queries to bring data to the data warehouse and other data sources.
Develop data-driven finance products that accelerate funding capabilities and automate accounting reconciliations.
Own and evolve data lake pipelines, maintenance, schema management, and improvements.
Create new features from scratch, enhance existing features, and optimize existing functionality.
Collaborate with stakeholders across Finance, Product, Backend Engineering, and Data Science to align technical work with business outcomes.
Implement new tools and modern development approaches that improve both scalability and business agility.
Ensure adherence to coding best practices and development of reusable code.
Constantly monitor the data platform and make recommendations to enhance architecture, performance, and cost efficiency.
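The automated accounting reconciliation mentioned above can be sketched as a per-loan comparison of totals between two systems (the loan IDs, amounts, and tolerance are invented for illustration):

```python
from collections import defaultdict

def totals(rows):
    """Sum amounts per loan from a list of (loan_id, amount) rows."""
    acc = defaultdict(float)
    for loan_id, amount in rows:
        acc[loan_id] += amount
    return acc

def reconcile(ledger, bank, tolerance=0.01):
    """Return loan IDs whose totals disagree between the two systems."""
    l, b = totals(ledger), totals(bank)
    return sorted(loan for loan in set(l) | set(b)
                  if abs(l[loan] - b[loan]) > tolerance)

mismatches = reconcile(
    ledger=[("L1", 100.0), ("L2", 50.0), ("L2", 25.0)],
    bank=[("L1", 100.0), ("L2", 70.0)],
)
```

Run daily over warehouse tables, a check like this turns a manual accounting task into an automated exception report.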
Requirements:
4+ years of experience as a Data Engineer.
4+ years of Python and SQL experience.
4+ years of direct experience with SQL (Redshift/Snowflake), data modeling, data warehousing, and building ELT/ETL pipelines (DBT & Airflow preferred).
3+ years of experience in scalable data architecture, fault-tolerant ETL, and data quality monitoring in the cloud.
Hands-on experience with cloud environments (AWS preferred) and big data technologies (EMR, EC2, S3, Snowflake, Spark Streaming, Kafka, DBT).
Strong troubleshooting and debugging skills in large-scale systems.
Deep understanding of distributed data processing and tools such as Kafka, Spark, and Airflow.
Experience with design patterns, coding best practices, and data modeling.
Proficiency with Git and modern source control.
Basic Linux/Unix system administration skills.
Nice to Have:
Familiarity with fintech business processes (funding, securitization, loan servicing, accounting) - huge advantage.
BS/MS in Computer Science or related field.
Experience with NoSQL or large-scale DBs.
DevOps experience in AWS.
Microservices experience.
2+ years of experience in Spark and the broader Data Engineering ecosystem.
What Else:
Energetic and data-enthusiastic mindset.
Ability to translate complex technical work into business impact.
Analytical and detail-oriented.
Strong communication skills with both technical and business teams.
Self-motivated, fast learner, and team player.
This position is open to all candidates.
 
Job ID: 8367166
Confidential company
Job Type: Full Time
Required Data Infrastructure Engineer
What You'll Do:
Design, implement, and enhance robust and scalable infrastructure that enables efficient deployment, monitoring, and management of machine learning models in production. In this role, you will bridge the gap between research and production environments, streamline data and feature pipelines, optimize model serving, and ensure governance and reproducibility across our ML lifecycle.
Responsibilities:
Decouple data prep from model training to accelerate experimentation and deployment
Build efficient data workflows with versioning, lineage, and optimized resource use (e.g., Snowflake, Dask, Airflow)
Develop reproducible training pipelines with MLflow, supporting GPU and distributed training
Automate and standardize model deployment with pre-deployment testing (E2E, dark mode)
Maintain a model repository with traceability, governance, and consistent metadata
Monitor model performance, detect drift, and trigger alerts across the ML lifecycle
Enable model comparison with A/B testing and continuous validation
Support infrastructure for deploying LLMs, embeddings, and advanced ML use cases
Manage a unified feature store with history, drift detection, and centralized feature/label tracking
Establish a single source of truth for features across research and production.
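Drift detection, listed above, can be illustrated with a deliberately simple z-score style check (the threshold and sample values are invented; production monitors typically use PSI or KS tests over full distributions):

```python
from statistics import mean, stdev

def z_drift(train_values, live_values, threshold=3.0):
    """Flag drift when the live mean sits more than `threshold` training
    standard deviations away from the training mean."""
    mu, sigma = mean(train_values), stdev(train_values)
    return abs(mean(live_values) - mu) / sigma > threshold

stable = z_drift([10, 11, 9, 10, 10], [10, 10, 11])
drifted = z_drift([10, 11, 9, 10, 10], [25, 26, 24])
```

Wired into the monitoring described above, a `True` result would trigger an alert and, eventually, a retraining job.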
Requirements:
3+ years of experience as an MLOps, ML Infrastructure, or Software Engineer in ML-driven environments, preferably with PyTorch.
Strong proficiency in Python, SQL (leveraging platforms like Snowflake and RDS), and distributed computing frameworks (e.g., Dask, Spark) for processing large-scale data in formats like Parquet.
Hands-on experience with feature stores, key-value stores like Redis, MLflow (or similar tools), Kubernetes, Docker, cloud infrastructure (AWS, specifically S3 and EC2), and orchestration tools (Airflow).
Proven ability to build and maintain scalable and version-controlled data pipelines, including real-time streaming with tools like Kafka.
Experience in designing and deploying robust ML serving infrastructures with CI/CD automation.
Familiarity with monitoring tools and practices for ML systems, including drift detection and model performance evaluation.
Nice to Have:
Experience with GPU optimization frameworks and distributed training.
Familiarity with advanced ML deployments, including NLP and embedding models.
Knowledge of data versioning tools (e.g., DVC) and infrastructure-as-code practices.
Prior experience implementing structured A/B testing or dark mode deployments for ML models.
This position is open to all candidates.
 
Job ID: 8367169
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Required Analytics Engineer
Tel Aviv
Want to shape how data drives product decisions?
As an Analytics Engineer, you'll design the foundations of our data infrastructure, build robust and scalable data models, and empower teams across the organization with actionable insights that fuel our product direction.
As an Analytics Engineer, you will:
Design and implement robust data models to transform raw data into analytics-ready tables, enabling confident decision-making across product and business teams.
Own and maintain our dbt pipelines with a focus on scalability, modularity, and clear documentation.
Continuously evolve our data models to reflect changing business logic and product needs.
Build and maintain comprehensive testing infrastructure to ensure data accuracy and trust.
Monitor the health of our data pipelines, ensuring integrity in event streams and leading resolution of data issues.
Collaborate closely with analysts, data engineers, and product managers to align data architecture with business goals.
Guide the analytics code development process using Git and engineering best practices.
Create dashboards and reports in Tableau that turn insights into action.
Drive performance and cost optimization across our data stack, proactively improving scalability and reliability.
Requirements:
You should apply if you are:
A data professional with 4+ years of experience in analytics engineering, BI development, or similar data roles.
Highly skilled in SQL, with hands-on experience using Snowflake or similar cloud data warehouses.
Proficient in DBT for data transformation, modeling, and documentation.
Experienced with Tableau or similar BI tools for data visualization.
Familiar with CI/CD for data workflows, version control systems (e.g., Git), and testing frameworks.
A strong communicator who can collaborate effectively with both technical and non-technical stakeholders.
Holding a B.Sc. in Industrial Engineering, Computer Science, or a related technical field.
Passionate about translating complex data into clear, scalable insights that drive product innovation.
Bonus points if you have:
Experience with event instrumentation and user behavioral data.
Scripting ability in Python for automation and data processing.
Familiarity with modern data stack tools such as Airflow, Fivetran, Looker, or Segment.
This position is open to all candidates.
 
Job ID: 8361241
10/09/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Data Engineer
The opportunity
Join our dynamic Data & ML Engineering team in iAds and play a pivotal role in driving data solutions that empower data science, finance, analytics, and R&D teams. As an Experienced Data Engineer, you'll work with cutting-edge technologies to design scalable pipelines, ensure data quality, and process billions of data points into actionable insights.
Success Indicators:
In the short term, success means delivering reliable, high-performance data pipelines and ensuring data quality across the product. Long-term, you'll be instrumental in optimizing workflows, enabling self-serve analytics platforms, and supporting strategic decisions through impactful data solutions.
Impact:
Your work will directly fuel business decisions, improve data accessibility and reliability, and contribute to the team's ability to handle massive-scale data challenges. You'll help shape the future of data engineering within a global, fast-paced environment.
Benefits and Opportunities
You'll collaborate with talented, passionate teammates, work on exciting projects with cutting-edge technologies, and have opportunities for professional growth. Competitive compensation, comprehensive benefits, and an inclusive culture make this role a chance to thrive and make a global impact.
What you'll be doing
Designing and developing scalable data pipelines and ETL processes to process massive amounts of structured and unstructured data.
Collaborating with cross-functional teams (data science, finance, analytics, and R&D) to deliver actionable data solutions tailored to their needs.
Building and maintaining tools and frameworks to monitor and improve data quality across the product.
Providing tools and insights that empower product teams with real-time analytics and data-driven decision-making capabilities.
Optimizing data workflows and architectures for performance, scalability, and cost efficiency using cutting-edge technologies like Apache Spark and Flink.
Requirements:
4+ years of experience as a Data Engineer.
Expertise in designing and developing scalable data pipelines, ETL processes, and data architectures.
Proficiency in Python and SQL, with hands-on experience in big data technologies like Apache Spark and Hadoop.
Advanced knowledge of cloud platforms (AWS, Azure, or GCP) and their associated data services.
Experience working with Imply and Apache Druid for real-time analytics and query optimization.
Strong analytical skills and ability to quickly learn and adapt to new technologies and tools.
This position is open to all candidates.
 
Job ID: 8341692