Jobs » Data » Data Engineer
Confidential company
Location: Merkaz
Job Type: Full Time
abra R&D is looking for a Data Engineer to join the team and contribute to AI-related projects. The role involves handling large volumes of incoming data, performing deep analysis, and collaborating closely with Data Scientists. You will be responsible for designing and developing critical, diverse, and large-scale data pipelines in both cloud and on-premise environments.
Requirements:
* Minimum 5 years of experience as a Data Engineer – mandatory
* 5 years of experience working with object-oriented programming (OOP) languages – mandatory
* 5 years of hands-on experience with Python – mandatory
* Hands-on experience with Spark for large-scale data processing – mandatory
* At least 2 years of practical experience with AWS, including services such as Athena, Glue, Step Functions, EMR, Redshift, and RDS – mandatory
* Deep understanding of design, development, and optimization of complex solutions handling or processing large-scale data
* Familiarity with optimization techniques and working with data partitioning and formats such as Parquet, Avro, HDF5, Delta Lake
* Experience working with Docker, Linux, CI/CD tools, and Kubernetes
* Experience with data pipeline orchestration tools like Airflow or Kubeflow Bachelor’s degree in Computer Science, Engineering, Mathematics, or Statistics – mandatory
* Understanding of machine learning concepts and workflows
* Familiarity with GenAI solutions or prompt engineering – advantage
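The partitioning bullets above refer to the hive-style key=value directory layout that Spark uses when writing partitioned Parquet. A plain-Python sketch of that layout (no Spark involved; the records and field names are invented for illustration):

```python
from collections import defaultdict

def partition_records(records, keys):
    """Group records into hive-style partition paths (key=value/...),
    the directory layout Spark produces for partitioned Parquet output."""
    partitions = defaultdict(list)
    for rec in records:
        path = "/".join(f"{k}={rec[k]}" for k in keys)
        partitions[path].append(rec)
    return dict(partitions)

# Hypothetical events to be bucketed by date and country
events = [
    {"dt": "2024-01-01", "country": "IL", "value": 10},
    {"dt": "2024-01-01", "country": "US", "value": 7},
    {"dt": "2024-01-02", "country": "IL", "value": 3},
]

layout = partition_records(events, keys=["dt", "country"])
for path in sorted(layout):
    print(path)  # e.g. dt=2024-01-01/country=IL
```

Choosing partition keys this way lets query engines prune whole directories instead of scanning every file, which is the optimization the requirements allude to.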
This position is open to all candidates.
 
Job ID: 8260993

Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Engineer.
As a Senior Data Engineer, you'll be more than just a coder - you'll be the architect of our data ecosystem. We're looking for someone who can design scalable, future-proof data pipelines and connect the dots between DevOps, backend engineers, data scientists, and analysts.
You'll lead the design, build, and optimization of our data infrastructure, from real-time ingestion to supporting machine learning operations. Every choice you make will be data-driven and cost-conscious, ensuring efficiency and impact across the company.
Beyond engineering, you'll be a strategic partner and problem-solver, sometimes diving into advanced analysis or data science tasks. Your work will directly shape how we deliver innovative solutions and support our growth at scale.
Responsibilities:
Design and Build Data Pipelines: Architect, build, and maintain our end-to-end data pipeline infrastructure to ensure it is scalable, reliable, and efficient.
Optimize Data Infrastructure: Manage and improve the performance and cost-effectiveness of our data systems, with a specific focus on optimizing pipelines and usage within our Snowflake data warehouse. This includes implementing FinOps best practices to monitor, analyze, and control our data-related cloud costs.
Enable Machine Learning Operations (MLOps): Develop the foundational infrastructure to streamline the deployment, management, and monitoring of our machine learning models.
Support Data Quality: Optimize ETL processes to handle large volumes of data while ensuring data quality and integrity across all our data sources.
Collaborate and Support: Work closely with data analysts and data scientists to support complex analysis, build robust data models, and contribute to the development of data governance policies.
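The pipeline work described above is, at its core, dependency-ordered task execution: each step runs only after the steps it depends on. A minimal stand-in sketch using Python's stdlib `graphlib` (this is not Airflow's API, and the task names are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on,
# the same DAG model an orchestrator resolves before scheduling.
pipeline = {
    "ingest_events": set(),
    "load_to_warehouse": {"ingest_events"},
    "build_models": {"load_to_warehouse"},
    "quality_checks": {"load_to_warehouse"},
    "publish_dashboards": {"build_models", "quality_checks"},
}

# static_order() yields tasks so that every dependency precedes its dependents
order = list(TopologicalSorter(pipeline).static_order())
print(order)
```

An orchestrator like Airflow adds scheduling, retries, and monitoring on top, but the dependency resolution it performs is exactly this topological sort.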
Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field.
Experience: 5+ years of hands-on experience as a Data Engineer or in a similar role.
Data Expertise: Strong understanding of data warehousing concepts, including a deep familiarity with Snowflake.
Technical Skills:
Proficiency in Python and SQL.
Hands-on experience with workflow orchestration tools like Airflow.
Experience with real-time data streaming technologies like Kafka.
Familiarity with container orchestration using Kubernetes (K8s) and dependency management with Poetry.
Cloud Infrastructure: Proven experience with AWS cloud services (e.g., EC2, S3, RDS).
This position is open to all candidates.
 
Job ID: 8320416

Confidential company
Location: Ra'anana
Job Type: Full Time
The ideal candidate is not afraid of data in any form or scale, and is experienced with cloud services to ingest, stream, store, and manipulate data. The Data Engineer will support new system designs and migrate existing ones, working closely with solutions architects, project managers, and data scientists. The candidate must be self-directed, a fast learner, and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or re-designing our customers' data architecture to support their next generation of products, data initiatives, and machine learning systems.

Summary of Key Responsibilities:
To meet compliance and regulatory requirements, keep our customers' data separated and secure.
Design, build, and operate the infrastructure required for optimal data extraction, transformation, and loading from a wide variety of data sources using SQL, cloud migration tools, and big data technologies.
Optimize various RDBMS engines in the cloud and solve customers' security, performance, and operational problems.
Design, build, and operate large, complex data lakes that meet functional / non-functional business requirements.
Optimize various data types' ingestion, storage, processing, and retrieval, from near real-time events and IoT to unstructured data such as images, audio, video, documents, and in between.
Work with customers and internal stakeholders, including the Executive, Product, Data, Software Development, and Design teams, to assist with data-related technical issues and support their data infrastructure and business needs.
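The extract-transform-load flow described in these responsibilities can be sketched in miniature with Python's stdlib `sqlite3` standing in for a cloud RDBMS; the schema and rows here are invented:

```python
import sqlite3

# In-memory database as a stand-in for a managed cloud database
conn = sqlite3.connect(":memory:")

# Extract: raw landing table with messy, incomplete source data
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL, currency TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 100.0, "usd"), (2, 250.0, "USD"), (3, None, "eur")],
)

# Transform + load: normalize currency codes and drop incomplete rows
conn.execute("""
    CREATE TABLE clean_orders AS
    SELECT id, amount, UPPER(currency) AS currency
    FROM raw_orders
    WHERE amount IS NOT NULL
""")

rows = conn.execute(
    "SELECT id, amount, currency FROM clean_orders ORDER BY id"
).fetchall()
print(rows)
```

Real pipelines add staging, incremental loads, and orchestration, but the extract/clean/load shape is the same.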
Requirements:
5+ years of experience in a Data Engineer role in a cloud native ecosystem.
3+ years of experience in AWS Data Services (mandatory)
Bachelor's (Graduate preferred) degree in Computer Science, Mathematics, Informatics, Information Systems or another quantitative field.
Working experience with the following technologies/tools:
big data tools: Spark, ElasticSearch, Kafka, Kinesis etc.
Relational SQL and NoSQL databases, such as MySQL or Postgres and DynamoDB or Cassandra.
Functional and scripting languages: Python, Java, Scala, etc.
Advanced SQL.
Experience building and optimizing big data pipelines, architectures and data sets.
Working knowledge of message queuing, stream processing, and highly scalable big data stores.
Experience supporting and working with external customers in a dynamic environment.
Articulate with great communication and presentation skills.
Team player who can train as well as learn from others.
Fluency in Hebrew and English is essential.
This position is open to all candidates.
 
Job ID: 8311509

02/09/2025
Confidential company
Job Type: Full Time
Who We're Looking For - The Dream Maker
We're in search of an experienced and skilled Senior Data Engineer to join our growing data team. As part of our data team, you'll be at the forefront of crafting a groundbreaking solution that leverages cutting-edge technology to combat fraud. The ideal candidate will have a strong background in designing and implementing large-scale data solutions, with the potential to grow into a leadership role. This position requires a deep understanding of modern data architectures, cloud technologies, and the ability to drive technical initiatives that align with business objectives. Our ultimate goal is to equip our clients with resilient safeguards against chargebacks, empowering them to safeguard their revenue and optimize their profitability. Join us on this thrilling mission to redefine the battle against fraud.
Your Arena:
* Design, develop, and maintain scalable, robust data pipelines and ETL processes
* Architect and implement complex data models across various storage solutions
* Collaborate with R&D teams, data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality solutions
* Ensure data quality, consistency, security, and compliance across all data systems
* Play a key role in defining and implementing data strategies that drive business value
* Contribute to the continuous improvement of our data architecture and processes
* Champion and implement data engineering best practices across the R&D organization, serving as a technical expert and go-to resource for data-related questions and challenges
* Participate in and sometimes lead code reviews to maintain high coding standards
* Troubleshoot and resolve complex data-related issues in production environments
* Evaluate and recommend new technologies and methodologies to improve our data infrastructure
Requirements:
What It Takes - Must haves:
* 5+ years of experience in data engineering, with strong proficiency in Python and software engineering principles - Must
* Extensive experience with AWS, GCP, Azure and cloud-native architectures - Must
* Deep knowledge of both relational (e.g., PostgreSQL) and NoSQL databases - Must
* Designing and implementing data warehouses and data lakes - Must
* Strong understanding of data modeling techniques - Must
* Expertise in data manipulation libraries (e.g., Pandas) and big data processing frameworks - Must
* Experience with data validation tools such as Pydantic & Great Expectations - Must
* Proficiency in writing and maintaining unit tests (e.g., Pytest) and integration tests - Must
Advantages:
* Apache Iceberg - Experience building, managing and maintaining Iceberg lakehouse architecture with S3 storage and AWS Glue catalog - Strong Advantage
* Apache Spark - Proficiency in optimizing Spark jobs, understanding partitioning strategies, and leveraging core framework capabilities for large-scale data processing - Strong Advantage
* Modern data stack tools - DBT, DuckDB, Dagster or any other Data orchestration tool (e.g., Apache Airflow, Prefect) - Advantage
* Designing and developing backend systems, including- RESTful API design and implementation, microservices architecture, event-driven systems, RabbitMQ, Apache Kafka - Advantage
* Containerization technologies- Docker, Kubernetes, and IaC (e.g., Terraform) - Advantage
* Stream processing technologies (e.g., Apache Kafka, Apache Flink) - Advantage
* Understanding of compliance requirements (e.g., GDPR, CCPA) - Advantage
* Experience mentoring junior engineers or leading small project teams
* Excellent communication skills with the ability to explain complex technical concepts to various audiences
* Demonstrated ability to work independently and lead technical initiatives
* Relevant certifications in cloud platforms or data technologies
Our Story: Chargeflow is a leading force in fintech innovation.
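The validation requirement above (Pydantic, Great Expectations) centers on rejecting malformed records at ingestion time. A stdlib-only sketch in that spirit, with `dataclasses` standing in for a Pydantic model and hypothetical field names:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    """Validated record - a stdlib stand-in for a Pydantic model.
    Field names and the currency whitelist are illustrative only."""
    tx_id: str
    amount: float
    currency: str

    def __post_init__(self):
        # Reject malformed records as early as possible, like a
        # Pydantic validator or a Great Expectations expectation would
        if not self.tx_id:
            raise ValueError("tx_id must be non-empty")
        if self.amount <= 0:
            raise ValueError("amount must be positive")
        if self.currency not in {"USD", "EUR", "ILS"}:
            raise ValueError(f"unsupported currency: {self.currency}")

ok = Transaction(tx_id="t-1", amount=19.9, currency="USD")
print(ok)

try:
    Transaction(tx_id="t-2", amount=-5, currency="USD")
except ValueError as exc:
    print("rejected:", exc)
```

Pydantic adds type coercion and schema export on top of this pattern, but the core idea is the same: a record either constructs cleanly or fails loudly.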
This position is open to all candidates.
 
Job ID: 8329908

Location: Tel Aviv-Yafo
Job Type: Full Time
As a Senior Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects: ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.

You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.



Key Job Responsibilities and Duties:

Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.

Dealing with massive textual sources to train GenAI foundation models.

Solving issues with data and data pipelines, prioritizing based on customer impact.

End-to-end ownership of data quality in our core datasets and data pipelines.

Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.

Providing tools that improve Data Quality company-wide, specifically for ML scientists.

Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.

Acting as an intermediary for problems, with both technical and non-technical audiences.

Promote and drive impactful and innovative engineering solutions

Advancing technical, behavioral, and interpersonal competence via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation.

Collaborate with multidisciplinary teams: Collaborate with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions. Provide technical guidance and mentorship to junior team members.
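Preparing massive textual sources for foundation-model training, as described above, usually starts with deduplication of the corpus. A minimal exact-match sketch using content hashing (stdlib only; the corpus lines are invented):

```python
import hashlib

def dedup_texts(texts):
    """Exact-match deduplication by normalized content hash - a common
    first pass when preparing large text corpora for model training."""
    seen, unique = set(), []
    for text in texts:
        # Normalize lightly so trivial variants hash identically
        digest = hashlib.sha256(text.strip().lower().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(text)
    return unique

corpus = [
    "Great hotel near the beach.",
    "great hotel near the beach.",   # duplicate differing only in case
    "Quiet room, friendly staff.",
]
print(dedup_texts(corpus))
```

Production pipelines extend this with fuzzy methods such as MinHash for near-duplicates, but hash-based exact dedup is the cheap first filter.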

Req ID: 20718
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.

Minimum of 6 years of experience as a Data Engineer or a similar role, with a consistent record of successfully delivering ML/Data solutions.

You have built production data pipelines in the cloud, setting up data-lake and serverless solutions; you have hands-on experience with schema design and data modeling, and with working alongside ML scientists and ML engineers to provide production-level ML solutions.

You have experience designing systems end-to-end and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.).

Strong programming skills in languages such as Python and Java.

Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.

Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems.

Experience with Data Warehousing and ETL/ELT pipelines

Experience in data processing for large-scale language models like GPT, BERT, or similar architectures - an advantage.

Proficiency in data manipulation, analysis, and visualization using tools like NumPy, pandas, and matplotlib - an advantage.

Experience with experimental design, A/B testing, and evaluation metrics for ML models - an advantage.

Experience of working on products that impact a large customer base - an advantage.

Excellent communication in English; written and spoken.
This position is open to all candidates.
 
Job ID: 8350838

10/09/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Data Engineer to join our team and help advance our Apps solution. Our product is designed to provide detailed and accurate insights into Apps Analytics, such as traffic estimation, revenue analysis, and app characterization. The role involves constructing and maintaining scalable data pipelines, developing and integrating machine learning models, and ensuring data integrity and efficiency. You will work closely with a diverse team of scientists, engineers, analysts, and collaborate with business and product stakeholders.
Key Responsibilities:
Develop and implement complex, innovative big data ML algorithms for new features, working in collaboration with data scientists and analysts.
Optimize and maintain end-to-end data pipelines using big data technologies to ensure efficiency and performance.
Monitor data pipelines to ensure data integrity and promptly troubleshoot any issues that arise.
Requirements:
Bachelor's degree in Computer Science or equivalent practical experience.
At least 3 years of experience in data engineering or related roles.
Experience with big data machine learning - a must.
Proficiency in Python - a must; Scala is a plus.
Experience with Big Data technologies including Spark, EMR and Airflow.
Experience with containerization/orchestration platforms such as Docker and Kubernetes.
Familiarity with distributed computing on the cloud (such as AWS or GCP).
Strong problem-solving skills and ability to learn new technologies quickly.
Being goal-driven and efficient.
Excellent communication skills and ability to work independently and in a team.
This position is open to all candidates.
 
Job ID: 8341640

Location:
Job Type: Full Time
abra R&D is looking for a Data Engineer for real-time big data systems, to join the team and design and deploy scalable, standardized, and maintainable data pipelines that enable efficient logging, error handling, and real-time data enrichment. The role requires strong ownership of both implementation and performance.
The role includes:
* Optimize Splunk queries and search performance using best practices
* Build and manage data ingestion pipelines from sources like Kafka, APIs, and log streams
* Standardize error structures (error codes, severity levels, categories)
* Create mappings between identifiers such as session ID, user ID, and service/module components
* Implement real-time data enrichment processes using APIs, databases, or lookups
* Set up alerting configurations with thresholds, modules, and logic-based routing
* Collaborate with developers, DevOps, and monitoring teams to unify logging conventions
* Document flows and ensure traceability across environments
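The error-structure standardization and enrichment described above can be sketched outside Splunk as a plain normalization step over raw log lines; the lookup tables, error codes, and field names here are all hypothetical (in the role itself these would live as Splunk lookups and SPL):

```python
import json

# Hypothetical lookup tables standing in for Splunk lookup files
SEVERITY_BY_CODE = {"E1001": "critical", "E2002": "warning"}
CATEGORY_BY_MODULE = {"auth": "security", "billing": "payments"}

def standardize(raw_log: str) -> dict:
    """Normalize a raw JSON log line into a fixed error structure
    (error code, severity, category), enriched from lookup tables."""
    event = json.loads(raw_log)
    code = event.get("err", "E0000")  # default code for unclassified errors
    return {
        "error_code": code,
        "severity": SEVERITY_BY_CODE.get(code, "info"),
        "category": CATEGORY_BY_MODULE.get(event.get("module"), "uncategorized"),
        "session_id": event.get("sid"),  # identifier mapping for traceability
    }

line = '{"err": "E1001", "module": "auth", "sid": "abc-123"}'
print(standardize(line))
```

Fixing the output schema up front is what makes downstream alert routing by severity and category possible, regardless of which service emitted the log.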
Requirements:
* Minimum 3 years of hands-on experience in Splunk – Mandatory
* Proficient in SPL, data parsing, dashboards, macros, and performance tuning – Mandatory
* Experience working with event-driven systems (e.g., Kafka, REST APIs) – Mandatory
* Deep understanding of structured/semi-structured data (JSON, XML, logs) – Mandatory
* Strong scripting ability with Python or Bash
* Familiar with CI/CD processes using tools like Git and Jenkins
* Experience with data modeling, enrichment logic, and system integration
* Advantage: familiarity with log schema standards (e.g., ECS, CIM)
* Ability to work independently and deliver production-ready, scalable solutions – Mandatory
This position is open to all candidates.
 
Job ID: 8304508

Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and English Speakers
We are growing and are looking for a Senior Data Infra Engineer who values personal and career growth, teamwork, and winning!
What your day will look like:
Design, plan, and build all aspects of the platform's data, machine learning (ML) pipelines, and infrastructure.
Build and optimize an AWS-based Data Lake using best practices in cloud architecture, data partitioning, metadata management, and security to support enterprise-scale data operations.
Collaborate with engineers, data analysts, data scientists, and other stakeholders to understand data needs.
Solve challenging data integration problems, utilizing optimal ETL/ELT patterns, frameworks, query techniques, and sourcing from structured and unstructured data sources.
Lead end-to-end data projects from infrastructure design to production monitoring.
Requirements:
Have 5+ years of hands-on experience in designing and maintaining big data pipelines across on-premises or hybrid cloud environments, with proficiency in both SQL and NoSQL databases within a SaaS framework.
Proficient in one or more programming languages: Python, Scala, Java, or Go.
Experienced with software engineering best practices and automation, including testing, code reviews, design documentation, and CI/CD.
Experienced in building and designing ML/AI-driven production infrastructures and pipelines.
Experienced in developing data pipelines and maintaining data lakes on AWS - big advantage.
Familiar with technologies such as Kafka, Snowflake, MongoDB, Airflow, Docker, Kubernetes (K8S), and Terraform - advantage.
Bachelor's degree in Computer Science or equivalent experience.
Strong communication skills, fluent in English, both written and verbal.
A great team player with a can-do approach.
This position is open to all candidates.
 
Job ID: 8313520

7 days ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Engineer.
What You'll Do:

Shape the Future of Data - Join our mission to build the foundational pipelines and tools that power measurement, insights, and decision-making across our product, analytics, and leadership teams.
Develop the Platform Infrastructure - Build the core infrastructure that powers our data ecosystem including the Kafka events-system, DDL management with Terraform, internal data APIs on top of Databricks, and custom admin tools (e.g. Django-based interfaces).
Build Real-time Analytical Applications - Develop internal web applications to provide real-time visibility into platform behavior, operational metrics, and business KPIs, integrating data engineering with user-facing insights.
Solve Meaningful Problems with the Right Tools - Tackle complex data challenges using modern technologies such as Spark, Kafka, Databricks, AWS, Airflow, and Python. Think creatively to make the hard things simple.
Own It End-to-End - Design, build, and scale our high-quality data platform by developing reliable and efficient data pipelines. Take ownership from concept to production and long-term maintenance.
Collaborate Cross-Functionally - Partner closely with backend engineers, data analysts, and data scientists to drive initiatives from both a platform and business perspective. Help translate ideas into robust data solutions.
Optimize for Analytics and Action - Design and deliver datasets in the right shape, location, and format to maximize usability and impact - whether that's through lakehouse tables, real-time streams, or analytics-optimized storage.
You will report to the Data Engineering Team Lead and help shape a culture of technical excellence, ownership, and impact.
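Real-time visibility into operational metrics, as described above, typically starts with windowed aggregation over the event stream. A tumbling-window sketch in plain Python (the timestamps and payloads are invented; stream frameworks like Spark or Flink implement the same bucketing at scale):

```python
from collections import Counter

def tumbling_window_counts(events, window_seconds=60):
    """Count events per fixed (tumbling) time window - the basic
    aggregation behind real-time operational dashboards."""
    counts = Counter()
    for ts, _payload in events:
        # Floor each timestamp to the start of its window
        window_start = (ts // window_seconds) * window_seconds
        counts[window_start] += 1
    return dict(counts)

# (epoch_seconds, payload) pairs from a hypothetical event stream
stream = [(0, "a"), (30, "b"), (59, "c"), (60, "d"), (125, "e")]
print(tumbling_window_counts(stream))
```

Tumbling windows partition time into non-overlapping buckets, which keeps each event in exactly one aggregate; sliding windows trade that simplicity for smoother metrics.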
Requirements:
5+ years of hands-on experience as a Data Engineer, building and operating production-grade data systems.
3+ years of experience with Spark, SQL, Python, and orchestration tools like Airflow (or similar).
Degree in Computer Science, Engineering, or a related quantitative field.
Proven track record in designing and implementing high-scale ETL pipelines and real-time or batch data workflows.
Deep understanding of data lakehouse and warehouse architectures, dimensional modeling, and performance optimization.
Strong analytical thinking, debugging, and problem-solving skills in complex environments.
Familiarity with infrastructure as code, CI/CD pipelines, and building data-oriented microservices or APIs.
Enthusiasm for AI-driven developer tools such as Cursor.AI or GitHub Copilot.
This position is open to all candidates.
 
Job ID: 8343346

Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for our first dedicated Data Engineer: a self-motivated and proactive professional with a strong can-do attitude and a sense of ownership. This role involves taking responsibility across all data domains within the company, working closely with our analytics and development teams to build and maintain the data infrastructure that supports business needs. This position is ideal for someone ready to independently lead data engineering efforts and make a meaningful impact.

Responsibilities:
Design, develop, and maintain scalable data pipelines and ETL workflows using tools such as Python, dbt, and Airflow.
Architect and optimize our data warehouse to support efficient analytics, reporting, and business intelligence at scale.
Model and structure data from multiple internal and external sources (such as Salesforce, Jira, Mixpanel, etc.) into clean, reliable, and analytics-ready datasets.
Collaborate closely with our systems architect, analytics, and development teams to translate business requirements into robust and efficient technical data solutions.
Monitor and optimize pipeline performance to ensure data completeness and scalability.
Serve as a key partner and subject-matter expert on all data-related topics within the team.
Implement data quality checks, anomaly detection and validation processes to ensure data reliability.
Requirements:
3+ years of hands-on experience as a Data Engineer or in a similar role.
Expert-level SQL skills, capable of performing complex table transformations and designing efficient data workflows.
Proficiency in Python for data processing and scripting tasks.
Experience building and maintaining ELT/ETL pipelines using dbt.
Hands-on experience with orchestration tools such as Airflow.
Deep understanding of data warehouse concepts and methodologies, including data modeling.
Self-motivated, capable of working autonomously while effectively collaborating with stakeholders to deliver end-to-end solutions.
B.Sc. in Information Systems Engineering, Computer Science, Industrial Engineering, or a related field.
This position is open to all candidates.
 
Job ID: 8304059

09/09/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We are seeking a Data Engineer to join our dynamic data team. In this role, you will design, build, and maintain robust data systems and infrastructure that support data collection, processing, and analysis. Your expertise will be crucial in developing scalable data pipelines, ensuring data quality, and collaborating with cross-functional teams to deliver actionable insights.

Key Responsibilities:
Design, develop, and maintain scalable ETL processes for data transformation and integration.
Build and manage data pipelines to support analytics and operational needs.
Ensure data accuracy, integrity, and consistency across various sources and systems.
Collaborate with data scientists and analysts to support AI model deployment and data-driven decision-making.
Optimize data storage solutions, including data lakehouses and databases, to enhance performance and scalability.
Monitor and troubleshoot data workflows to maintain system reliability.
Stay updated with emerging technologies and best practices in data engineering.

Please note that this role is on a hybrid model of 4 days/week in our Tel-Aviv office.
Requirements:
3+ years of experience in data engineering or a related role within a production environment.
Proficiency in Python and SQL.
Experience with both relational (e.g., PostgreSQL) and NoSQL databases (e.g., MongoDB, Elasticsearch).
Familiarity with big data AWS tools and frameworks such as Glue, EMR, Kinesis etc.
Experience with containerization tools like Docker and Kubernetes.
Strong understanding of data warehousing concepts and data modeling.
Excellent problem-solving skills and attention to detail.
Strong communication skills, with the ability to work collaboratively in a team environment.

Preferred Qualifications:
Experience with machine learning model deployment and MLOps practices.
Knowledge of data visualization tools and techniques.
Practical experience democratizing the company's data to enhance decision-making.
Bachelor's degree in Computer Science, Engineering, or a related field.
This position is open to all candidates.
 
Job ID: 8340045