Data Engineer

01/10/2025
Location: Netanya
Job Type: Full Time
We are a global pioneer of RADAR systems for active military protection, counter-drone applications, critical infrastructure protection, and border surveillance.
We're seeking a Data Tech Lead to drive technical excellence in data engineering and analytics. As the go-to expert, you'll set the technical direction, optimize data pipelines, and tackle programming challenges: closing knowledge gaps, solving data-related questions, and streamlining operations. You'll also design scalable architectures, manage ETL workflows, and enhance data processing efficiency.
Key Responsibilities:
Oversee the technical aspects of data projects by making architectural and design decisions.
Streamline existing operations and implement improvements in collaboration with the team.
Guide team members in technical matters and supervise system modifications.
Conduct code reviews for data analysts, BI analysts, and data engineers.
Bridge technical knowledge gaps within the data team, answering critical product-related questions.
Requirements:
5+ years of experience in data engineering & Big Data Analytics.
Data Engineering & Automation: Building robust, production-ready data pipelines using SQL, Python, and PySpark, while managing ETL workflows and orchestrating data processes with Airflow (unmanaged) and Databricks.
Big Data Analysis & Distributed Processing: Expertise in Databricks (Spark, etc.) for handling large-scale data analytics with optimized efficiency.
Cloud Infrastructure: Proficient in Cloud Services (preferably Azure) for data storage and processing.
Data Architecture: Expertise in data architecture to ensure best practices in scaling, cost efficiency, and performance optimization.
If you're passionate about building scalable data solutions and thrive in a fast-paced environment, we'd love to hear from you!
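This listing asks for orchestrating data processes with Airflow. As a rough, stdlib-only illustration of the core idea — tasks plus dependencies resolved into a valid run order — here is a minimal sketch; the task names are hypothetical and this is not the Airflow API, which expresses the same graph with `DAG` objects and operator `>>` wiring:

```python
from graphlib import TopologicalSorter

# Hypothetical ETL task graph: each task maps to the set of tasks it
# depends on, mirroring how an Airflow DAG wires extract -> transform -> load.
etl_dag = {
    "extract_orders": set(),
    "extract_users": set(),
    "transform_joined": {"extract_orders", "extract_users"},
    "load_warehouse": {"transform_joined"},
}

def execution_order(dag):
    """Return one valid run order for the task graph (what a scheduler computes)."""
    return list(TopologicalSorter(dag).static_order())

order = execution_order(etl_dag)  # extracts first, load last
```

A real scheduler also handles retries, backfills, and parallelism; the topological sort is only the dependency-resolution core.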
This position is open to all candidates.
 
Job ID: 8363365
30/09/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
We are international multi-cloud experts, using the power of the cloud for smart digital transformation. With 5 sites across 4 continents, 450+ experts, 1,000+ customers, and 30+ years of proven experience, our mission is to deliver the best multi-cloud service to our customers, accelerate their business, and help them grow. To help our customers stay on top of their game, our teams are constantly developing new strategies and tools that improve cloud performance, spending, visibility, control, and automation. Our cloud experts make any digital transformation a quick, smart, and easy process.

What You'll Do:
Design, build, and maintain data pipelines and infrastructure.
Develop and implement data quality checks and monitoring processes.
Work with engineers to integrate data into our systems and applications.
Collaborate with scientists and analysts to understand their data needs.
Requirements:
3 years of experience as a Data Engineer or a related role.
Experience with big data technologies such as Hadoop, Spark, or Elastic Search.
Proven experience in designing, building, and maintaining data pipelines and infrastructure.
Service in Unit 8200 or another technology unit - an advantage.
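This role includes developing data quality checks and monitoring processes. A toy, stdlib-only sketch of what such a check computes (null rates and duplicate keys over a batch of rows — the field names here are invented for illustration):

```python
def quality_report(rows, key, required_fields):
    """Toy data-quality check: null rate per required field plus duplicate-key count."""
    n = len(rows)
    null_rates = {
        f: sum(1 for r in rows if r.get(f) is None) / n
        for f in required_fields
    }
    seen, dupes = set(), 0
    for r in rows:
        k = r[key]
        dupes += k in seen  # True counts as 1
        seen.add(k)
    return {"rows": n, "null_rates": null_rates, "duplicate_keys": dupes}

rows = [
    {"id": 1, "email": "a@x.com"},
    {"id": 2, "email": None},
    {"id": 2, "email": "b@x.com"},  # duplicate id
]
report = quality_report(rows, key="id", required_fields=["email"])
```

In production these checks typically run inside the pipeline and feed alerting rather than returning a dict.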
This position is open to all candidates.
 
Job ID: 8362911
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Required Analytics Engineer
Tel Aviv
Want to shape how data drives product decisions?
As an Analytics Engineer, you'll design the foundations of our data infrastructure, build robust and scalable data models, and empower teams across the organization with actionable insights that fuel our product direction.
As an Analytics Engineer, you will:
Design and implement robust data models to transform raw data into analytics-ready tables, enabling confident decision-making across product and business teams.
Own and maintain our dbt pipelines with a focus on scalability, modularity, and clear documentation.
Continuously evolve our data models to reflect changing business logic and product needs.
Build and maintain comprehensive testing infrastructure to ensure data accuracy and trust.
Monitor the health of our data pipelines, ensuring integrity in event streams and leading resolution of data issues.
Collaborate closely with analysts, data engineers, and product managers to align data architecture with business goals.
Guide the analytics code development process using Git and engineering best practices.
Create dashboards and reports in Tableau that turn insights into action.
Drive performance and cost optimization across our data stack, proactively improving scalability and reliability.
Requirements:
You should apply if you are:
A data professional with 4+ years of experience in analytics engineering, BI development, or similar data roles.
Highly skilled in SQL, with hands-on experience using Snowflake or similar cloud data warehouses.
Proficient in dbt for data transformation, modeling, and documentation.
Experienced with Tableau or similar BI tools for data visualization.
Familiar with CI/CD for data workflows, version control systems (e.g., Git), and testing frameworks.
A strong communicator who can collaborate effectively with both technical and non-technical stakeholders.
Holding a B.Sc. in Industrial Engineering, Computer Science, or a related technical field.
Passionate about translating complex data into clear, scalable insights that drive product innovation.
Bonus points if you have:
Experience with event instrumentation and user behavioral data.
Scripting ability in Python for automation and data processing.
Familiarity with modern data stack tools such as Airflow, Fivetran, Looker, or Segment.
This position is open to all candidates.
 
Job ID: 8361241
Location: Herzliya
Job Type: Full Time
Does building the next generation of AI/ML platforms excite you?
Do Big Data challenges and open-source innovation speak your language?
Join our Aegis team
Aegis is an end-to-end Big Data AI/ML platform built on top of Linode. It bridges the gap between research and production, enabling teams to innovate faster and deliver ML products more efficiently. We leverage open-source tools and AI accelerators to avoid reinventing the wheel, integrating across platforms where it matters most.
Make a difference in your own way
You'll join our growing Engineering group, working hands-on to shape a powerful, flexible ML platform that supports real-world, large-scale data and AI workflows. You'll be a key contributor to an environment that spans Kubernetes, Spark, MLFlow, JupyterHub, and PyTorch, and that pushes the boundaries of performance and collaboration.
In this role you will be:
Designing and implementing scalable data and ML pipelines using Spark, MLFlow, and Kubernetes
Building platform components in Python or Scala to connect research and production environments
Integrating with orchestration tools such as Argo Workflows (or equivalents)
Supporting hybrid data environments across S3/ADLS and cloud/on-prem compute
Collaborating with researchers and MLOps engineers to deliver reproducible and high-performance workflows.
Requirements:
To be successful in this role you will:
Have 4+ years of experience in backend or data platform development
Be proficient in Python or Scala
Have experience with at least one of: Spark, MLFlow, or Kubernetes
Understand principles of Big Data processing and distributed systems
Bring familiarity with the ML lifecycle and related tools (Feature Stores, model tracking, etc.)
Have experience with Argo Workflows or similar orchestration tools
Be motivated, curious, and thrive in a collaborative and high-ownership environment.
This position is open to all candidates.
 
Job ID: 8360954
Location: Herzliya
Job Type: Full Time
We are seeking a seasoned and strategic Head of BI to lead our Business Intelligence group.
In this pivotal role, you will be responsible for overseeing the full BI stack, from data infrastructure and pipelines to analytics and reporting, while managing a team of data engineers and analysts.
You will drive the design and execution of our data strategy, ensure delivery of trusted insights across the organization, and lead the team responsible for building and maintaining our data platform and analytics capabilities. This is a hands-on leadership role requiring a balance of technical depth, managerial experience, and strategic thinking.
What You'll Be Doing:
Lead, mentor, and scale a cross-functional team of BI Analysts and Data Engineers.
Own and evolve our data warehouse architecture, ensuring integrity, scalability, and performance (Snowflake preferred).
Oversee the design and development of ETL/ELT pipelines and semantic data models using tools such as dbt, Python, and Airflow.
Collaborate with senior stakeholders to align business goals with data priorities, translating requirements into scalable solutions.
Define and implement best practices across the entire BI lifecycle: data ingestion, transformation, modeling, visualization, and governance.
Deliver robust, actionable dashboards and insights that drive key business decisions and KPIs.
Requirements:
8+ years of experience in Business Intelligence, Data Engineering, or Analytics roles, including 4+ years in a leadership capacity.
Proven ability to manage hybrid teams of engineers and analysts with a track record of high-impact project delivery.
Deep technical expertise in SQL, dbt, Python, and modern data platforms (Snowflake, BigQuery, Redshift).
Hands-on experience building data pipelines, warehouse architecture, and analytical models at scale.
Strong stakeholder engagement skills with the ability to translate business needs into data products.
Effective communicator with a strong sense of ownership, clarity, and strategic vision.
BA/BSc in Computer Science, Engineering, Industrial Engineering, or a related field
Nice to have:
Experience with BI tools like Looker, Power BI, Tableau, or Metabase.
Exposure to AI/ML-driven analytics or predictive modeling workflows.
This position is open to all candidates.
 
Job ID: 8359489
Location: Ramat Gan
Job Type: Full Time
We're looking for a hands-on Individual Contributor Data Engineer to design, build, and operate large-scale data products. You'll own mission-critical pipelines and services, balancing pre-computation with on-demand execution to deliver complex, business-critical insights with the right cost, latency, and reliability.
RESPONSIBILITIES:
Design and run Spark data pipelines, orchestrated with Airflow, governed with Unity Catalog.
Build scalable batch and on-demand data products, aiming for the sweet spot between pre-compute and on-demand for complex logic - owning SLAs/SLOs, cost, and performance.
Implement robust data quality, lineage, and observability across pipelines.
Contribute to the architecture and scaling of our Export Center for off-platform report generation and delivery.
Partner with Product, Analytics, and Backend to turn requirements into resilient data systems.
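The "sweet spot between pre-compute and on-demand" trade-off in the responsibilities above can be sketched with plain memoization: hot results are computed up front, and on-demand requests either hit the cache or pay the compute cost once. A stdlib-only illustration (segment names and the report function are hypothetical):

```python
from functools import lru_cache

CALLS = {"count": 0}  # instrument how often the expensive path actually runs

@lru_cache(maxsize=None)
def expensive_insight(segment: str) -> str:
    """Stand-in for a heavy on-demand aggregation; cached after first computation."""
    CALLS["count"] += 1
    return f"report for {segment}"

# Pre-compute the hot segments up front (the "batch" side of the trade-off)...
for seg in ("enterprise", "smb"):
    expensive_insight(seg)

# ...then on-demand requests for those segments are served from cache,
# while cold segments pay the compute cost exactly once.
expensive_insight("enterprise")
expensive_insight("longtail")
```

In a real system the cache would be a materialized table or key-value store with TTLs and invalidation, not an in-process `lru_cache`, but the cost/latency reasoning is the same.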
Requirements:
BSc degree in Computer Science or an equivalent
5+ years of professional Backend/Data-Engineering experience
2+ years of Data-Engineering experience
Production experience with Apache Spark, Airflow, Databricks, and Unity Catalog.
Strong SQL and one of Python/Scala; solid data modeling and performance tuning chops.
Proven track record building large-scale (multi-team, multi-tenant) data pipelines and services.
Pragmatic approach to cost/latency trade-offs, caching, and storage formats.
Experience shipping reporting/exports pipelines and integrating with downstream delivery channels.
IC mindset: you lead through design, code, and collaboration (no direct reports).
OTHER REQUIREMENTS:
Delta Lake, query optimization, and workload management experience.
Observability stacks (e.g., metrics, logging, data quality frameworks).
GCP or another major cloud provider experience.
Terraform IaC experience.
This position is open to all candidates.
 
Job ID: 8358829
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an experienced and passionate Staff Data Engineer to join our Data Platform group in TLV as a Tech Lead. As the group's Tech Lead, you'll shape and implement the technical vision and architecture while staying hands-on across three specialized teams: Data Engineering Infra, Machine Learning Platform, and Data Warehouse Engineering, forming the backbone of our data ecosystem.

The group's mission is to build a state-of-the-art Data Platform that drives us toward becoming the most precise and efficient insurance company on the planet. By embracing Data Mesh principles, we create tools that empower teams to own their data while leveraging a robust, self-serve data infrastructure. This approach enables Data Scientists, Analysts, Backend Engineers, and other stakeholders to seamlessly access, analyze, and innovate with reliable, well-modeled, and queryable data, at scale.

In this role you'll:

Technically lead the group by shaping the architecture, guiding design decisions, and ensuring the technical excellence of the Data Platforms three teams.

Design and implement data solutions that address both applicative needs and data analysis requirements, creating scalable and efficient access to actionable insights.

Drive initiatives in Data Engineering Infra, including building robust ingestion layers, managing streaming ETLs, and guaranteeing data quality, compliance, and platform performance.

Develop and maintain the Data Warehouse, integrating data from various sources for optimized querying, analysis, and persistence, supporting informed decision-making. Leverage data modeling and transformations to structure, cleanse, and integrate data, enabling efficient retrieval and strategic insights.

Build and enhance the Machine Learning Platform, delivering infrastructure and tools that streamline the work of Data Scientists, enabling them to focus on developing models while benefiting from automation for production deployment, maintenance, and improvements. Support cutting-edge use cases like feature stores, real-time models, point-in-time (PIT) data retrieval, and telematics-based solutions.

Collaborate closely with other Staff Engineers across the organization to align on cross-organizational initiatives and technical strategies.

Work seamlessly with Data Engineers, Data Scientists, Analysts, Backend Engineers, and Product Managers to deliver impactful solutions.

Share knowledge, mentor team members, and champion engineering standards and technical excellence across the organization.
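Among the ML-platform use cases above is point-in-time (PIT) data retrieval: training must only see feature values that were known as of a given timestamp, or the model leaks future information. A minimal, stdlib-only sketch of an as-of lookup (the feature and its values are invented for illustration):

```python
from bisect import bisect_right

def as_of(history, ts):
    """Point-in-time lookup: latest value whose timestamp is <= ts.

    `history` is a list of (timestamp, value) pairs sorted by timestamp,
    a toy stand-in for a feature store's time-travel query.
    """
    times = [t for t, _ in history]
    i = bisect_right(times, ts)
    return history[i - 1][1] if i else None

mileage = [(1, 100), (5, 180), (9, 260)]  # (day, odometer_km)

# A model trained on day 6 must only see the day-5 reading,
# never the day-9 one recorded later.
value_on_day_6 = as_of(mileage, 6)
```

Feature stores implement the same semantics with indexed event tables and as-of joins rather than an in-memory binary search.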
Requirements:
8+ years of experience in data-related roles such as Data Engineer, Data Infrastructure Engineer, BI Engineer, or Machine Learning Platform Engineer, with significant experience in at least two of these areas.

A B.Sc. in Computer Science or a related technical field (or equivalent experience).

Extensive expertise in designing and implementing Data Lakes and Data Warehouses, including strong skills in data modeling and building scalable storage solutions.

Proven experience in building large-scale data infrastructures, including both batch processing and streaming pipelines.

A deep understanding of Machine Learning infrastructure, including tools and frameworks that enable Data Scientists to efficiently develop, deploy, and maintain models in production, an advantage.

Proficiency in Python, Pulumi/Terraform, Apache Spark, AWS, Kubernetes (K8s), and Kafka for building scalable, reliable, and high-performing data solutions.

Strong knowledge of databases, including SQL (schema design, query optimization) and NoSQL, with a solid understanding of their use cases.

Ability to work in an office environment a minimum of 3 days a week.

Enthusiasm about learning and adapting to the exciting world of AI; a commitment to exploring this field is a fundamental part of our culture.
This position is open to all candidates.
 
Job ID: 8358644
28/09/2025
Location: Petah Tikva
Job Type: Full Time
We are looking for a skilled Data Engineer. This role requires 5+ years of experience in developing data solutions. This is a full-time position based in Petah Tikva.
Key Responsibilities:
* Development of data-driven technological solutions based on business needs.
* Work with ETL tools, data infrastructure, integration processes, and data cleansing.
* Collaboration with analysts, product managers, and business stakeholders.
* Research and implementation of new technologies to improve existing processes.
* Analysis of business requirements and translating them into technological models.
Requirements:
* 5+ years of experience in developing data solutions - required.
* Proficiency in Python - required.
* Experience working with databases (SQL) - required.
* Business acumen and the ability to translate business needs into technological solutions.
This position is open to all candidates.
 
Job ID: 8358149
Location: Tel Aviv-Yafo
Job Type: Full Time
As part of the Data Infrastructure group, you'll help build Lemonade's data platform for our growing stack of products, customers, and microservices.

We ingest our data from our operational DBs, telematics devices, and more, working with several data types (both structured and unstructured). Our challenge is to build tools and infrastructure that empower other teams, leveraging data-mesh concepts.

In this role you'll:
Help build Lemonade's data platform, designing and implementing data solutions for all application requirements in a distributed microservices environment.

Build data-platform ingestion layers using streaming ETLs and Change Data Capture.

Implement pipelines and scheduling infrastructures.

Ensure compliance, data-quality monitoring, and data governance on Lemonade's data platform.

Implement large-scale batch and streaming pipelines with data processing frameworks.

Collaborate with other Data Engineers, Developers, BI Engineers, ML Engineers, Data Scientists, Analysts and Product managers.

Share knowledge with other team members and promote engineering standards.
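The ingestion work above centers on Change Data Capture. Real CDC tails the database's write-ahead log (e.g., via Debezium), but the emitted event shapes can be illustrated with a stdlib-only snapshot diff; keys and row contents here are invented:

```python
def diff_snapshots(before, after):
    """Toy change-data-capture: turn two keyed snapshots into insert/update/delete events."""
    events = []
    for k, row in after.items():
        if k not in before:
            events.append(("insert", k, row))
        elif before[k] != row:
            events.append(("update", k, row))
    for k in before:
        if k not in after:
            events.append(("delete", k, before[k]))
    return events

before = {1: {"email": "a@x.com"}, 2: {"email": "b@x.com"}}
after = {1: {"email": "a@y.com"}, 3: {"email": "c@x.com"}}
events = diff_snapshots(before, after)
```

Log-based CDC avoids the full-table scans this diff implies and preserves event ordering, which is why it is preferred for streaming ETL.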
Requirements:
5+ years of prior experience as a data engineer or data infra engineer.

B.S. in Computer Science or equivalent field of study.

Knowledge of databases (SQL, NoSQL).

Proven success in building large-scale data infrastructures such as Change Data Capture, and leveraging open source solutions such as Airflow & DBT, building large-scale streaming pipelines, and building customer data platforms.

Experience with Python, Pulumi/Terraform, Apache Spark, Snowflake, AWS, K8s, and Kafka.

Ability to work in an office environment a minimum of 3 days a week.

Enthusiasm about learning and adapting to the exciting world of AI; a commitment to exploring this field is a fundamental part of our culture.
This position is open to all candidates.
 
Job ID: 8354756
Location: Tel Aviv-Yafo
Job Type: Full Time and Part Time
We're looking for an experienced Data Engineer to join our Data Warehouse team in TLV.

In this role, you will play a pivotal part in the Data Platform organization, leading the design, development, and maintenance of our data warehouse. In your day-to-day, you'll work on data models and backend BI solutions that empower stakeholders across the company and contribute to informed decision-making processes, all while leveraging your extensive experience in business intelligence.

This is an excellent opportunity to be part of establishing Lemonade's state-of-the-art data stack, implementing cutting-edge technologies in a cloud environment.

In this role you'll:

Lead the design and development of scalable and efficient data warehouse and BI solutions that align with organizational goals and requirements.

Utilize advanced data modeling techniques to create robust data structures supporting reporting and analytics needs.

Implement ETL/ELT processes to assist in the extraction, transformation, and loading of data from various sources into the semantic layer.

Develop processes to enforce schema validation, cover anomaly detection, and monitor data completeness and freshness.

Identify and address performance bottlenecks within our data warehouse, optimize queries and processes, and enhance data retrieval efficiency.

Implement best practices for data warehouse and database performance tuning.

Conduct thorough testing of data applications and implement robust validation processes.

Collaborate with Data Infra Engineers, Developers, ML Platform Engineers, Data Scientists, Analysts, and Product Managers.
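One of the duties above is monitoring data freshness. The usual pattern (as in dbt source-freshness checks) compares the age of the newest loaded row against warn/error thresholds; a minimal sketch with invented thresholds:

```python
from datetime import datetime, timedelta, timezone

def freshness_status(last_loaded_at, now, warn_after, error_after):
    """Source-freshness check: age of the newest row vs. warn/error thresholds."""
    age = now - last_loaded_at
    if age > error_after:
        return "error"
    if age > warn_after:
        return "warn"
    return "pass"

now = datetime(2025, 10, 1, 12, 0, tzinfo=timezone.utc)
status = freshness_status(
    last_loaded_at=now - timedelta(hours=7),  # newest row is 7h old
    now=now,
    warn_after=timedelta(hours=6),
    error_after=timedelta(hours=24),
)
```

In practice `last_loaded_at` comes from a `MAX(loaded_at)` query against the source table and the status feeds an alerting channel.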
Requirements:
3+ years of experience as a BI Engineer or Data Engineer.

Proficiency in data modeling, ELT development, and DWH methodologies.

SQL expertise and experience working with Snowflake or similar technologies.

Prior experience working with dbt.

Experience with Python and software development, an advantage.

Excellent communication and collaboration skills.

Ability to work in an office environment a minimum of 3 days a week.
This position is open to all candidates.
 
Job ID: 8354743
21/09/2025
Job Type: Full Time
We're in search of an experienced and skilled Senior Data Engineer to join our growing data team. As part of our data team, you'll be at the forefront of crafting a groundbreaking solution that leverages cutting-edge technology to combat fraud. The ideal candidate will have a strong background in designing and implementing large-scale data solutions, with the potential to grow into a leadership role. This position requires a deep understanding of modern data architectures, cloud technologies, and the ability to drive technical initiatives that align with business objectives.

Our ultimate goal is to equip our clients with resilient safeguards against chargebacks, empowering them to safeguard their revenue and optimize their profitability. Join us on this thrilling mission to redefine the battle against fraud.

Your Arena:
Design, develop, and maintain scalable, robust data pipelines and ETL processes.
Architect and implement complex data models across various storage solutions.
Collaborate with R&D teams, data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality solutions.
Ensure data quality, consistency, security, and compliance across all data systems.
Play a key role in defining and implementing data strategies that drive business value.
Contribute to the continuous improvement of our data architecture and processes.
Champion and implement data engineering best practices across the R&D organization, serving as a technical expert and go-to resource for data-related questions and challenges.
Participate in and sometimes lead code reviews to maintain high coding standards.
Troubleshoot and resolve complex data-related issues in production environments.
Evaluate and recommend new technologies and methodologies to improve our data infrastructure.
Requirements:
What It Takes - Must haves:
5+ years of experience in data engineering, with specific, strong proficiency in Python & software engineering principles - Must.
Extensive experience with AWS, GCP, Azure and cloud-native architectures - Must.
Deep knowledge of both relational (e.g., PostgreSQL) and NoSQL databases - Must.
Designing and implementing data warehouses and data lakes - Must.
Strong understanding of data modeling techniques - Must.
Expertise in data manipulation libraries (e.g., Pandas) and big data processing frameworks - Must.
Experience with data validation tools such as Pydantic & Great Expectations - Must.
Proficiency in writing and maintaining unit tests (e.g., Pytest) and integration tests - Must.
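The must-haves above call for data validation in the style of Pydantic or Great Expectations: reject malformed records at the pipeline boundary instead of deep inside a job. A stdlib-only stand-in using a dataclass (the `Transaction` schema and its rules are invented for illustration, not those libraries' APIs):

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    tx_id: str
    amount_cents: int
    currency: str

    def __post_init__(self):
        # Validation rules run at construction time, Pydantic-style.
        if not self.tx_id:
            raise ValueError("tx_id must be non-empty")
        if self.amount_cents < 0:
            raise ValueError("amount_cents must be >= 0")
        if len(self.currency) != 3:
            raise ValueError("currency must be a 3-letter code")

def validate_batch(raw_rows):
    """Split a raw batch into parsed records and per-row errors."""
    good, bad = [], []
    for row in raw_rows:
        try:
            good.append(Transaction(**row))
        except (TypeError, ValueError) as exc:
            bad.append((row, str(exc)))
    return good, bad

good, bad = validate_batch([
    {"tx_id": "t1", "amount_cents": 500, "currency": "USD"},
    {"tx_id": "", "amount_cents": -1, "currency": "USD"},  # fails validation
])
```

Pydantic adds type coercion and declarative constraints, and Great Expectations adds suite-level reporting; the quarantine-the-bad-rows pattern is the shared idea.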

Advantages:
Apache Iceberg - Experience building, managing and maintaining Iceberg lakehouse architecture with S3 storage and AWS Glue catalog - Strong Advantage.
Apache Spark - Proficiency in optimizing Spark jobs, understanding partitioning strategies, and leveraging core framework capabilities for large-scale data processing - Strong Advantage.
Modern data stack tools - dbt, DuckDB, Dagster, or any other data orchestration tool (e.g., Apache Airflow, Prefect) - Advantage.
Designing and developing backend systems, including RESTful API design and implementation, microservices architecture, event-driven systems, RabbitMQ, and Apache Kafka - Advantage.
Containerization technologies- Docker, Kubernetes, and IaC (e.g., Terraform) - Advantage.
Stream processing technologies (e.g., Apache Kafka, Apache Flink) - Advantage.
Understanding of compliance requirements (e.g., GDPR, CCPA) - Advantage.
Experience mentoring junior engineers or leading small project teams.
Excellent communication skills with the ability to explain complex technical concepts to various audiences.
Demonstrated ability to work independently and lead technical initiatives
Relevant certifications in cloud platforms or data technologies.
This position is open to all candidates.
 
Job ID: 8353703
17/09/2025
Location: Ra'anana
Job Type: Full Time
The Data & AI Division within Microsoft Activity at abra is hiring a Data Engineer. In this role, you will be a key contributor, building and maintaining advanced data infrastructure that supports critical projects across the organization. You’ll work closely with IT and development teams to design, implement, and optimize scalable data solutions. What you’ll do:
* Develop and maintain ETL/ELT pipelines from multiple data sources.
* Work with SQL Server, SSIS, Azure Data Services (ADF, Synapse, Data Lake), and other cloud platforms.
* Design, build, and optimize Data Warehouses (DWH) and data models.
* Collaborate with teams across IT and development to deliver reliable and scalable data solutions.
Requirements:
What we’re looking for:
* Up to 3 years of experience as a Data Engineer or similar role.
* Experience with cloud platforms such as Azure, GCP, AWS, or similar.
* Strong experience with relational databases (SQL Server, T-SQL), ETL tools (SSIS, ADF), Data Warehousing (DWH), and data modeling.
* Knowledge of scripting/programming languages such as Python, PowerShell, or .NET is a plus.
* Understanding of data governance, data quality, and performance optimization.
* Analytical mindset and team collaboration skills.
This position is open to all candidates.
 
Job ID: 8100781