Data Engineer Job Listings

Posted 4 days ago
Confidential Company (Job ID 8505898)
Location: Merkaz
Job Type: Full Time
We are looking for a Data Engineer to join the iTero data team.
Key Responsibilities:
Collect and arrange data from various sources
Automate workflows and improve performance of existing data processes
Implement data quality checks, monitoring, and alerting to ensure accuracy and reliability
Optimize performance and costs of our data platform
Ensure compliance with data governance, security, and privacy standards
Requirements:
4+ years of industry experience as a data engineer
Hands-on experience with Spark
Data-driven, with an understanding of business processes and logic
Experience coding in Python or Scala
Experience working with different ETL / ELT tools
Advantages:
Knowledge of the Databricks platform and services
Strong SQL skills
Familiarity with the AWS environment and services
Familiarity with CI/CD processes
This position is open to all candidates.
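As a hedged illustration of the kind of work this listing describes (collecting data, transforming it with Spark, and writing curated output), here is a minimal PySpark batch sketch. All paths, table names, and columns are hypothetical, not the company's.

```python
# A minimal sketch of a batch ETL job: read raw data, clean it, aggregate,
# and write a curated output. Paths and columns are invented for the example.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

# Collect data from a (hypothetical) raw source.
orders = spark.read.parquet("s3://raw-bucket/orders/")

# Arrange/clean: drop malformed rows, normalize types.
clean = (
    orders
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_date", F.to_date("created_at"))
)

# Aggregate into a curated daily rollup.
daily = clean.groupBy("order_date").agg(
    F.count("*").alias("orders"),
    F.sum("amount").alias("revenue"),
)

daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://curated-bucket/orders_daily/"
)
```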
 
Confidential Company (Job ID 8509718)
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We're seeking an outstanding and passionate Data Platform Engineer to join our growing R&D team.

You will work in an energetic startup environment following Agile concepts and methodologies. Joining the company at this unique and exciting stage in our growth journey creates an exceptional opportunity to take part in shaping our data infrastructure at the forefront of Fintech and AI.

What you'll do:

Design, build, and maintain scalable data pipelines and ETL processes for our financial data platform.

Develop and optimize data infrastructure to support real-time analytics and reporting.

Implement data governance, security, and privacy controls to ensure data quality and compliance.

Create and maintain documentation for data platforms and processes.

Collaborate with data scientists and analysts to deliver actionable insights to our customers.

Troubleshoot and resolve data infrastructure issues efficiently.

Monitor system performance and implement optimizations.

Stay current with emerging technologies and implement innovative solutions.
Requirements:

3+ years of experience in data engineering or platform engineering roles.

Strong programming skills in Python and SQL.

Experience with orchestration platforms like Airflow/Dagster/Temporal.

Experience with MPPs like Snowflake/Redshift/Databricks.

Hands-on experience with cloud platforms (AWS) and their data services.

Understanding of data modeling, data warehousing, and data lake concepts.

Ability to optimize data infrastructure for performance and reliability.

Experience working with containerization (Docker) in Kubernetes environments.

Familiarity with CI/CD concepts.

Fluent in English, both written and verbal.

And it would be great if you have (optional):

Experience with big data processing frameworks (Apache Spark, Hadoop).

Experience with stream processing technologies (Flink, Kafka, Kinesis).

Knowledge of infrastructure as code (Terraform).

Experience building analytics platforms.

Experience building clickstream pipelines.

Familiarity with machine learning workflows and MLOps.

Experience working in a startup environment or fintech industry.
This position is open to all candidates.
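Since this listing calls out orchestration platforms like Airflow, a minimal sketch of a scheduled pipeline there may help readers gauge the skill. The DAG id, schedule, and placeholder callables are assumptions; the `schedule` argument assumes Airflow 2.4+, while older versions use `schedule_interval`.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder callables standing in for real pipeline steps.
def extract():
    ...

def transform():
    ...

def load():
    ...

with DAG(
    dag_id="fintech_daily_etl",      # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Declare the dependency chain: extract, then transform, then load.
    t_extract >> t_transform >> t_load
```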
 
Confidential Company (Job ID 8473165)
Location:
Job Type: Full Time
We are looking for a Data Engineer.
What you'll do:
Design, build, and optimize large-scale data pipelines and workflows for both batch and real-time processing.
Architect and maintain Airflow-based orchestration frameworks to manage complex data dependencies and data movement.
Develop high-quality, maintainable data transformation and integration processes across diverse data sources and domains.
Lead the design and implementation of scalable, cloud-based data infrastructure ensuring reliability, performance, and cost efficiency.
Drive data modeling and data architecture practices to ensure consistency, reusability, and quality across systems.
Collaborate closely with Product, R&D, BizDev, and Data Science teams to define data requirements, integrations, and delivery models.
Own the technical roadmap for key data initiatives, from design to production deployment.
Requirements:
6+ years of experience as a Data Engineer working on large-scale, production-grade systems.
Proven experience architecting and implementing data pipelines and workflows in Airflow - must be hands-on and design-level proficient.
Strong experience with real-time or streaming data processing (Kafka, Event Hubs, Kinesis, or similar).
Advanced proficiency in Python for data processing and automation.
Strong SQL skills and deep understanding of data modeling, ETL/ELT frameworks, and DWH methodologies.
Experience with cloud-based data ecosystems (Azure, AWS, or GCP) and related services (e.g., Snowflake, BigQuery, Redshift).
Experience with Docker, Kubernetes, and modern CI/CD practices.
Excellent communication and collaboration skills with experience working across multiple stakeholders and business units.
A proactive, ownership-driven approach with the ability to lead complex projects end-to-end.
This position is open to all candidates.
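For the streaming requirement (Kafka, Event Hubs, Kinesis, or similar), here is a minimal kafka-python consumer sketch; the topic, brokers, group id, and message fields are hypothetical.

```python
# A minimal sketch of real-time consumption with kafka-python.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders.events",                       # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # hypothetical brokers
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="orders-enricher",
)

for message in consumer:
    event = message.value
    # In a real pipeline this is where enrichment/routing would happen;
    # here we just surface the event for illustration.
    print(event["order_id"], event.get("status"))
```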
 
Posted 13/01/2026
Confidential Company (Job ID 8500308)
Location: Netanya
Job Type: Full Time
DRS RADA is a leading defense high-tech company specializing in radar systems development. We are seeking an experienced Data Engineer to join our data engineering team. In this role, you will play a crucial part in designing, developing, and maintaining scalable data pipelines and infrastructure to support our AI department. This is an opportunity to work with cutting-edge technologies in a fast-paced production environment, driving impactful, data-driven solutions for the business.
Key Responsibilities:
* Design, develop, and optimize ETL/ELT pipelines for large-scale data processing.
* Work with a modern data stack, including Databricks (Spark, SQL), Apache Airflow, and Azure services.
* Troubleshoot and optimize queries and jobs for performance improvements.
* Implement best practices for data governance, security, and monitoring.
* Stay updated with industry trends and emerging technologies in data engineering.
If you're passionate about building scalable data solutions and thrive in a fast-paced environment, we'd love to hear from you!
Requirements:
* 4+ years of experience in data engineering or related fields.
* Proficiency in Python for data processing and automation (mandatory).
* Deep understanding of Apache Spark and Databricks for big data processing (mandatory).
* Experience with Git (mandatory).
* Expertise in Apache Airflow for workflow orchestration.
* Familiarity with cloud-based environments, particularly Azure.
* Advanced proficiency in SQL and query optimization.
* Familiarity with data modeling, ETL/ELT principles, and performance tuning.
* Knowledge of CI/CD and containerization (Docker).
* An enthusiastic, fast-learning, team-oriented, and motivated individual who loves working with data.
This position is open to all candidates.
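As a small, hedged illustration of the data governance and monitoring bullet above, here is a toy PySpark quality gate; the table name, column, and thresholds are invented for the example, not the company's rules.

```python
# A minimal data-quality gate: simple row-count and null-rate assertions
# before a table is published downstream. Names and thresholds are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_gate").getOrCreate()
df = spark.read.table("silver.telemetry")   # hypothetical table

total = df.count()
null_ids = df.filter(F.col("device_id").isNull()).count()

# Fail the job loudly rather than publish bad data.
assert total > 0, "telemetry table is empty"
assert null_ids / total < 0.01, f"too many null device_ids: {null_ids}/{total}"
```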
 
Confidential Company (Job ID 8481603)
Job Type: Full Time
We use cutting-edge innovations in financial technology to bring leading data and features that allow individuals to be qualified instantly, making purchases at the point-of-sale fast, fair and easy for consumers from all walks of life.
As part of our Data Engineering team, you will not only build scalable data platforms but also directly enable portfolio growth by supporting new funding capabilities, loan sales and securitization, and improving cost efficiency through automated and trusted data flows that evolve our accounting processes.
Responsibilities
Design and build data solutions that support our company's core business goals, from enabling capital market transactions (loan sales and securitization) to providing reliable insights for reducing the cost of capital.
Develop advanced data pipelines and analytics to support finance, accounting, and product growth initiatives.
Create ELT processes and SQL queries to bring data to the data warehouse and other data sources.
Develop data-driven finance products that accelerate funding capabilities and automate accounting reconciliations.
Own and evolve data lake pipelines, maintenance, schema management, and improvements.
Create new features from scratch, enhance existing features, and optimize existing functionality.
Collaborate with stakeholders across Finance, Product, Backend Engineering, and Data Science to align technical work with business outcomes.
Implement new tools and modern development approaches that improve both scalability and business agility.
Ensure adherence to coding best practices and development of reusable code.
Constantly monitor the data platform and make recommendations to enhance architecture, performance, and cost efficiency.
Requirements:
4+ years of experience as a Data Engineer.
4+ years of Python and SQL experience.
4+ years of direct experience with SQL (Redshift/Snowflake), data modeling, data warehousing, and building ELT/ETL pipelines (DBT & Airflow preferred).
3+ years of experience in scalable data architecture, fault-tolerant ETL, and data quality monitoring in the cloud.
Hands-on experience with cloud environments (AWS preferred) and big data technologies (EMR, EC2, S3, Snowflake, Spark Streaming, Kafka, DBT).
Strong troubleshooting and debugging skills in large-scale systems.
Deep understanding of distributed data processing and tools such as Kafka, Spark, and Airflow.
Experience with design patterns, coding best practices, and data modeling.
Proficiency with Git and modern source control.
Basic Linux/Unix system administration skills.
Nice to Have
Familiarity with fintech business processes (funding, securitization, loan servicing, accounting) - a huge advantage.
BS/MS in Computer Science or related field.
Experience with NoSQL or large-scale DBs.
DevOps experience in AWS.
Microservices experience.
2+ years of experience in Spark and the broader Data Engineering ecosystem.
This position is open to all candidates.
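To make the ELT bullet concrete, here is a minimal sketch that loads staged files into a warehouse and then transforms them in SQL, using the Snowflake Python connector as one plausible choice. Credentials, the stage, file format, and table names are all hypothetical.

```python
# A minimal ELT sketch: load staged files into a landing table (E+L), then
# transform inside the warehouse with SQL (the "T" of ELT). All identifiers
# and connection details are invented for illustration.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...", warehouse="ETL_WH"
)
cur = conn.cursor()

# E+L: copy raw files from a (hypothetical) stage into a landing table.
cur.execute(
    "COPY INTO raw.loans FROM @raw_stage/loans/ FILE_FORMAT = (TYPE = CSV)"
)

# T: aggregate in-warehouse into an analytics table.
cur.execute("""
    CREATE OR REPLACE TABLE analytics.loan_daily AS
    SELECT origination_date, COUNT(*) AS loans, SUM(principal) AS volume
    FROM raw.loans
    GROUP BY origination_date
""")
conn.close()
```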
 
Job ID 8509784
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a DevOps Engineer (Data Platform Group).
Main responsibilities:
Data Architecture Direction: Provide strategic direction for our data architecture, selecting the appropriate components for various tasks. Collaborate on requirements and make final decisions on system design and implementation.
Project Management: Manage end-to-end execution of high-performance, large-scale data-driven projects, including design, implementation, and ongoing maintenance.
Cost Optimization: Monitor and optimize cloud costs associated with data infrastructure and processes.
Efficiency and Reliability: Design and build monitoring tools to ensure the efficiency, reliability, and performance of data processes and systems.
DevOps Integration: Implement and manage DevOps practices to streamline development and operations, focusing on infrastructure automation, continuous integration/continuous deployment (CI/CD) pipelines, containerization, orchestration, and infrastructure as code. Ensure scalable, reliable, and efficient deployment processes.
Our stack: Azure, GCP, Kubernetes, ArgoCD, Jenkins, Databricks, Snowflake, Airflow, RDBMS, Spark, Kafka, Micro-Services, bash, Python, SQL.
Requirements:
5+ Years of Experience: Demonstrated experience as a DevOps professional with a strong focus on big data environments, or as a Data Engineer with strong DevOps skills.
Data Components Management: Experience managing and designing data infrastructure, such as Snowflake, PostgreSQL, Kafka, Aerospike, and Object Store.
DevOps Expertise: Proven experience creating, establishing, and managing big data tools, including automation tasks. Extensive knowledge of DevOps concepts and tools, including Docker, Kubernetes, Terraform, ArgoCD, Linux OS, Networking, Load Balancing, Nginx, etc.
Programming Skills: Proficiency in programming languages such as Python and Object-Oriented Programming (OOP), emphasizing big data processing (like PySpark). Experience with scripting languages like Bash and Shell for automation tasks.
Cloud Platforms: Hands-on experience with major cloud providers such as Azure, Google Cloud, or AWS.
Preferred Qualifications:
Performance Optimization: Experience in optimizing performance for big data tools and pipelines - Big Advantage.
Security Expertise: Experience in identifying and addressing security vulnerabilities within the data platform - Big Advantage.
CI/CD Pipelines: Experience designing, implementing, and maintaining Continuous Integration/Continuous Deployment (CI/CD) pipelines - Advantage.
Data Pipelines: Experience in building big data pipelines - Advantage.
This position is open to all candidates.
 
Posted 23/12/2025
Confidential Company (Job ID 8470043)
Location: Herzliya
Job Type: Full Time and Hybrid work
We are looking for a Data Engineer.
We work in a flexible, hybrid model, so you can choose the home-office balance that works best for you.
Responsibilities
Design, build, and maintain scalable ETL/ELT pipelines to integrate data from diverse sources, optimizing for performance and cost efficiency.
Leverage Databricks and other modern data platforms to manage, transform, and process data.
Collaborate with software teams to understand data needs and ensure data solutions meet business requirements.
Optimize data processing workflows for performance and scalability.
Requirements:
3+ years of experience in Data Engineering, including cloud-based data solutions.
Proven expertise in implementing large-scale data solutions.
Proficiency in Python, PySpark.
Experience with ETL / ELT processes.
Experience with cloud and technologies such as Databricks (Apache Spark).
Strong analytical and problem-solving skills, with the ability to evaluate and interpret complex data.
Experience leading and designing data solutions end-to-end, integrating with multiple teams, and driving tasks to completion.
Advantages:
Familiarity with on-premise or cloud storage systems
Excellent communication and collaboration skills, with the ability to work effectively in a multidisciplinary team.
This position is open to all candidates.
 
Confidential Company (Job ID 8486352)
Location: Petah Tikva
Job Type: Full Time
We're looking for a highly skilled and motivated Data Engineer to join the Resolve (formerly DevOcean) team.
In this role, you'll be responsible for designing, building, and optimizing the data infrastructure that powers our SaaS platform.
You'll play a key role in shaping a cost-efficient and scalable data architecture while building robust data pipelines that serve analytics, search, and reporting needs across the organization.
You'll work closely with our backend, product, and analytics teams to ensure our data layer remains fast, reliable, and future-proof. This is an opportunity to influence the evolution of our data strategy and help scale a cybersecurity platform that processes millions of findings across complex customer environments.
Roles and Responsibilities:
Design, implement, and maintain data pipelines to support ingestion, transformation, and analytics workloads.
Collaborate with engineers to optimize MongoDB data models and identify opportunities for offloading workloads to analytical stores (ClickHouse, DuckDB, etc.).
Build scalable ETL/ELT workflows to consolidate and enrich data from multiple sources.
Develop data services and APIs that enable efficient querying and aggregation across large multi-tenant datasets.
Partner with backend and product teams to define data retention, indexing, and partitioning strategies to reduce cost and improve performance.
Ensure data quality, consistency, and observability through validation, monitoring, and automated testing.
Contribute to architectural discussions and help define the long-term data platform vision.
Requirements:
8+ years of experience as a Data Engineer or Backend Engineer working in a SaaS or data-intensive environment.
Strong proficiency in Python and experience with data processing frameworks (e.g., Pandas, PySpark, Airflow, or equivalent).
Deep understanding of data modeling and query optimization in NoSQL and SQL databases (MongoDB, PostgreSQL, etc.).
Hands-on experience building ETL/ELT pipelines and integrating multiple data sources.
Familiarity with OTF technologies and analytical databases such as ClickHouse, DuckDB and their role in cost-efficient analytics.
Experience working in cloud environments (AWS preferred) and using native data services (e.g., Lambda, S3, Glue, Athena).
Strong understanding of data performance, storage optimization, and scalability best practices.
Excellent problem-solving skills and a proactive approach to performance and cost optimization.
Strong collaboration and communication abilities within cross-functional teams.
Passion for continuous learning and exploring modern data architectures.
Nice to Have:
Experience with streaming or CDC pipelines (e.g., Kafka, Debezium).
Familiarity with cloud security best practices and data governance.
Exposure to multi-tenant SaaS architectures and large-scale telemetry data.
This position is open to all candidates.
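The MongoDB-to-analytical-store offload this listing describes can be pictured with a small sketch: extract a projection from MongoDB and aggregate it in DuckDB instead of in the operational database. The database, collection, and field names are assumptions.

```python
# A sketch of the offload pattern: pull operational documents out of MongoDB
# and land them in an analytical store for cheap aggregation. All names are
# invented for illustration.
import duckdb
import pandas as pd
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
findings = client["resolve"]["findings"]   # hypothetical database/collection

# Extract only the analytical fields, not the whole document.
docs = list(findings.find(
    {}, {"_id": 0, "tenant_id": 1, "severity": 1, "created_at": 1}
))
df = pd.DataFrame(docs)

# Load into DuckDB and aggregate there instead of in MongoDB.
con = duckdb.connect("analytics.duckdb")
con.execute("CREATE OR REPLACE TABLE findings AS SELECT * FROM df")
print(con.execute(
    "SELECT tenant_id, severity, COUNT(*) FROM findings GROUP BY 1, 2"
).fetchall())
```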
 
Posted 01/01/2026
Confidential Company (Job ID 8482582)
Location: Petah Tikva
Job Type: Full Time
We are seeking a Senior Backend & Data Engineer to join our SaaS Data Platform team.
This role offers a unique opportunity to design and build large-scale, high-performance data platforms and backend services that power our cloud-based products.
You will own features end to end, from architecture and design through development and production deployment, while working closely with Data Science, Machine Learning, DevOps, and Product teams.
Key Responsibilities:
Design, develop, and maintain scalable, secure backend services and data platforms on AWS
Build and operate batch and streaming ETL/ELT pipelines using Spark, Glue, Athena, Iceberg, Lambda, and EKS
Develop backend components and data processing workflows in a cloud-native environment
Optimize performance, reliability, and observability of data pipelines and backend services
Collaborate with ML, backend, DevOps, and product teams to deliver data-driven solutions
Lead best practices in code quality, architecture, and technical excellence
Ensure security, compliance, and auditability using AWS best practices (IAM, encryption, auditing).
Requirements:
8+ years of experience in Data Engineering and/or Backend Development in AWS-based, cloud-native environments
Strong hands-on experience writing Spark jobs (PySpark) and running workloads on EMR and/or Glue
Proven ability to design and implement scalable backend services and data pipelines
Deep understanding of data modeling, data quality, pipeline optimization, and distributed systems
Experience with Infrastructure as Code and automated deployment of data infrastructure
Strong debugging, testing, and performance-tuning skills in agile environments
High level of ownership, curiosity, and problem-solving mindset.
Nice to Have:
AWS certifications (Solutions Architect, Data Engineer)
Experience with ML pipelines or AI-driven analytics
Familiarity with data governance, self-service data platforms, or data mesh architectures
Experience with PostgreSQL, DynamoDB, MongoDB
Experience building or consuming high-scale APIs
Background in multi-threaded or distributed system development
Domain experience in cybersecurity, law enforcement, or other regulated industries.
This position is open to all candidates.
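As one hedged slice of the AWS stack named here, a minimal boto3 call that submits an Athena query over the data lake; the region, database, query, and S3 results location are hypothetical.

```python
# Submit an Athena query and print its execution id. Names are invented.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "lake"},
    ResultConfiguration={"OutputLocation": "s3://query-results-bucket/athena/"},
)
print(response["QueryExecutionId"])
```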
 
Confidential Company (Job ID 8468083)
Location: Rosh Haayin
Job Type: Full Time
The ideal candidate is not afraid of data in any form or scale, and is experienced with cloud services to ingest, stream, store, and manipulate data. The Data Engineer will support new system designs and migrate existing ones, working closely with solutions architects, project managers, and data scientists. The candidate must be self-directed, a fast learner, and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or re-designing our customers' data architecture to support their next generation of products, data initiatives, and machine learning systems.

Summary of Key Responsibilities:
To meet compliance and regulatory requirements, keep our customers' data separated and secure.
Design, build, and operate the infrastructure required for optimal data extraction, transformation, and loading from a wide variety of data sources using SQL, cloud migration tools, and big data technologies.
Optimize various RDBMS engines in the cloud and solve customers' security, performance, and operational problems.
Design, build, and operate large, complex data lakes that meet functional / non-functional business requirements.
Optimize the ingestion, storage, processing, and retrieval of varied data types, from near real-time events and IoT to unstructured data such as images, audio, video, and documents.
Work with customers and internal stakeholders, including the Executive, Product, Data, Software Development, and Design teams, to assist with data-related technical issues and support their data infrastructure and business needs.
Requirements:
5+ years of experience in a Data Engineer role in a cloud native ecosystem.
3+ years of experience in AWS Data Services (mandatory)
Bachelor's (Graduate preferred) degree in Computer Science, Mathematics, Informatics, Information Systems or another quantitative field.
Working experience with the following technologies/tools:
Big data tools: Spark, ElasticSearch, Kafka, Kinesis, etc.
Relational SQL and NoSQL databases, such as MySQL or Postgres and DynamoDB or Cassandra.
Functional and scripting languages: Python, Java, Scala, etc.
Advanced SQL
Experience building and optimizing big data pipelines, architectures and data sets.
Working knowledge of message queuing, stream processing, and highly scalable big data stores.
Experience supporting and working with external customers in a dynamic environment.
Articulate with great communication and presentation skills
Team player who can train as well as learn from others.
Fluency in Hebrew and English is essential
This position is open to all candidates.
 
Confidential Company (Job ID 8482840)
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an experienced and visionary Data Platform Engineer to help design, build and scale our BI platform from the ground up.
In this role, you will be responsible for building the foundations of our data analytics platform - enabling scalable data pipelines and robust data modeling to support real-time and batch analytics, ML models and business insights that serve both business intelligence and product needs.
You will be part of the R&D team, collaborating closely with engineers, analysts, and product managers to deliver a modern data architecture that supports internal dashboards and future-facing operational analytics.
If you enjoy architecting from scratch, turning raw data into powerful insights, and owning the full data lifecycle - this role is for you!
Responsibilities
Take full ownership of the design and implementation of a scalable and efficient BI data infrastructure, ensuring high performance, reliability and security.
Lead the design and architecture of the data platform - from integration to transformation, modeling, storage, and access.
Build and maintain ETL/ELT pipelines, batch and real-time, to support analytics, reporting, and product integrations.
Establish and enforce best practices for data quality, lineage, observability, and governance to ensure accuracy and consistency.
Integrate modern tools and frameworks such as Airflow, dbt, Databricks, Power BI, and streaming platforms.
Collaborate cross-functionally with product, engineering, and analytics teams to translate business needs into data infrastructure.
Promote a data-driven culture - be an advocate for data-driven decision-making across the company by empowering stakeholders with reliable and self-service data access.
Requirements:
5+ years of hands-on experience in data engineering and in building data products for analytics and business intelligence.
Strong hands-on experience with ETL orchestration tools (Apache Airflow), and data lakehouses (e.g., Snowflake/BigQuery/Databricks)
Vast knowledge in both batch processing and streaming processing (e.g., Kafka, Spark Streaming).
Proficiency in Python, SQL, and cloud data engineering environments (AWS, Azure, or GCP).
Familiarity with data visualization tools (Power BI, Looker, or similar).
BSc in Computer Science or a related field from a leading university
Nice to have
Experience working in early-stage projects, building data systems from scratch.
Background in building operational analytics pipelines, in which analytical data feeds real-time product business logic.
Hands-on experience with ML model training pipelines.
Experience in cost optimization in modern cloud environments.
Knowledge of data governance principles, compliance, and security best practices.
This position is open to all candidates.
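For the batch-plus-streaming requirement, here is a minimal Spark Structured Streaming sketch reading from Kafka into a running aggregate that a BI layer could consume; the brokers, topic, and schema are invented for illustration.

```python
# A sketch of the streaming leg of such a platform: parse JSON events from
# Kafka and maintain a running count per event type. All names are invented.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

schema = StructType([
    StructField("event_type", StringType()),
    StructField("value", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "product.events")             # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Maintain a running aggregate queryable from an in-memory sink.
query = (
    events.groupBy("event_type").count()
    .writeStream.outputMode("complete").format("memory")
    .queryName("event_counts").start()
)
```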
 