Posted: 2 days ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're seeking talented data engineers to join our rapidly growing team, which includes senior software and data engineers. Together, we drive our data platform from acquisition and processing to enrichment, delivering valuable business insights. Join us in designing and maintaining robust data pipelines, making an impact in our collaborative and innovative workplace.

Responsibilities
Design, implement, and optimize scalable data pipelines for efficient processing and analysis.
Build and maintain robust data acquisition systems to collect, process, and store data from diverse sources.
Collaborate with DevOps, Data Science, and Product teams to understand needs and deliver tailored data solutions.
Monitor data pipelines and production environments proactively to detect and resolve issues promptly.
Apply best practices for data security, integrity, and performance across all systems.
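Pipelines of the kind described above reduce, conceptually, to an extract → transform → load chain. A minimal illustrative sketch, not this company's actual stack; all record shapes and names here are hypothetical:

```python
def extract(source):
    """Pull raw records from a source (an in-memory list standing in for an API or database)."""
    return list(source)

def transform(records):
    """Clean and structure raw records: drop incomplete rows, normalize field types."""
    return [
        {"id": r["id"], "value": float(r["value"])}
        for r in records
        if r.get("id") is not None and r.get("value") not in (None, "")
    ]

def load(records, sink):
    """Persist structured records into the target store (here, a dict keyed by id)."""
    for r in records:
        sink[r["id"]] = r["value"]
    return sink

raw = [{"id": 1, "value": "3.5"}, {"id": None, "value": "7"}, {"id": 2, "value": ""}]
warehouse = load(transform(extract(raw)), {})
```

Real pipelines swap the in-memory pieces for connectors, an orchestrator such as Airflow, and monitored storage, but the shape stays the same.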
Requirements:
4+ years of experience in data or backend engineering, with strong proficiency in Python for data tasks.
Proven track record in designing, developing, and deploying complex data applications.
Hands-on experience with orchestration and processing tools such as Apache Airflow and Apache Spark.
Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent practical experience.
Ability to perform under pressure and make strategic prioritization decisions in fast-paced environments.
Excellent communication skills and a strong team player, capable of working cross-functionally.
This position is open to all candidates.
 
Posted: 04/06/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Data Engineer
What You'll Do:
Shape the Future of Data - Join our mission to build the foundational pipelines and tools that power measurement, insights, and decision-making across our product, analytics, and leadership teams.
Develop the Platform Infrastructure - Build the core infrastructure that powers our data ecosystem including the Kafka events-system, DDL management with Terraform, internal data APIs on top of Databricks, and custom admin tools (e.g. Django-based interfaces).
Build Real-time Analytical Applications - Develop internal web applications to provide real-time visibility into platform behavior, operational metrics, and business KPIs integrating data engineering with user-facing insights.
Solve Meaningful Problems with the Right Tools - Tackle complex data challenges using modern technologies such as Spark, Kafka, Databricks, AWS, Airflow, and Python. Think creatively to make the hard things simple.
Own It End-to-End - Design, build, and scale our high-quality data platform by developing reliable and efficient data pipelines. Take ownership from concept to production and long-term maintenance.
Collaborate Cross-Functionally - Partner closely with backend engineers, data analysts, and data scientists to drive initiatives from both a platform and business perspective. Help translate ideas into robust data solutions.
Optimize for Analytics and Action - Design and deliver datasets in the right shape, location, and format to maximize usability and impact - whether that's through lakehouse tables, real-time streams, or analytics-optimized storage.
You will report to the Data Engineering Team Lead and help shape a culture of technical excellence, ownership, and impact.
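The Kafka-backed events system described above boils down, conceptually, to consuming events and folding them into operational metrics. A toy in-memory sketch only: real consumer wiring, topics, and the event schema are omitted, and the field names are hypothetical:

```python
from collections import Counter

def consume(events):
    """Fold a stream of events into per-type counts -- the kind of
    operational metric a real-time dashboard would surface."""
    metrics = Counter()
    for event in events:
        metrics[event["type"]] += 1
    return metrics

stream = [{"type": "page_view"}, {"type": "purchase"}, {"type": "page_view"}]
kpis = consume(stream)
```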
Requirements:
5+ years of hands-on experience as a Data Engineer, building and operating production-grade data systems.
3+ years of experience with Spark, SQL, Python, and orchestration tools like Airflow (or similar).
Degree in Computer Science, Engineering, or a related quantitative field.
Proven track record in designing and implementing high-scale ETL pipelines and real-time or batch data workflows.
Deep understanding of data lakehouse and warehouse architectures, dimensional modeling, and performance optimization.
Strong analytical thinking, debugging, and problem-solving skills in complex environments.
Familiarity with infrastructure as code, CI/CD pipelines, and building data-oriented microservices or APIs.
Enthusiasm for AI-driven developer tools such as Cursor.AI or GitHub Copilot.
Bonus points:
Hands-on experience with our stack: Databricks, Delta Lake, Kafka, Docker, Airflow, Terraform, and AWS.
Experience in building self-serve data platforms and improving developer experience across the organization.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: More than one
We're growing and looking to hire a Data Team Leader who embodies our core values: People First, Customer Obsession, Strive for Excellence, and Integrity.
We are seeking a Data Team Leader with strong technical expertise and a flair for innovation to join our dynamic Data Science and Analytics group. You will play a key role in advancing our solutions by crafting and sustaining the data collection infrastructures, including cutting-edge LLM data pipelines and web collection frameworks, along with data alliances.
This role requires a visionary leader who is both hands-on and proactive, able to collaborate closely with analysts, data scientists, and product teams to devise scalable solutions. Your mission will be to ensure data reliability and to drive the creation and maintenance of high-scale data collection systems, manage data alliances, and uphold governance within our data collection infrastructure.
Responsibilities:
As Data Team Leader, Your impact will be:
Leading and scaling a team of talented LLM engineers, network analysts, and web scrapers to drive core data initiatives.
Fostering a culture of innovation, promoting continuous learning and high engineering standards.
Defining data collection strategies aligned with business KPIs and ensuring measurable impact.
Maintaining and evolving our data infrastructure, with a strong focus on automation and efficiency.
Working closely with backend developers, PMs, data scientists, analysts, and external data partners to design and deliver robust data solutions.
Integrating and managing data flows across various sources and systems.
Taking ownership of data alliances, budgets, and overall data strategy.
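Under the hood, a web collection framework like the one described above does little more than fetch pages and parse structure out of them. A toy extractor using only the standard library; fetching is omitted, and the HTML string is a stand-in for a real page:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets from anchor tags -- the core parsing
    step of a scraping pipeline."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<html><body><a href="/jobs/1">DE</a><a href="/jobs/2">TL</a></body></html>'
collector = LinkCollector()
collector.feed(page)
```

Production scrapers add politeness (rate limits, robots.txt), retries, and structured extraction, but parsing is the heart of it.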
Requirements:
2+ years of proven experience leading data teams.
4+ years of hands-on experience as a data engineer or analyst in cloud-native environments, ideally in high-growth SaaS or cybersecurity companies.
Proven expertise in LLM prompt engineering and evaluation.
Strong proficiency in SQL, Python, and large-scale distributed data processing.
Track record in building and monitoring scalable data pipelines.
Hands-on experience with Databricks or similar big-data platforms.
A collaborative mindset and strong ability to work cross-functionally with data science, engineering, and analytics teams.
Excellent communication skills, oral and written; English is a must.
Willing and able to meet challenges head-on, solve problems independently, and make things happen.
Open-minded, flexible, and able to thrive in a highly dynamic, fast-paced, ever-changing environment.
Preferred Qualifications:
Experience running ML and LLM pipelines in production.
Background in leading data alliances and partnerships.
Experience managing data workflow budgets and driving cost-effective solutions.
Familiarity with LLM orchestration tools like n8n, LangChain, or equivalents.
Strong background in web scraping techniques and tools.
Exposure to cybersecurity data or cyber-physical systems - a strong plus.
Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an experienced and passionate Staff Data Engineer to join our Data Platform group in TLV as a Tech Lead. As the Group's Tech Lead, you'll shape and implement the technical vision and architecture while staying hands-on across three specialized teams: Data Engineering Infra, Machine Learning Platform, and Data Warehouse Engineering, forming the backbone of our company's data ecosystem.
The groups mission is to build a state-of-the-art Data Platform that drives our company toward becoming the most precise and efficient insurance company on the planet. By embracing Data Mesh principles, we create tools that empower teams to own their data while leveraging a robust, self-serve data infrastructure. This approach enables Data Scientists, Analysts, Backend Engineers, and other stakeholders to seamlessly access, analyze, and innovate with reliable, well-modeled, and queryable data, at scale.
In this role you'll
Technically lead the group by shaping the architecture, guiding design decisions, and ensuring the technical excellence of the Data Platform's three teams
Design and implement data solutions that address both applicative needs and data analysis requirements, creating scalable and efficient access to actionable insights
Drive initiatives in Data Engineering Infra, including building robust ingestion layers, managing streaming ETLs, and guaranteeing data quality, compliance, and platform performance
Develop and maintain the Data Warehouse, integrating data from various sources for optimized querying, analysis, and persistence, supporting informed decision-making
Leverage data modeling and transformations to structure, cleanse, and integrate data, enabling efficient retrieval and strategic insights
Build and enhance the Machine Learning Platform, delivering infrastructure and tools that streamline the work of Data Scientists, enabling them to focus on developing models while benefiting from automation for production deployment, maintenance, and improvements. Support cutting-edge use cases like feature stores, real-time models, point-in-time (PIT) data retrieval, and telematics-based solutions
Collaborate closely with other Staff Engineers across our company to align on cross-organizational initiatives and technical strategies
Work seamlessly with Data Engineers, Data Scientists, Analysts, Backend Engineers, and Product Managers to deliver impactful solutions
Share knowledge, mentor team members, and champion engineering standards and technical excellence across the organization.
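The point-in-time (PIT) data retrieval mentioned above has a simple core: for a given entity and timestamp, return the latest feature value known at or before that time, never a later one, which would leak future data into model training. A sketch with hypothetical data, not the platform's actual API:

```python
from bisect import bisect_right

def point_in_time(history, as_of):
    """history: list of (timestamp, value) pairs sorted by timestamp.
    Return the latest value with timestamp <= as_of, or None if none exists."""
    timestamps = [ts for ts, _ in history]
    idx = bisect_right(timestamps, as_of)  # first index strictly after as_of
    return history[idx - 1][1] if idx else None

# Hypothetical feature history for one entity: customer tier over time.
history = [(1, "bronze"), (5, "silver"), (9, "gold")]
```

A production feature store applies the same rule per entity key at scale, typically via an as-of join.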
Requirements:
8+ years of experience in data-related roles such as Data Engineer, Data Infrastructure Engineer, BI Engineer, or Machine Learning Platform Engineer, with significant experience in at least two of these areas
A B.Sc. in Computer Science or a related technical field (or equivalent experience)
Extensive expertise in designing and implementing Data Lakes and Data Warehouses, including strong skills in data modeling and building scalable storage solutions
Proven experience in building large-scale data infrastructures, including both batch processing and streaming pipelines
A deep understanding of Machine Learning infrastructure, including tools and frameworks that enable Data Scientists to efficiently develop, deploy, and maintain models in production, an advantage
Proficiency in Python, Pulumi/Terraform, Apache Spark, AWS, Kubernetes (K8s), and Kafka for building scalable, reliable, and high-performing data solutions
Strong knowledge of databases, including SQL (schema design, query optimization) and NoSQL, with a solid understanding of their use cases
Ability to work in an office environment a minimum of 3 days a week
Enthusiasm about learning and adapting to the exciting world of AI; a commitment to exploring this field is a fundamental part of our culture.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Data Engineer.
The Data Engineer position is a central role in the Tech Org. The Data Engineering (DE) team is a Change Agent Team that plays a significant role in the ongoing (at advanced stages) migration to cloud technologies. The ideal candidate is a senior data engineer with a strong technical background in data infrastructure, data architecture design, and building robust data pipelines. The candidate must also have collaborative abilities to interact effectively with Product managers, Data scientists, Onboarding engineers, and Support staff.
Responsibilities:
Deploy and maintain critical data pipelines in production.
Drive strategic technological initiatives and long-term plans from initial exploration and POC to going live in a hectic production environment.
Design infrastructural data services, coordinating with the Architecture team, R&D teams, Data Scientists, and product managers to build scalable data solutions.
Work in Agile process with Product Managers and other tech teams.
End-to-end responsibility for the development of data crunching and manipulation processes within the product.
Design and implement data pipelines and data marts.
Create data tools for various teams (e.g., onboarding teams) that assist them in building, testing, and optimizing the delivery of the product.
Explore and implement new data technologies to support data infrastructure.
Work closely with the core data science team to implement and maintain ML features and tools.
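A data mart, as mentioned above, is at its core a pre-aggregated view shaped for one team's questions. An illustrative rollup in plain Python; in practice this would be SQL in Snowflake or BigQuery, and all column names here are hypothetical:

```python
from collections import defaultdict

def build_mart(fact_rows, dimension):
    """Aggregate event-level fact rows into a mart keyed by a
    dimension attribute (revenue per region)."""
    mart = defaultdict(float)
    for row in fact_rows:
        region = dimension[row["customer_id"]]["region"]
        mart[region] += row["amount"]
    return dict(mart)

facts = [
    {"customer_id": 1, "amount": 10.0},
    {"customer_id": 2, "amount": 5.0},
    {"customer_id": 1, "amount": 2.5},
]
customers = {1: {"region": "EMEA"}, 2: {"region": "AMER"}}
revenue_by_region = build_mart(facts, customers)
```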
Requirements:
B.Sc. in Computer Science or equivalent.
4+ years of extensive SQL experience (preferably working in a production environment)
Experience with programming languages (preferably, Python) a must!
Experience working with Snowflake or with Google BigQuery
Experience with "Big Data" environments, tools, and data modeling (preferably in a production environment).
Strong capability in schema design and data modeling.
Understanding of micro-services architecture.
Working closely with BI developers
Quick, self-learning, and good problem-solving capabilities.
Good communication skills and collaboration.
Process- and detail-oriented.
Passion for solving complex data problems.
Desired:
Familiarity with Airflow, ETL tools, and MSSQL.
Experience with GCP services.
Experience with Docker and Kubernetes.
Experience with PubSub/Kafka.
This position is open to all candidates.
 
Posted: 20/05/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a talented Data Engineer to join our analytics team in the Big Data Platform group.
You will support our product and business data initiatives, expand our data warehouse, and optimize our data pipeline architecture.
You must be self-directed and comfortable supporting the data needs of multiple systems and products.
The right candidate is excited by the prospect of building the data architecture for the next generation of products and data initiatives.

This is a unique opportunity to join a team full of outstanding people making a big impact.
We work on multiple products in many domains to deliver truly innovative solutions in the Cyber Security and Big Data realm.

This role requires the ability to collaborate closely with both R&D teams and business stakeholders, to understand their needs and translate them into robust and scalable data solutions.

Key Responsibilities
Maintain and develop enterprise-grade Data Warehouse and Data Lake environments
Create data infrastructure for various R&D groups across the organization to support product development and optimization
Work with data experts to assist with technical data-related issues and support infrastructure needs
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for scalability
Build and maintain robust ETL/ELT pipelines for data ingestion, transformation, and delivery across various systems
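Optimizing data delivery, as described above, often starts with incremental rather than full loads: move only the rows that changed since the last watermark. A minimal sketch under assumed schema; real implementations track the watermark in durable state:

```python
def incremental_load(source_rows, watermark):
    """Select only rows updated after the watermark, and return
    them together with the advanced watermark."""
    fresh = [r for r in source_rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in fresh), default=watermark)
    return fresh, new_watermark

rows = [
    {"id": 1, "updated_at": 100},
    {"id": 2, "updated_at": 205},
    {"id": 3, "updated_at": 310},
]
batch, wm = incremental_load(rows, 200)  # only rows 2 and 3 are fresh
```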
Requirements:
B.Sc. in Engineering or a related field
3 - 10 years of experience as a Data Engineer working on production systems
Advanced SQL knowledge and experience with relational databases
Proven experience using Python
Hands-on experience building, optimizing, and automating data pipelines, architectures, and data sets
Experience in creating and maintaining ETL/ELT processes
Strong project management and organizational skills
Strong collaboration skills with both technical (R&D) and non-technical (business) teams
Advantage: experience with Azure services such as Storage Accounts, Databricks, EventHub, and Spark
This position is open to all candidates.
 
Posted: 6 days ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
If you share our love of sports and tech, you've got the passion and will to better the sports-tech and data industries - join the team. We are looking for a Data & AI Architect.
Responsibilities:
Build the foundations of modern data architecture, supporting real-time, high-scale (Big Data) sports data pipelines and ML/AI use cases, including Generative AI.
Map the company's data needs and lead the selection and implementation of key technologies across the stack: data lakes (e.g., Iceberg), databases, ETL/ELT tools, orchestrators, data quality and observability frameworks, and statistical/ML tools.
Design and build a cloud-native, cost-efficient, and scalable data infrastructure from scratch, capable of supporting rapid growth, high concurrency, and low-latency SLAs (e.g., 1-second delivery).
Lead design reviews and provide architectural guidance for all data solutions, including data engineering, analytics, and ML/data science workflows.
Set high standards for data quality, integrity, and observability. Design and implement processes and tools to monitor and proactively address issues like missing events, data delays, or integrity failures.
Collaborate cross-functionally with other architects, R&D, product, and innovation teams to ensure alignment between infrastructure, product goals, and real-world constraints.
Mentor engineers and promote best practices around data modeling, storage, streaming, and observability.
Stay up-to-date with industry trends, evaluate emerging data technologies, and lead POCs to assess new tools and frameworks, especially in the domains of Big Data architecture, ML infrastructure, and Generative AI platforms.
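Detecting the "missing events" called out above amounts to comparing the sequence you received against the sequence you expected. A toy detector; real observability tooling keys this by time window and source, and the id scheme here is hypothetical:

```python
def find_gaps(received_ids, expected_range):
    """Return expected event ids that never arrived -- the raw
    signal behind a missing-events alert."""
    seen = set(received_ids)
    return [i for i in range(*expected_range) if i not in seen]

# Events 1..7 were expected; 3, 5, and 6 never showed up.
gaps = find_gaps([1, 2, 4, 7], (1, 8))
```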
Requirements:
At least 10 years of experience in a data engineering role, including 2+ years as a data & AI architect with ownership over company-wide architecture decisions.
Proven experience designing and implementing large-scale, Big Data infrastructure from scratch in a cloud-native environment (GCP preferred).
Excellent proficiency in data modeling, including conceptual, logical, and physical modeling for both analytical and real-time use cases.
Strong hands-on experience with:
Data lake and/or warehouse technologies (e.g., Iceberg, Delta Lake, BigQuery, ClickHouse); Apache Iceberg experience is required
ETL/ELT frameworks and orchestrators (e.g., Airflow, dbt, Dagster)
Real-time streaming technologies (e.g., Kafka, Pub/Sub)
Data observability and quality monitoring solutions
Excellent proficiency in SQL, and in either Python or JavaScript.
Experience designing efficient data extraction and ingestion processes from multiple sources and handling large-scale, high-volume datasets.
Demonstrated ability to build and maintain infrastructure optimized for performance, uptime, and cost, with awareness of AI/ML infrastructure requirements.
Experience working with ML pipelines and AI-enabled data workflows, including support for Generative AI initiatives (e.g., content generation, vector search, model training pipelines) or strong motivation to learn and lead in this space.
Excellent communication skills in English, with the ability to clearly document and explain architectural decisions to technical and non-technical audiences.
Fast learner with strong multitasking abilities; capable of managing several cross-functional initiatives simultaneously.
Willingness to work on-site in Ashkelon once a week.
Advantage:
Experience leading POCs and tool selection processes.
Familiarity with Databricks, LLM pipelines, or vector databases is a strong plus.
This position is open to all candidates.
 
Posted: 3 days ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for brilliant data engineers to join our rapidly growing team.

You will join a team consisting of senior software and data engineers that drive our data platform from data acquisition, to processing and enrichment and all the way to business insights. You will join an early stage team and company and will have a major impact on the decisions and architecture of our various products.

Responsibilities
Design and build data acquisition pipelines that acquire, clean, and structure large datasets to form the basis of our data platform and IP
Design and build data pipelines integrating many different data sources and forms
Define architecture, evaluate tools and open source projects to use within our environment
Develop and maintain features in production to serve our customers
Collaborate with product managers, data scientists, data analysts and full-stack engineers to deliver our product to top tier retail customers
Take a leading part in the development of our enterprise-grade technology platform and ecosystem
Harmonize and clean large datasets from different sources
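Harmonizing datasets from different sources, as the list above describes, usually means mapping heterogeneous field names onto one canonical schema. A sketch with hypothetical source shapes and field maps:

```python
def harmonize(record, field_map):
    """Rename a source record's fields to the canonical schema,
    dropping anything unmapped (e.g., source-specific junk fields)."""
    return {canonical: record[src] for src, canonical in field_map.items() if src in record}

# Two sources describing the same product with different schemas.
source_a = {"sku": "A-1", "price_usd": 9.99}
source_b = {"item_id": "A-1", "cost": 9.5, "junk": True}

canonical_a = harmonize(source_a, {"sku": "product_id", "price_usd": "price"})
canonical_b = harmonize(source_b, {"item_id": "product_id", "cost": "price"})
```

After renaming, both records share one schema and can be deduplicated, validated, and loaded by common code.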
Requirements:
At least 6 years of experience in software engineering in Python or an equivalent language
At least 3 years of experience with data engineering products from early-stage concept to production rollouts
Experience with cloud platforms (GCP, AWS, Azure), working on production payloads at large scale and complexity
Hands-on experience with data pipeline building and tools (e.g., Luigi, Airflow), specifically on cloud infrastructure
Advantage: Hands-on experience with relevant data analysis tools (e.g., Jupyter notebooks, Anaconda)
Advantage: Hands-on experience with data science tools, packages, and frameworks
Advantage: Hands-on experience with ETL Flows
Advantage: Hands-on experience with Docker / Kubernetes
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We seek an experienced Software Engineer with a strong background to become an integral member of our Data-Core team, tasked with the mission of processing, structuring, and analyzing hundreds of millions of data sources.
Your role will be pivotal in creating a unified, up-to-date, and accurate utilities map, along with the services and applications that accelerate our mapping operations. Your contributions will directly impact our core product's success.
Responsibilities:
Collaborate with cross-functional teams to design, build, and maintain data processing pipelines while contributing to our common codebase.
Contribute to designing and implementing data architecture, ensuring effective data storage and retrieval.
Develop and optimize complex Python-based applications and services to allow more efficient data processing and orchestration, enhancing the quality and accuracy of our datasets.
Implement geospatial data processing techniques and contribute to the creation of our unified utilities map, enhancing the product's geospatial features.
Drive the scalability and performance optimization of data systems, addressing infrastructure challenges as data volume and complexity grow.
Create and manage data infrastructure components, including ETL workflows, data warehouses and databases, supporting seamless data flow and accessibility.
Design and implement CI/CD processes for data processing, model training, releasing, testing and monitoring, ensuring robustness and consistency.
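The geospatial processing mentioned above rests on primitives such as point-in-polygon tests (e.g., "which service area contains this asset?"). A standard ray-casting implementation; the coordinates are hypothetical and real systems would use a library like Shapely:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: a point is inside iff a horizontal ray
    from it crosses the polygon's edges an odd number of times."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y-level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
```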
Requirements:
5+ years of proven experience as a backend/software engineer with a strong Python background.
Experience in deploying a diverse range of cloud-based technologies to support mission-critical projects, including expertise in writing, testing, and deploying code within a Kubernetes environment.
A proven experience in building scalable online services.
Experience with frameworks like Airflow, Docker, and K8S to build data processing and exploration pipelines along with ML infrastructure to power our intelligence.
Experience in AWS/Google cloud environments.
Experience working with both SQL and NoSQL databases such as Postgres, MySQL, Redis, or DynamoDB.
Experience as a Data Infrastructure Engineer or in a similar role in managing and processing large-scale datasets - a significant advantage
This position is open to all candidates.
 
Posted: 05/05/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Senior Backend Data Engineer to design and optimize high-performance infrastructure capable of handling massive data volumes. In this role, you'll lead backend development, architect scalable data pipelines, and ensure seamless data processing. Your expertise in distributed systems and performance optimization will drive innovation, transforming intricate security challenges into efficient, resilient solutions.
Responsibilities:
Be a significant part of the development of backend infrastructure to efficiently handle, process, and store vast volumes of data.
Architect and build a scalable, high-performance backend system that supports various services within the platform.
Translate intricate requirements into meticulous backend design plans, maintaining a focus on software design, code quality, and performance.
Collaborate with cross-functional teams to implement backend and data-handling techniques.
Apply your expertise to create robust backend solutions.
Leverage your proficiency in cloud platforms such as AWS, GCP, or Azure to drive strong backend engineering practices.
Demonstrate strong debugging skills, identifying issues such as race conditions and memory leaks within the backend system. Solve complex backend problems with an analytical mindset and contribute to a positive team dynamic.
Bring your excellent interpersonal skills to foster collaboration and maintain a positive attitude within the team.
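Message queuing under intensive traffic, as this role describes, hinges on backpressure: a bounded buffer that makes a fast producer wait for a slow consumer instead of exhausting memory. A tiny illustrative producer/consumer using only the standard library (buffer size and workload are arbitrary):

```python
import queue
import threading

def producer(q, items):
    for item in items:
        q.put(item)       # blocks when the queue is full: backpressure
    q.put(None)           # sentinel: no more data

def consumer(q, out):
    while True:
        item = q.get()
        if item is None:
            break
        out.append(item * 2)  # stand-in for real processing

q = queue.Queue(maxsize=2)    # small bound forces the producer to wait
results = []
t1 = threading.Thread(target=producer, args=(q, [1, 2, 3, 4]))
t2 = threading.Thread(target=consumer, args=(q, results))
t1.start(); t2.start()
t1.join(); t2.join()
```

Kafka and Kinesis provide the same guarantee across processes and machines, with durability added.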
Requirements:
5+ years of experience in server-side development using Java, Python, Go, or .NET.
Strong background in microservices architecture and related tools (Docker, Kubernetes, etc.).
Hands-on experience with large-scale applications, handling high data volumes and intensive traffic.
Proficiency with various database technologies such as MySQL, Cassandra, Neo4J, Google BigQuery, Amazon Redshift, Elasticsearch, and PostgreSQL.
Solid understanding of message queuing, stream processing, and scalable big data storage solutions.
Experience in building and optimizing data pipelines and analytics workflows.
Familiarity with streaming technologies such as Amazon Kinesis and Apache Kafka.
Proven ability to bootstrap projects and develop systems from the ground up.
Strong ownership and leadership skills, with a track record of driving initiatives forward.
Advantages:
Experience in cybersecurity.
Hands-on expertise in Go development.
Familiarity with graph databases and data modeling.
Experience with data warehouse technologies like Snowflake and Databricks.
Knowledge of Big Data ecosystems, including Hive, Hadoop, or Spark.
Background in startup or small-company environments, thriving in fast-paced, dynamic settings.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
As part of the Data Infrastructure group, you'll help build our company's data platform for our growing stack of products, customers, and microservices.
We ingest our data from our operational DBs, telematics devices, and more, working with several data types (both structured and unstructured). Our challenge is to build tools and infrastructure that empower other teams, leveraging data-mesh concepts.
In this role you'll
Help build our companys data platform, designing and implementing data solutions for all application requirements in a distributed microservices environment
Build data-platform ingestion layers using streaming ETLs and Change Data Capture
Implement pipelines and scheduling infrastructures
Ensure compliance, data-quality monitoring, and data governance on our company's data platform
Implement large-scale batch and streaming pipelines with data processing frameworks
Collaborate with other Data Engineers, Developers, BI Engineers, ML Engineers, Data Scientists, Analysts and Product managers
Share knowledge with other team members and promote engineering standards.
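Change Data Capture, mentioned above, ultimately yields a stream of row-level change events that the platform replays against a target table. A minimal apply loop; the event shape is hypothetical, and real CDC streams come from tools such as Debezium:

```python
def apply_changes(table, events):
    """Replay insert/update/delete change events onto a target
    table represented as a dict keyed by primary key."""
    for op, key, row in events:
        if op in ("insert", "update"):
            table[key] = row
        elif op == "delete":
            table.pop(key, None)
    return table

events = [
    ("insert", 1, {"name": "Ada"}),
    ("insert", 2, {"name": "Lin"}),
    ("update", 1, {"name": "Ada L."}),
    ("delete", 2, None),
]
snapshot = apply_changes({}, events)  # the table as of the last event
```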
Requirements:
5+ years of prior experience as a data engineer or data infra engineer
B.S. in Computer Science or equivalent field of study
Knowledge of databases (SQL, NoSQL)
Proven success in building large-scale data infrastructures such as Change Data Capture, and leveraging open source solutions such as Airflow & DBT, building large-scale streaming pipelines, and building customer data platforms
Experience with Python, Pulumi/Terraform, Apache Spark, Snowflake, AWS, K8s, Kafka
Ability to work in an office environment a minimum of 3 days a week
Enthusiasm about learning and adapting to the exciting world of AI; a commitment to exploring this field is a fundamental part of our culture.
This position is open to all candidates.
 