Data Engineer - Temporary Position

This position was marked by the employer as no longer relevant.
Posted: 04/02/2026
Location: Jerusalem
Job Type: Full Time
We're looking for an experienced Senior Data Engineer to join our growing data team. As part of our data team, you'll be at the forefront of crafting a groundbreaking solution that leverages cutting-edge technology to combat fraud. The ideal candidate has a strong background in designing and implementing large-scale data solutions, with the potential to grow into a leadership role. This position requires a deep understanding of modern data architectures and cloud technologies, and the ability to drive technical initiatives that align with business objectives.
Our ultimate goal is to equip our clients with resilient safeguards against chargebacks, empowering them to protect their revenue and optimize their profitability. Join us on this thrilling mission to redefine the battle against fraud.
Your Arena
Design, develop, and maintain scalable, robust data pipelines and ETL processes
Architect and implement complex data models across various storage solutions
Collaborate with R&D teams, data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality solutions
Ensure data quality, consistency, security, and compliance across all data systems
Play a key role in defining and implementing data strategies that drive business value
Contribute to the continuous improvement of our data architecture and processes
Champion and implement data engineering best practices across the R&D organization, serving as a technical expert and go-to resource for data-related questions and challenges
Participate in and sometimes lead code reviews to maintain high coding standards
Troubleshoot and resolve complex data-related issues in production environments
Evaluate and recommend new technologies and methodologies to improve our data infrastructure.
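The pipeline responsibilities above follow a classic extract-transform-load shape. As a rough, stdlib-only sketch of that pattern (all field names and the in-memory "sink" are hypothetical, not from the posting):

```python
# Minimal ETL sketch: extract raw records, transform/validate them, and
# "load" into an in-memory store. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Transaction:
    tx_id: str
    amount_cents: int
    currency: str

def extract(raw_rows):
    """Yield raw dict rows from an upstream source (stubbed here)."""
    yield from raw_rows

def transform(rows):
    """Validate and normalize rows, dropping malformed ones."""
    for row in rows:
        try:
            yield Transaction(
                tx_id=str(row["tx_id"]),
                amount_cents=int(round(float(row["amount"]) * 100)),
                currency=str(row["currency"]).upper(),
            )
        except (KeyError, ValueError):
            continue  # in production: route to a dead-letter queue instead

def load(transactions, sink):
    """Append validated records to the target store."""
    sink.extend(transactions)
    return len(sink)

raw = [
    {"tx_id": 1, "amount": "19.99", "currency": "usd"},
    {"tx_id": 2, "amount": "oops", "currency": "usd"},  # malformed, dropped
]
store = []
load(transform(extract(raw)), store)
```

A real pipeline would swap the stubs for actual sources and sinks, but the separation of stages is the part that scales.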
Requirements:
5+ years of experience in data engineering, with strong proficiency in Python and software engineering principles - Must
Extensive experience with graph databases - Must
Extensive experience with AWS, GCP, or Azure and cloud-native architectures - Must
Deep knowledge of both relational (e.g., PostgreSQL) and NoSQL databases - Must
Experience designing and implementing data warehouses and data lakes - Must
Strong understanding of data modeling techniques - Must
Expertise in data manipulation libraries (e.g., Pandas) and big data processing frameworks - Must
Experience with data validation tools such as Pydantic & Great Expectations - Must
Proficiency in writing and maintaining unit tests (e.g., Pytest) and integration tests - Must
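The validation and testing requirements above share one pattern: typed models that reject bad records at the data boundary, with unit tests covering both the happy path and the failure modes. A stdlib-only sketch of that pattern (Pydantic and Pytest would formalize it; the `Order` model here is hypothetical):

```python
# Stdlib-only sketch of the record-validation pattern that tools like
# Pydantic formalize: a typed model that rejects bad data on construction.
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    order_id: str
    amount: float

    def __post_init__(self):
        # Validate at the boundary so bad records never enter the pipeline.
        if not self.order_id:
            raise ValueError("order_id must be non-empty")
        if self.amount <= 0:
            raise ValueError("amount must be positive")

# Pytest-style unit test: assert the happy path and the failure mode.
def test_order_validation():
    assert Order("A-1", 10.0).amount == 10.0
    try:
        Order("", 10.0)
    except ValueError:
        pass
    else:
        raise AssertionError("empty order_id should be rejected")
```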
Nice-to-Haves
Apache Iceberg - Experience building, managing and maintaining Iceberg lakehouse architecture with S3 storage and AWS Glue catalog - Strong Advantage
Apache Spark - Proficiency in optimizing Spark jobs, understanding partitioning strategies, and leveraging core framework capabilities for large-scale data processing - Strong Advantage
Modern data stack tools (dbt, DuckDB) and data orchestration tools (e.g., Dagster, Apache Airflow, Prefect) - Advantage
Designing and developing backend systems, including RESTful API design and implementation, microservices architecture, and event-driven systems (e.g., RabbitMQ, Apache Kafka) - Advantage
Containerization technologies (Docker, Kubernetes) and IaC (e.g., Terraform) - Advantage
Stream processing technologies (e.g., Apache Kafka, Apache Flink) - Advantage
Understanding of compliance requirements (e.g., GDPR, CCPA) - Advantage
Experience mentoring junior engineers or leading small project teams
Excellent communication skills with the ability to explain complex technical concepts to various audiences
Demonstrated ability to work independently and lead technical initiatives
Relevant certifications in cloud platforms or data technologies.
This position is open to all candidates.
 
Job ID: 8531324
Posted: 26/02/2026
Confidential company
Location: Jerusalem
Job Type: Full Time
We're looking for a Data Engineer with 4+ years of experience to join our Data Engineering team and help us build and scale our production-grade data platform. You'll work on high-performance systems built on self-hosted ClickHouse, optimize complex data pipelines, and collaborate closely with Product, Analytics, and Infrastructure teams to deliver reliable, fast, and scalable data solutions.

This is a hands-on technical role where you'll have a significant impact on how we ingest, model, store, and serve data that powers our analytics and AI-driven products.
You'll play a key role in shaping the direction of our data platform and have meaningful ownership over critical components of our architecture.

What You'll Do:
Data Modeling & Architecture
Design and evolve data models that reflect business logic and support analytical use cases
Collaborate with the BI and Analytics teams to understand data requirements and translate them into efficient schemas
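Translating business requirements into an efficient analytical schema usually means fact rows keyed into dimension tables. A toy in-memory illustration of that shape (all table and field names are invented):

```python
# Toy dimensional model: fact rows referencing a dimension table by key,
# then aggregated by a dimension attribute -- the shape analytical
# schemas typically take.
customers = {  # dimension: customer_id -> descriptive attributes
    1: {"name": "Acme", "segment": "enterprise"},
    2: {"name": "Beta", "segment": "smb"},
}
events = [  # fact rows: one row per event, keyed to the dimension
    {"customer_id": 1, "revenue": 100.0},
    {"customer_id": 2, "revenue": 40.0},
    {"customer_id": 1, "revenue": 60.0},
]

def revenue_by_segment(facts, dim):
    """Aggregate fact rows by a dimension attribute (a star-schema join)."""
    totals = {}
    for row in facts:
        seg = dim[row["customer_id"]]["segment"]
        totals[seg] = totals.get(seg, 0.0) + row["revenue"]
    return totals
```

In a columnar store like ClickHouse the same join/aggregate is expressed in SQL, but the modeling question -- which attributes live on the fact versus the dimension -- is identical.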
Performance Optimization
Optimize ClickHouse schemas, partitioning strategies, indexing, and compression
Profile and tune slow queries to improve performance and reduce costs
Implement systems that ensure data quality, consistency, and operational efficiency (e.g., deduplication, validation, anomaly detection)
Monitor pipeline health, data freshness, and query performance with appropriate alerting mechanisms
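Two of the quality mechanisms named above, deduplication and anomaly detection, can be sketched in a few lines (the key name and z-score threshold are illustrative, not from the posting):

```python
# Sketch of two data-quality checks: key-based deduplication (so
# re-ingesting a batch is idempotent) and a simple z-score anomaly flag.
import statistics

def deduplicate(rows, key="event_id"):
    """Keep the first row seen for each key value."""
    seen, out = set(), []
    for row in rows:
        if row[key] not in seen:
            seen.add(row[key])
            out.append(row)
    return out

def anomalies(values, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # constant series: nothing stands out
    return [v for v in values if abs(v - mean) / stdev > threshold]
```

Production systems would push these checks into the database or a monitoring layer, but the logic is the same.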
SQL Compiler Development
Develop and maintain the SQL Compiler layer that translates high-level queries into optimized ClickHouse execution plans
Implement query optimization and rewriting strategies to improve performance
Debug and resolve compiler issues to ensure accurate and efficient query translation
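As a flavor of what a query-rewriting pass might do: ClickHouse supports a PREWHERE clause that evaluates a filter before the remaining columns are read. The toy string rewrite below only gestures at the idea; a real compiler layer would operate on a parsed AST, and ClickHouse itself can move conditions to PREWHERE automatically:

```python
# Toy query-rewrite pass of the kind a SQL-compiler layer applies: move a
# WHERE filter to ClickHouse's PREWHERE clause. Real compilers rewrite a
# parsed AST; this regex pass is an illustration only.
import re

def promote_to_prewhere(sql: str) -> str:
    """Rewrite the first `WHERE` to `PREWHERE` when none is present."""
    if re.search(r"\bPREWHERE\b", sql, re.IGNORECASE):
        return sql  # already has a PREWHERE; leave the query untouched
    return re.sub(r"\bWHERE\b", "PREWHERE", sql, count=1, flags=re.IGNORECASE)
```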

Data Pipeline Development & Collaboration
Review and advise the Integration team on pipeline architecture, performance, and best practices.
Provide guidance on data modeling, schema design, and optimization for new data sources.
Troubleshoot and maintain existing pipelines when issues arise or optimization is needed
Ensure data freshness, reliability, and quality across all ingestion pipelines.
Collaboration & Support
Work closely with the Integration team to ensure smooth data ingestion from new sources.
Partner with Infrastructure to support high availability and disaster recovery
Support other teams across the company in accessing and using data effectively.
Requirements:
Excellent communication and collaboration skills
English at a high level, written and spoken required
Ability to work from our Jerusalem office (at the Central Bus Station, next to the train station) twice a week (Monday and Wednesday) is required
Strong attention to detail, ownership mentality, and ability to work independently
Quick learner who can dive into new codebases, technologies, and systems independently
Hands-on mentality - not afraid to roll up your sleeves, dig into unfamiliar code, and work across the stack (including backend when needed)
4+ years of experience as a Data Engineer
Strong problem-solving skills for complex data challenges at scale - ability to debug performance issues, data inconsistencies, and system bottlenecks in high-volume environments
Experience with data modeling and schema design for analytical workloads
Strong proficiency in SQL and experience with complex analytical queries
Hands-on experience building and maintaining data pipelines (ETL/ELT)
Ability to troubleshoot and optimize systems handling large data volumes (millions+ rows, complex queries, high throughput)
Knowledge of query optimization techniques and execution planning
Familiarity with columnar databases (ClickHouse, BigQuery, Redshift, Snowflake, or similar) - a big plus
This position is open to all candidates.
 
Job ID: 8563430
Posted: 04/02/2026
Confidential company
Location: Jerusalem
Job Type: Full Time
We are seeking an experienced Senior Data Platform Engineer to design and scale the robust, cost-efficient infrastructure powering our groundbreaking fraud prevention solution. In this role, you will architect distributed systems and cloud-native technologies to safeguard our clients' revenue while driving technical initiatives that align with business objectives and operational efficiency.
Our ultimate goal is to equip our clients with resilient safeguards against chargebacks, empowering them to protect their revenue and optimize their profitability. Join us on this thrilling mission to redefine the battle against fraud.
Your Arena
Infrastructure & FinOps: Design scalable, robust backend services while owning cloud cost management to ensure maximum resource efficiency.
High-Performance Engineering: Architect distributed systems and real-time pipelines capable of processing millions of daily transactions.
Operational Excellence: Champion Infrastructure-as-Code (IaC), security, and observability best practices across the R&D organization.
Leadership: Lead technical initiatives, mentor engineers, and drive cross-functional collaboration to solve complex infrastructure challenges.
Requirements:
Experience: 5+ years of experience in data platform engineering, backend engineering, or infrastructure engineering.
Language Proficiency: Strong command of Python and software engineering principles.
Cloud Native: Extensive experience with AWS, GCP, or Azure and cloud-native architectures.
Databases: Deep knowledge of both relational (e.g., PostgreSQL) and NoSQL databases, including performance optimization, cost tuning, and scaling strategies.
Architecture: Strong experience designing and implementing RESTful APIs, microservices architecture, and event-driven systems.
Containerization & IaC: Experience with containerization technologies (Docker, Kubernetes) and Infrastructure-as-Code (e.g., Terraform, CloudFormation).
System Design: Strong understanding of distributed systems principles, concurrency, and scalability patterns.
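Event-driven systems, listed in the architecture requirement above, decouple producers from consumers through named topics. A minimal in-process sketch of the pattern (a production system would use a broker such as Kafka or RabbitMQ; the topic and payload names are invented):

```python
# Minimal in-process publish/subscribe sketch of the event-driven pattern:
# producers publish to a topic without knowing who consumes it.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of handlers

    def subscribe(self, topic, handler):
        """Register a callable to receive every payload on `topic`."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        """Deliver `payload` to all handlers subscribed to `topic`."""
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
received = []
bus.subscribe("tx.flagged", received.append)
bus.publish("tx.flagged", {"tx_id": "A-1", "score": 0.97})
```

Swapping the in-memory bus for a broker changes delivery guarantees and durability, not the producer/consumer contract.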
Nice-to-Haves
Strong Advantage: Apache Iceberg (Lakehouse/S3/Glue), Apache Spark (Optimization), Message Queues (Kafka/Kinesis), Graph Databases (Experience with schema design, cluster setup, and ongoing management of engines like Amazon Neptune or Neo4j).
Tech Stack: Orchestration (Temporal/Dagster/Airflow), Modern Data Stack (dbt/DuckDB), Streaming (Flink/Kafka Streams), Observability (Datadog/Grafana).
Skills: FinOps (Cost Explorer/Spot instances), CI/CD & DevOps, Data Governance (GDPR), Pydantic, and Mentorship/Leadership experience.
This position is open to all candidates.
 
Job ID: 8531320