Senior BI Data Engineer
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Senior BI Data Engineer to join our BI team and take end-to-end ownership of high-impact analytics foundations. This role sits at the core of how our company measures success, makes decisions, and scales - turning raw data into trusted, business-critical insights used across Product, GTM, and Finance.
You'll design and evolve data models, pipelines, and the BI layer, work closely with Data Science and business stakeholders, and help raise the bar for analytics engineering across the company.
Hands-on experience using GenAI to improve analytics engineering workflows, automate development processes, and increase delivery speed is a must for this role.
This role is based in Tel Aviv. We work in a hybrid model, with 3 days a week in the office.
This might be for you if:
You enjoy owning data foundations end to end - from raw data to semantic layers
You like turning ambiguous business questions into clear, governed metrics
You care about data quality, performance, and trust at scale
You enjoy mentoring, setting standards, and leading by example
You actively leverage AI tools to improve development speed and analytical accuracy
Requirements:
5+ years of experience in BI / Data Engineering roles with ownership of scalable data platforms
Deep experience with modern data stacks (Snowflake or Databricks, dbt)
Advanced SQL and Python skills, including data quality, CI/CD, and observability
Strong understanding of dimensional modeling, data warehousing, and semantic layers
Experience with orchestration tools (Airflow) and large-scale data processing
Proven experience using GenAI tools as part of your day-to-day development workflow
A strong builder mindset, business orientation, and ability to lead cross-functional initiatives
Nice to have:
Experience with streaming technologies (Kafka, Spark).
This position is open to all candidates.
 
Job ID: 8595429
Posted 1 day ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Data Engineer to help build and scale our analytics data infrastructure. In this role, you will work closely with analysts and business stakeholders to design reliable data models and support the development of a centralized semantic layer used across the company.

You will play a key role in improving the structure, reliability, and usability of our data stack. This includes building and maintaining dbt models, supporting data pipelines, and ensuring analysts have access to clean, well-documented, and consistent data.

This role is ideal for someone who enjoys working at the intersection of data engineering and analytics - translating business needs into scalable data models and enabling teams to move faster with trusted data.

Responsibilities

Design and implement data models that support analytics across key business domains such as GTM, CX, and Finance
Build and maintain transformation workflows using dbt
Work closely with analysts to translate business questions into scalable and reusable data models
Help define and implement a structured semantic layer that enables consistent metrics across the company
Improve the reliability and clarity of the analytics data stack by centralizing logic into well-designed data models
Support the ingestion and transformation of data from various sources using tools such as Fivetran and Airbyte
Contribute to improving data quality, monitoring, and documentation practices
Help establish best practices for analytics modeling and data usage across teams
Actively leverage AI tools (e.g. Cursor, LLM-based assistants) to improve development speed, data modeling, and data workflows
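The semantic-layer responsibility above has a simple core idea: define each metric once, in one governed place, and have every consumer compute it through that definition instead of re-deriving it per dashboard. A minimal sketch in Python - the data, metric names, and registry shape are all hypothetical illustrations, not this company's actual stack:

```python
# Minimal sketch of a semantic layer: every metric is defined exactly once,
# so any dashboard or analyst query computes it the same way.
# All names here (deals, "win_rate", METRICS) are invented for the example.

deals = [
    {"stage": "won", "amount": 120},
    {"stage": "lost", "amount": 80},
    {"stage": "won", "amount": 200},
]

METRICS = {
    # Centralized definitions replace per-dashboard copies of the same logic.
    "total_pipeline": lambda rows: sum(r["amount"] for r in rows),
    "win_rate": lambda rows: (
        sum(r["stage"] == "won" for r in rows) / len(rows) if rows else 0.0
    ),
}

def compute(metric: str, rows: list) -> float:
    """Resolve a metric by name from the single governed registry."""
    return METRICS[metric](rows)

print(compute("total_pipeline", deals))  # 400
print(compute("win_rate", deals))        # ≈ 0.667
```

In practice this registry role is usually played by a dedicated semantic-layer tool or dbt metric definitions rather than hand-rolled Python, but the consistency property is the same.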
Requirements:
2-4 years of experience in BI/data engineering, analytics engineering, or a similar role.
Strong SQL skills and experience working with modern data warehouses.
Experience building and maintaining data models for analytics.
Familiarity with modern data stack tools such as dbt, Snowflake/BigQuery, Fivetran/Rivery, or similar.
Experience collaborating with analysts or BI teams.
Familiarity with Python for data-related tasks (scripting, automation, or tooling).
Hands-on experience using AI tools (e.g. Cursor, LLMs) as part of day-to-day development workflows.
Strong problem-solving skills and the ability to work in evolving data environments.
Clear communicator who can work effectively with both technical and non-technical stakeholders.
This position is open to all candidates.
 
Job ID: 8595374
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an Experienced Data Engineer to join our marketing team and take end-to-end ownership of our data platform and production data pipelines. In this role, you will be responsible for building robust, scalable, and observable data systems that power analytics, reporting, and downstream business use cases. You will work deeply hands-on with data infrastructure, modeling, and orchestration, and act as a key technical partner to the Marketing, Sales, Product, Business, and Finance teams.
This role suits someone who enjoys working close to the metal, designing systems that scale, and solving ambiguous data problems in a dynamic startup environment. You will play a critical role in shaping how data flows through the company, setting engineering standards, and ensuring data is trustworthy, performant, and ready for growth.
What You'll Do:
Design, build, and maintain scalable, reliable data pipelines and data warehouse architectures to support analytics and business intelligence needs.
Own the end-to-end ETL/ELT processes - ingesting data from internal and external sources, transforming it, and making it analytics-ready.
Model and optimize data structures (fact tables, dimensions, semantic layers) to support performant querying and reporting.
Ensure high standards of data quality, integrity, observability, and reliability across all data assets.
Partner closely with Analytics, Product, Marketing, and Finance teams to understand data requirements and deliver robust data solutions.
Implement monitoring, alerting, and testing frameworks to proactively identify data issues.
Optimize warehouse performance and cost efficiency (query optimization, partitioning, clustering, etc.).
Identify gaps in data collection and work with engineering teams to improve instrumentation and data availability.
Support experimentation and analytics use cases by enabling clean, trustworthy datasets for A/B testing and analysis.
Document data models, pipelines, and best practices to support scale and knowledge sharing.
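The fact-table/dimension modeling mentioned above can be illustrated with a tiny star schema: a narrow fact table of events joined to a dimension table for reporting. This is a hedged sketch using an in-memory SQLite database; the table and column names are invented for the example:

```python
# Star-schema sketch: fct_orders holds measurable events, dim_customer holds
# descriptive attributes; BI queries aggregate facts grouped by dimensions.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE fct_orders (order_id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO dim_customer VALUES (1, 'EMEA'), (2, 'AMER');
    INSERT INTO fct_orders VALUES (10, 1, 50.0), (11, 1, 25.0), (12, 2, 40.0);
""")

# A typical reporting query: aggregate the fact table by a dimension attribute.
rows = con.execute("""
    SELECT d.region, SUM(f.amount) AS revenue
    FROM fct_orders f
    JOIN dim_customer d USING (customer_id)
    GROUP BY d.region
    ORDER BY d.region
""").fetchall()
print(rows)  # [('AMER', 40.0), ('EMEA', 75.0)]
```

The same shape scales up in a warehouse like Snowflake or BigQuery; keeping facts narrow and pushing descriptive attributes into dimensions is what makes such queries performant.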
Requirements:
Bachelor's or Master's degree in Computer Science, Data Engineering, Software Engineering, or a related technical field.
3-5 years of hands-on experience as a Data Engineer, preferably in a SaaS or technology-driven environment.
Strong experience designing and maintaining data warehouses (e.g., Snowflake, BigQuery, Redshift).
Proven expertise with ETL/ELT tools and frameworks (e.g., Airflow, dbt, Talend, SSIS, Informatica, or similar).
Advanced SQL skills and solid proficiency in Python (or similar languages) for data processing and orchestration.
Strong understanding of data modeling, warehousing best practices, and analytics engineering concepts.
Experience integrating data from business systems such as Salesforce, HubSpot, or other SaaS platforms.
Familiarity with SaaS metrics and business concepts (ARR, churn, LTV, CAC) - from a data modeling perspective.
Experience supporting BI tools and analytics consumers (Tableau, Looker, Power BI, etc.).
Strong problem-solving skills, attention to detail, and a passion for building reliable data foundations.
Excellent communication skills and the ability to collaborate across technical and non-technical teams.
This position is open to all candidates.
 
Job ID: 8563348
Posted 18 hours ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Engineer to take ownership of our evolving data platform.

Our data environment is entering a significant growth phase. We are strengthening our Redshift-based warehouse, expanding our transformation capabilities with dbt, and investing in modern engineering standards across the stack.

In parallel, we are building a dedicated application layer for delivering data products. This role is responsible for designing and owning the data foundations that support it.

AI-assisted development is not a side experiment. It is a core part of how we engineer. We expect this role to actively leverage advanced AI development tools as part of daily work, accelerating design, implementation, validation and documentation while maintaining strong architectural judgment and production-level quality.

This position requires architectural thinking, long-term platform vision and the ability to lead complex initiatives from design through production in an AI-augmented engineering environment.

What You Will Do:

Own the architecture and evolution of our cloud-based Data Warehouse.

Lead complex data initiatives end to end, from requirements definition and technical design through implementation, deployment and post-production optimization.

Apply AI-assisted development workflows to improve engineering velocity, code quality and system reliability.

Design and evolve transformation standards and modeling practices using dbt as a strategic layer within the team.

Architect the data foundations that power our data-driven applications.

Translate business needs into structured, production-grade data solutions.

Drive technical standards, consistency and engineering discipline across the data stack.

Take full accountability for delivery, stability and long-term scalability of the systems you build.
Requirements:
8+ years of experience in Data Engineering with strong focus on modern cloud-based DWH architecture.

Proven experience leading projects end to end in production data environments.

Deep production experience with Redshift, dbt and orchestration frameworks such as Airflow.

Strong SQL expertise and solid understanding of dimensional modeling and transformation best practices.

Practical, daily experience using AI development tools as part of your engineering workflow.

Ability to critically evaluate AI-generated output and maintain high architectural and production standards.

Strong system design capabilities and full ownership mindset.

Experience supporting data products or application-facing data environments.

Ability to balance speed with quality and long-term architectural thinking.
This position is open to all candidates.
 
Job ID: 8596952
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an experienced Data Engineer to join our Data Warehouse team in TLV.
You will play a pivotal role in the Data Platform organization, leading the design, development, and maintenance of our data warehouse. In your day-to-day, you'll work on data models and backend BI solutions that empower stakeholders across the company and contribute to informed decision-making processes, all while leveraging your extensive experience in business intelligence.
This is an excellent opportunity to be part of establishing our state-of-the-art data stack, implementing cutting-edge technologies in a cloud environment.
We believe three things matter for every role: drive to push through challenges, efficiency that keeps standards high while moving fast, and adaptability that lets you pivot with data and AI insights. These aren't buzzwords, they're how we actually work.
Our AI-first approach isn't just a tagline either. We're building the future of insurance with AI at the center, and we need people who are genuinely excited to learn and grow alongside these tools.
In this role you'll:
Lead the design and development of scalable and efficient data warehouse and BI solutions that align with organizational goals and requirements
Utilize advanced data modeling techniques to create robust data structures supporting reporting and analytics needs
Implement ETL/ELT processes to assist in the extraction, transformation, and loading of data from various sources into the semantic layer
Develop processes to enforce schema validation, detect anomalies, and monitor data completeness and freshness
Identify and address performance bottlenecks within our data warehouse, optimize queries and processes, and enhance data retrieval efficiency
Implement best practices for data warehouse and database performance tuning
Conduct thorough testing of data applications and implement robust validation processes
Collaborate with Data Infra Engineers, Developers, ML Platform Engineers, Data Scientists, Analysts, and Product Managers
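The completeness and freshness monitoring in the list above usually boils down to two small checks: is the newest row recent enough, and are required columns populated? A minimal Python sketch - the thresholds, function names, and row shape are assumptions for illustration, not this company's tooling:

```python
# Two basic data-quality checks often run after each pipeline load.
from datetime import datetime, timedelta, timezone

def check_freshness(latest_loaded_at: datetime, max_lag: timedelta) -> bool:
    """True if the newest row arrived within the allowed lag."""
    return datetime.now(timezone.utc) - latest_loaded_at <= max_lag

def check_completeness(rows: list, required: str) -> float:
    """Fraction of rows where the required column is populated."""
    if not rows:
        return 0.0
    return sum(r.get(required) is not None for r in rows) / len(rows)

rows = [{"user_id": 1}, {"user_id": None}, {"user_id": 3}]
fresh = check_freshness(datetime.now(timezone.utc) - timedelta(minutes=5),
                        max_lag=timedelta(hours=1))
print(fresh)                                # True
print(check_completeness(rows, "user_id"))  # ≈ 0.667
```

In production, checks like these typically run as scheduled tests (e.g. dbt tests or custom monitors) that alert when a threshold is breached rather than printing results.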
Requirements:
3+ years of experience as a BI Engineer or Data Engineer
Proficiency in data modeling, ELT development, and DWH methodologies
SQL expertise and experience working with Snowflake or similar technologies
Prior experience working with dbt
Experience with Python and software development, an advantage
Excellent communication and collaboration skills
Ready to work in an office environment most days of the week
Enthusiasm about learning and adapting to the exciting world of AI - a commitment to exploring this field is a fundamental part of our culture.
This position is open to all candidates.
 
Job ID: 8594839
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Data Engineer II - GenAI
20718
Leadership/Team Quote:
This opening is for the Content Intelligence team within the Marketplace AI department.
The Content Intelligence team is at the forefront of Generative AI innovation, driving solutions for travel-related chatbots, text generation and summarization applications, Q&A systems, and free-text search. Beyond this, the team is building a cutting-edge platform that processes millions of images and textual inputs daily, enriching them with ML capabilities. These enriched datasets power downstream applications, helping personalize the customer experience - for example, selecting and displaying the most relevant images and reviews as customers plan and book their next vacation.
Role Description:
As a Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects - ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.
You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.
Key Job Responsibilities and Duties:
Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
Dealing with massive textual sources to train GenAI foundation models.
Solving issues with data and data pipelines, prioritizing based on customer impact.
End-to-end ownership of data quality in our core datasets and data pipelines.
Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
Providing tools that improve Data Quality company-wide, specifically for ML scientists.
Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
Acting as an intermediary on data problems, communicating with both technical and non-technical audiences.
Promote and drive impactful and innovative engineering solutions
Technical, behavioral and interpersonal competence advancement via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation
Collaborate with multidisciplinary teams: Collaborate with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions. Provide technical guidance and mentorship to junior team members.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 3 years of experience as a Data Engineer or a similar role, with a consistent record of successfully delivering ML/Data solutions.
You have built production data pipelines in the cloud, setting up data-lake and serverless solutions; you have hands-on experience with schema design and data modeling, and have worked with ML scientists and ML engineers to deliver production-level ML solutions.
You have experience designing systems end to end and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.).
Strong programming skills in languages such as Python and Java.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.
Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems.
Experience with Data Warehousing and ETL/ELT pipelines
Experience in data processing for large-scale language models like GPT, BERT, or similar architectures - an advantage.
This position is open to all candidates.
 
Job ID: 8560110
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Software Engineer (Data Platforms) to join the Users & Integrations team within our company's Intelligence Group. This role is built for an experienced engineer who thrives on solving complex backend challenges and scaling data pipelines.
In this role, you will take ownership of crucial user data integrations and architect the sophisticated matching logic that powers our platform, from data ingestion and transformation to delivery. You will work extensively with large-scale data pipelines, translate complex algorithms into high-performance production code, and tackle massive scalability challenges to enhance the data experience for our company's customers.
Where does this role fit in our vision?
Every role at our company is designed with a clear purpose. Here, data is everything; it's at the heart of everything we do. The Intelligence Group is responsible for shaping the experience of hundreds of thousands of users who rely on our data daily.
The Users Team is the engine behind our company's data connectivity, handling massive-scale user data integrations and engineering complex entity-matching logic. By translating millions of data signals and advanced algorithms into high-performance pipelines, we ensure users receive highly accurate, tailored data - optimizing their overall experience while driving the core KPIs of our Intelligence Group.
What will you be responsible for?
Designing, building, and maintaining robust, scalable ETL/ELT data pipelines and integration solutions within our company's Databricks-based environment.
Implementing and optimizing algorithms for data processing and entity resolution with a strong emphasis on delivering high-quality, high-throughput data.
Deploying data infrastructure leveraging technologies like Spark, Kafka, and Airflow to tackle complex data challenges and enhance business operations.
Designing innovative data solutions that support millions of data points, at high performance and massive scale.
Requirements:
What we look for:
3+ years of software engineering experience building scalable backend systems
Experience scaling big data pipelines, complex data integrations, and robust data infrastructure.
Expertise in big data technologies, including Spark (or Databricks), Kafka (or other real-time streaming tools), and workflow orchestrators like Airflow.
Experience using GenAI tools for software development (such as Cursor, Claude Code, Codex, etc.).
A strong builder mindset, with experience turning ideas into working solutions
Algorithmic experience, including developing and optimizing machine learning models and implementing advanced data algorithms.
Experience working with cloud ecosystems, preferably AWS (S3, Glue, EMR, Redshift, Athena) or comparable cloud environments (Azure/GCP).
Expertise in extracting, ingesting, and transforming large datasets efficiently.
A passion for sharing knowledge, fostering a supportive engineering culture, and engaging in collaborative problem-solving with your peers.
Bonus Points:
Hands-on experience working with Vector Databases and embedding techniques, with a focus on search, recommendations, and personalization.
This position is open to all candidates.
 
Job ID: 8595416
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Data Engineer I - GenAI Foundation Models
21679
Leadership/Team Quote:
This opening is for the Content Intelligence team within the Marketplace AI department.
The Content Intelligence team is at the forefront of Generative AI innovation, driving solutions for travel-related chatbots, text generation and summarization applications, Q&A systems, and free-text search. Beyond this, the team is building a cutting-edge platform that processes millions of images and textual inputs daily, enriching them with ML capabilities. These enriched datasets power downstream applications, helping personalize the customer experience - for example, selecting and displaying the most relevant images and reviews as customers plan and book their next vacation.
Role Description:
As a Senior Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects - ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.
You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.
Key Job Responsibilities and Duties:
Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
Dealing with massive textual sources to train GenAI foundation models.
Solving issues with data and data pipelines, prioritizing based on customer impact.
End-to-end ownership of data quality in our core datasets and data pipelines.
Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
Providing tools that improve Data Quality company-wide, specifically for ML scientists.
Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
Acting as an intermediary on data problems, communicating with both technical and non-technical audiences.
Promote and drive impactful and innovative engineering solutions
Technical, behavioral and interpersonal competence advancement via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation
Collaborate with multidisciplinary teams: Collaborate with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions. Provide technical guidance and mentorship to junior team members.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 6 years of experience as a Data Engineer or a similar role, with a consistent record of successfully delivering ML/Data solutions.
You have built production data pipelines in the cloud, setting up data-lake and serverless solutions; you have hands-on experience with schema design and data modeling, and have worked with ML scientists and ML engineers to deliver production-level ML solutions.
You have experience designing systems end to end and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.).
Strong programming skills in languages such as Python and Java.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.
Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems.
Experience with Data Warehousing and ETL/ELT pipelines.
This position is open to all candidates.
 
Job ID: 8560108
Posted 7 days ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a Senior Backend Engineer - Data Platform to join our expanding team and play a crucial role in designing, building, and maintaining robust and scalable data pipelines and infrastructure. In this role, you will directly enable data-driven decision-making and support the development and deployment of AI/ML products that power Health.

You'll collaborate closely with engineering, product, and data science teams to ensure our data systems are high-quality, resilient, and scalable as we grow. As a Senior Backend Engineer on our Data Platform team, you will drive efforts to deliver reliable, efficient, and consistent data services across the organization. You will also help enable the rapid development and deployment of advanced features, insights, and AI-driven capabilities that improve outcomes for clinicians and clients.

Who are you?
You are a seasoned backend or data engineer with experience working on production-grade ML/AI-powered products. You thrive in fast-paced, high-ownership environments and are passionate about building scalable and reliable systems. You understand the unique requirements of delivering AI/ML features in production, and you are comfortable working with modern technologies in the LLM/RAG ecosystem.
You pride yourself on delivering high-quality solutions quickly, without sacrificing design or reliability. You're known for your responsiveness, collaborative spirit, and service-oriented mindset - especially when you're on-call and the stakes are high.
How will you contribute?
Design, implement, and maintain scalable and reliable data pipelines and backend systems supporting both operational and analytical needs, with a focus on ML/AI product enablement.
Ensure data processing is optimized for speed, efficiency, and fault tolerance, enabling seamless integration with AI/ML workflows and reliable performance across all our Health products.
Monitor and improve uptime, reliability, and observability of our data infrastructure and pipelines.
Build and maintain systems to ensure data quality, consistency, and usability across the organization, enabling advanced analytics and AI solutions.
Work closely with product and engineering teams to deliver new features rapidly and with a high standard of technical excellence.
Drive innovation in how we build, measure, and optimize data features, backend services, and AI product integrations.
Participate in on-call rotations with a service-oriented approach and fast responsiveness.
Lead scalability efforts to support increasing data volumes, expanding AI/ML initiatives, and new product launches.
Requirements:
What qualifications and skills will help you to be successful?
At least 5 years of experience with Python in backend or data engineering roles, designing and operating large-scale data pipelines, backend services, and data infrastructure in production environments.
Hands-on experience working on ML/AI-powered products in production, with strong understanding of requirements for integrating data platforms with AI features.
Familiarity with modern LLM (Large Language Model) and RAG (Retrieval-Augmented Generation) technologies, and experience supporting their deployment or integration.
Familiar with or have worked with these technologies (or alternatives):
Data Processing & Streaming: Apache Spark, dbt, Airflow, Airbyte, Kafka
API Development: FastAPI, micro-service architecture, SFTP
Data Storage: Data Lakehouse architectures, Apache Iceberg, Vector Databases, RDS
ML/AI: ML/LLM libraries and frameworks (such as Gemini, Hugging Face, etc.)
Cloud Infrastructure: AWS stack (S3, Firehose, Lambda, Athena, etc.), Kubernetes (K8s)
Demonstrated ability to optimize performance and ensure high availability, scalability, and reliability of backend/data systems.
Strong foundation in best practices for data quality, governance, security, and observability.
This position is open to all candidates.
 
Job ID: 8588707
Posted 22/02/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Engineer.
As a Senior Data Engineer, you will play a key role in shaping and driving our analytics data pipelines and solutions to empower business insights and decisions. Collaborating with a variety of stakeholders, you will design, develop, and optimize scalable, high-performance data analytics infrastructures using modern tools and technologies. Your work will ensure data is accurate, timely, and actionable for critical decision-making.
Key Responsibilities:
Lead the design, development, and maintenance of robust data pipelines and ETL processes, handling diverse structured and unstructured data sources.
Collaborate with data analysts, data scientists, product engineers and product managers to deliver impactful data solutions.
Architect and maintain the infrastructure for ingesting, processing, and managing data in the analytics data warehouse.
Develop and optimize analytics-oriented data models to support business decision-making.
Champion data quality, consistency, and governance across the analytics layer.
Requirements:
5+ years of experience as a Data Engineer or in a similar role.
Expertise in SQL and proficiency in Python for data engineering tasks.
Proven experience designing and implementing analytics-focused data models and warehouses.
Hands-on experience with data pipelines and ETL/ELT frameworks (e.g., Airflow, Luigi, AWS Glue, dbt).
Strong experience with cloud data services (e.g., AWS, GCP, Azure).
A deep passion for data and a strong analytical mindset with attention to detail.
Bonus points:
Strong understanding of business metrics and how to translate data into actionable insights
Experience with data visualization tools (e.g., Tableau, Power BI, Looker)
Familiarity with data governance and data quality best practices
Excellent communication skills to work with cross-functional teams including data analysts, data scientists, and product managers
This position is open to all candidates.
 
8555686
21/03/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Staff Software Data Engineer to join our Engineering team and lead the evolution of our next-generation data platform. In this high-impact role, you will operate as a player-coach: you will be the technical visionary responsible for designing the ecosystem, while remaining deeply hands-on to implement scalable, secure, and intelligent solutions that power everything from operational reporting to advanced GenAI applications.

You will bridge the gap between complex business requirements and technical execution, advocating for a data-first culture. This role offers a clear growth path: while it currently starts as an individual contributor position, it has the potential to evolve into a leadership role.

Why join us?

We are the AI-powered platform for finance automation, elevating how finance teams operate in the global economy. We empower our customers to scale faster and smarter by removing the complexities of doing global business and accelerating their finance operations efficiency. Our platform provides a comprehensive suite of finance automation solutions designed for mid-market businesses across accounts payable, global payouts, procurement, employee expenses, corporate cards, supplier management, tax compliance, and treasury. We partner with leading financial institutions such as Citi, Wells Fargo, J.P. Morgan, and Visa, enabling over 5,000 global companies to efficiently and securely pay millions of suppliers and payees across 200+ countries and territories, in 120 currencies.

At our company, we pride ourselves on our collaborative culture, the quality of our product, and the capabilities of our people. We are passionate about the work we do and keen to get the job done. We offer competitive benefits, a flexible workplace, career coaching, and an environment where diverse individuals can thrive and make an impact. Our culture ensures everyone checks their egos at the door and stands ready to reach for success together.

Founded in Israel in 2010, our company is a global business headquartered in the San Francisco Bay Area (Foster City) with offices in Tel Aviv, Plano, Toronto, Vancouver, London, Amsterdam, Tbilisi and Medellin.
About the Role

Architecture & Hands-on Execution: Design and actively build a comprehensive data platform. You will not just oversee infrastructure; you will write the core code and build tools that support diverse workloads, from operational reporting to complex analytical queries.
Strategic & Technical Delivery: Partner with product managers to translate business objectives into technical strategies, then lead the engineering effort to deliver them.
Technology Evaluation: Continuously evaluate, prototype, and select best-in-class technologies to future-proof our data stack.
Technical Leadership & Mentorship: Act as a primary advocate for platform adoption. You will foster a community of practice around data engineering, mentoring senior and mid-level engineers to elevate the team's technical bar.
Governance & Quality: Implement and automate robust frameworks for Data Discovery, Quality, and Governance, ensuring solutions are trustworthy and compliant with financial regulations.
Requirements:
We are looking for a highly motivated Staff Engineer with a strong sense of ownership, eager to tackle technical challenges in a high-throughput data processing environment.

Experience: 8+ years of hands-on experience in Data Engineering and Architecture, with a track record of building and shipping platforms at scale.
Experience with modern big data platforms such as Snowflake, Databricks, or similar technologies.
Hands-on data infrastructure experience (orchestration, scalability, reliability, and cloud architecture).
Data Movement & Integration: Deep understanding of data movement strategies, including high-frequency batching, CDC, and real-time event streaming.
This position is open to all candidates.
 
8585918