Posted 11 hours ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Senior BI Data Engineer to join our BI team and take end-to-end ownership of high-impact analytics foundations. This role sits at the core of how the company measures success, makes decisions, and scales - turning raw data into trusted, business-critical insights used across Product, GTM, and Finance.
You'll design and evolve data models, pipelines, and the BI layer, work closely with Data Science and business stakeholders, and help raise the bar for analytics engineering across the company.
Hands-on experience using GenAI to improve analytics engineering workflows, automate development processes, and increase delivery speed is a must for this role.
This role is based in Tel Aviv. We work in a hybrid model, with 3 days a week in the office.
This might be for you if:
You enjoy owning data foundations end to end - from raw data to semantic layers
You like turning ambiguous business questions into clear, governed metrics
You care about data quality, performance, and trust at scale
You enjoy mentoring, setting standards, and leading by example
You actively leverage AI tools to improve development speed and analytical accuracy
Requirements:
5+ years of experience in BI / Data Engineering roles with ownership of scalable data platforms
Deep experience with modern data stacks (Snowflake or Databricks, dbt)
Advanced SQL and Python skills, including data quality, CI/CD, and observability
Strong understanding of dimensional modeling, data warehousing, and semantic layers
Experience with orchestration tools (Airflow) and large-scale data processing
Proven experience using GenAI tools as part of your day-to-day development workflow
A strong builder mindset, business orientation, and ability to lead cross-functional initiatives
Nice to have:
Experience with streaming technologies (Kafka, Spark).
This position is open to all candidates.
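To make the "data quality, CI/CD, and observability" requirement above concrete, here is a minimal sketch of the kind of batch-level quality check such a role involves. All function and field names here are hypothetical illustrations, not part of the posting; only the standard library is used.

```python
from datetime import datetime, timedelta, timezone

def check_batch(rows, required_fields, max_age_hours=24):
    """Return a summary of data-quality issues found in a batch of row dicts.

    Two checks: required fields must be non-null, and the newest row's
    'updated_at' timestamp must be fresher than the allowed age.
    """
    issues = {"null_violations": 0, "stale": False}
    newest = None
    for row in rows:
        # Count null violations across all required fields.
        for field in required_fields:
            if row.get(field) is None:
                issues["null_violations"] += 1
        # Track the most recent timestamp seen in the batch.
        ts = row.get("updated_at")
        if ts is not None and (newest is None or ts > newest):
            newest = ts
    cutoff = datetime.now(timezone.utc) - timedelta(hours=max_age_hours)
    issues["stale"] = newest is None or newest < cutoff
    return issues
```

In practice checks like this would run inside a CI/CD or orchestration step and feed alerting, but the core logic is this simple.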
 
Job ID: 8645770
Posted 11 hours ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Senior Data Analyst to join our Data Group and play a central role in how data drives our product. This role sits at the intersection of data, product, and business, turning complex datasets into insights that shape decisions and impact hundreds of thousands of users.
You'll own analytics end to end for a core dataset, work closely with Product, R&D, and Data Engineering, and help define how data quality, enrichment, and performance are measured and improved.
Hands-on experience using GenAI to accelerate analysis, automate workflows, improve data exploration, and enhance insight generation is a must for this role.
This role is based in Tel Aviv. We work in a hybrid model, with 3 days a week in the office.
This might be for you if:
You enjoy owning data domains end to end and being the go-to expert for your dataset
You like working on complex, high-impact data problems that affect product and users
You're comfortable partnering closely with Product, R&D, and Data Engineering
You enjoy turning deep analysis into clear, actionable insights
You actively leverage AI tools to improve analytical speed, quality, and decision-making
You care about ownership, outcomes, and real impact
Requirements:
5+ years of experience as a product-focused Data Analyst in data-driven tech companies
Strong proficiency in SQL and/or Python for data analysis
Excellent analytical thinking and the ability to simplify complex data into insights that drive decisions
Experience building dashboards and analytical tools used by decision-makers
Proven experience using GenAI tools as part of your daily analytical workflow
A strong builder mindset and ability to move from problem to solution
Nice to have:
Experience working with statistical or machine learning models within data pipelines.
This position is open to all candidates.
 
Job ID: 8645751
Posted 11 hours ago
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Marketing Operations Engineer to join our Marketing Technologies & Operations team as a true technical builder. This role sits at the core of how the company scales its Product-Led Growth (PLG) motion - building the infrastructure that turns product signals into revenue.
You'll operate at the intersection of Marketing, Sales, Data, and Engineering - going beyond campaign execution to design systems, automate workflows, and deploy AI-driven solutions that power customer journeys at scale.
You'll lead the technical side of our customer engagement platforms, develop our orchestration layer (n8n / Make), and build the foundations that connect product usage with go-to-market execution.
This role is based in Tel Aviv. We work in a hybrid model, with 3 days a week in the office.
This might be for you if:
You see marketing systems as something to build - not just manage
You enjoy designing workflows and automations across multiple tools and platforms
You like turning messy, fragmented data into clean, actionable flows
You're excited about using AI to automate and scale go-to-market processes
You're comfortable working across Marketing, Sales, Data, and Engineering
You take full ownership - from idea to production and iteration
You thrive in fast-moving, high-impact, AI-first environments
Requirements:
3+ years of experience with marketing automation and customer engagement platforms (Braze, Marketo, HubSpot or similar)
Hands-on experience with iPaaS tools (n8n, Make, Zapier)
Strong understanding of APIs, webhooks, and JSON-based integrations
Proficiency in Python or JavaScript for scripting, data transformation, and integrations
Solid data fluency, including experience with Salesforce and product analytics tools (e.g., Amplitude)
Basic SQL skills and understanding of data flows into data warehouses
Proven ability to own projects end to end - from design to deployment and iteration
Strong builder mindset with a proactive, problem-solving approach
Fluent English (written and spoken)
Hands-on experience using GenAI tools to improve workflows, automation, and execution
Nice to Have:
Proven experience with Braze - advantage
Experience working in a PLG B2B SaaS environment
Familiarity with Snowflake or modern data platforms
Experience with data enrichment tools (e.g., Lusha or similar)
Experience building or experimenting with AI agents and LLM-based workflows
This position is open to all candidates.
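The "APIs, webhooks, and JSON-based integrations" requirement above is the bread and butter of this role. As a rough illustration, here is a stdlib-only Python sketch of two common patterns: verifying an inbound webhook's HMAC-SHA256 signature and serializing an outbound JSON event. The function names and payload shape are hypothetical; the exact header name and signing scheme vary by vendor.

```python
import hashlib
import hmac
import json

def verify_webhook(payload_bytes, signature_hex, secret):
    """Verify an HMAC-SHA256 webhook signature, a common pattern in
    marketing and engagement platforms. compare_digest avoids timing leaks."""
    expected = hmac.new(secret.encode(), payload_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

def build_event(user_id, event_name, properties):
    """Serialize a product event as a JSON payload for an outbound integration.
    sort_keys gives a stable byte representation for signing."""
    return json.dumps(
        {"user_id": user_id, "event": event_name, "properties": properties},
        sort_keys=True,
    ).encode()
```

In an iPaaS flow (n8n / Make / Zapier), the same verification logic typically lives in a small code step at the front of the workflow.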
 
Job ID: 8645848
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Senior Data Engineer to own high-impact data products from architecture through production deployment, monitoring, and continuous improvement. This isn't a pure infrastructure role - you'll combine strong engineering with product thinking, operational excellence, and awareness of data quality, cost, and business impact.
You will design, implement, test, deploy, and maintain production-grade data products - pipelines, transformation layers, data quality and reliability systems - using tools like dbt (on Spark) and Databricks. You'll apply best practices in Python and SQL to build scalable and maintainable data transformations, and leverage technologies like LLMs and GenAI to create innovative solutions for real business problems.
This role is ideal for someone who wants technical leadership responsibilities in an AI-first engineering culture - we use LLMs, GenAI, and AI-native development tools as core parts of our daily workflow.
Key Responsibilities:
Act as a technical leader within the team - raise engineering standards, drive strong architectural choices, and improve how we build
Own data products end-to-end: design, development, deployment, monitoring, and iteration
Work closely with senior leadership to translate strategic goals into scalable data solutions
Develop and maintain production ETL/ELT pipelines using DBT (on Spark) and orchestrated workflows in Databricks
Build monitoring, alerting, and testing pipelines to ensure reliability and performance in production
Evaluate and introduce new technologies - including AI-native development tools - and integrate the ones that create real impact
Collaborate with customers and external data providers - gathering requirements and making product decisions.
Mentor team members through code reviews, pairing, and knowledge sharing
Requirements:
4+ years of experience in production-level data engineering or similar roles
Deep proficiency in SQL and Python
Proven track record of owning and scaling production-grade data pipelines, including versioning, testing, and monitoring
Strong understanding of data modeling, normalization/denormalization trade-offs, and data quality management
Experience with the modern data stack: DBT, Databricks, Spark, Delta Lake
Strong analytical skills - ability to design and evaluate data-driven hypotheses and KPIs
Product and business awareness - you think about the impact of what you build, not just the implementation
Preferred Qualifications:
Experience with GenAI and LLM applications - particularly extracting structure from unstructured data at scale
Experience working with external data sources and vendors
Familiarity with Unity Catalog and data governance at scale
Familiarity with Terraform or similar infrastructure-as-code tools
Experience with cost optimization on Databricks (DBU analysis, cluster policies)
Familiarity with cloud-native platforms (AWS preferred)
BSc/BA in Computer Science, Engineering, or a related technical field - or graduation from a top-tier IDF tech unit
This position is open to all candidates.
 
Job ID: 8602225
Posted 11 hours ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Data Engineer to build and scale the data infrastructure behind our Sales Streaming platform. This role is about owning pipelines that process massive volumes of data and power real-time, AI-driven features used by millions of sales professionals.
You'll work end to end - from data lakes to real-time streaming - collaborating closely with Data Science, ML, and Product teams to turn complex data into high-impact product capabilities.
This role is based in Tel Aviv. We work in a hybrid model, with 3 days a week in the office.
This might be for you if:
You enjoy building data systems that run at scale and serve real users
You like owning your work end to end, from design to production
You're a problem solver who enjoys turning messy data into reliable systems
You value autonomy, impact, and fast decision-making
You're comfortable working in dynamic, AI-forward environments
Requirements:
5+ years of experience building scalable data systems
Strong Python and SQL skills
Experience using GenAI for software development and improving work processes
Hands-on experience with modern data stacks (Spark, Airflow, AWS, Kubernetes)
Experience with batch and streaming data pipelines
A strong builder mindset, curiosity, and willingness to learn
This position is open to all candidates.
 
Job ID: 8645803
Posted 05/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Analytics Engineer to help design and build the engineering foundation that powers analytics across the organization.
Our goal is to create a modern data environment where analytics development is fast, reliable, scalable, and increasingly automated. This includes building strong data warehouse foundations, scalable modeling layers, and introducing AI-powered tools and automation that accelerate how data products are built and used.
In this role, you will be part of an analytics squad, working closely with analysts and business stakeholders while building the infrastructure, automation frameworks, and intelligent tooling that enable analytics to scale across the organization.
This is a unique opportunity to help build the next generation of the data organization.
Key Responsibilities
Lead AI adoption in the analytics platform, building tools and workflows that automate analytics development, dashboards, and data exploration
Design and build scalable data warehouse models and transformation layers
Build and optimize ETL pipelines and core analytics infrastructure (Bronze / Silver)
Improve performance, reliability, and scalability of the analytics platform
Develop automation and internal tools that accelerate analytics workflows
Enable self-serve data access across the company through semantic layers and reusable datasets
Collaborate with analysts and business teams within an analytics squad.
Requirements:
6+ years of experience in Data Engineering and Analytics Engineering roles, building modern data warehouses and analytics platforms using technologies such as BigQuery, dbt, and Python
Experience with workflow orchestration (Dagster, Airflow, or equivalent) and building reliable, observable data pipelines
Hands-on experience using AI coding platforms and tools to automate data engineering and analytics workflows
Strong engineering practices including version control (Git), testing, code reviews, and CI/CD
Experience building automation systems and internal tools for data teams
Experience working closely with analysts, product teams, and business stakeholders in analytics-driven environments
Strong problem-solving skills with a builder mindset.
This position is open to all candidates.
 
Job ID: 8600360
Posted 1 day ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We are at a pivotal stage in building and scaling our data domain, and we are looking for a Data Engineer to join our growing BI team. This role goes beyond building pipelines. You will help shape our data platform as a shared product - supporting analytics, reporting, and decision-making across key company data domains such as Product, Sales, HR, and others. Your work will directly influence how stakeholders interact with data today and how the platform evolves in the years ahead.

What You'll Be Doing
Architect & Own: Lead the design and development of scalable data warehouse and BI solutions. You will make early-stage architectural decisions and own their long-term impact.
Infrastructure as a Product: Build core data infrastructure and developer experiences that others rely on, ensuring high availability and system reliability.
End-to-End ELT/ETL: Solve complex integration problems by sourcing data from structured and unstructured sources using Rivery, Python, and optimal ETL patterns.
Data Quality & Governance: Implement frameworks for schema evolution, anomaly detection, and data freshness. You will determine security models based on privacy requirements and evolve governance processes.
Strategic Collaboration: Partner with Engineers, Product Managers, and Data Analysts to conceptualize data needs and represent key insights in a meaningful way.
Optimization: Assist in owning production processes, optimizing complex code through advanced algorithmic concepts to manage operational cost-benefit tradeoffs.
Requirements:
Experience: 5+ years of experience in Data Engineering, Infrastructure, or Platform Engineering (ideally in organizations operating at a meaningful scale).
Technical Mastery: 5+ years of hands-on experience with Python and SQL. Deep proficiency in data modeling (Star/Snowflake schema) and DWH methodologies.
Cloud & Tools: Proven experience with Snowflake and AWS. Familiarity with Rivery or similar data pipeline tools (like dbt) is a major advantage.
Production-First Mindset: Track record of leading data initiatives end-to-end from design and building to shipping and operating production flows.
Analytical Rigor: Ability to triage issues, resolve data quality problems, and design systems that handle system complexity with ease.
Education: Bachelor's degree in Computer Science, Computer Engineering, or a relevant technical field.
This position is open to all candidates.
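The "data modeling (Star/Snowflake schema)" requirement above can be illustrated with a toy example: splitting flat order records into a customer dimension (with surrogate keys) and an orders fact table. Names and fields here are hypothetical; real star schemas would be built in SQL/dbt over a warehouse, not in Python, but the shape of the transformation is the same.

```python
def to_star_schema(orders):
    """Split flat order records into a customer dimension and an orders fact
    table, assigning integer surrogate keys (a toy star-schema sketch)."""
    customer_sk = {}          # natural key (email) -> surrogate key
    dim_rows, fact_rows = [], []
    for o in orders:
        nk = o["customer_email"]
        if nk not in customer_sk:
            # First time we see this customer: mint a surrogate key.
            sk = len(customer_sk) + 1
            customer_sk[nk] = sk
            dim_rows.append(
                {"customer_sk": sk, "email": nk, "name": o["customer_name"]}
            )
        # Facts reference the dimension only via the surrogate key.
        fact_rows.append(
            {
                "order_id": o["order_id"],
                "customer_sk": customer_sk[nk],
                "amount": o["amount"],
            }
        )
    return dim_rows, fact_rows
```

The key design point is that descriptive attributes live once in the dimension, while the fact table stays narrow and joins back via the surrogate key.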
 
Job ID: 8643675
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Data Warehouse Tech Lead to drive the technical vision and execution of the data infrastructure that powers decision-making across the company.
You'll lead both the technology and the business coordination for our data warehouse - architecting scalable solutions while working closely with stakeholders and data providers to ensure our platform serves the entire organization's needs. This role combines deep technical leadership with strategic business partnership as we build our next-generation data stack.
We believe three things matter for every role: drive to push through challenges, efficiency that keeps standards high while moving fast, and adaptability that lets you pivot with data and AI insights. These aren't buzzwords; they're how we actually work.
Our AI-first approach isn't just a tagline either. We're building the future of insurance with AI at the center, and we need people who are genuinely excited to learn and grow alongside these tools.
In this role you'll:
Lead technical architecture - design and develop scalable data warehouse solutions that support multiple products and serve the entire organization's analytics needs
Manage the technical roadmap - set strategy and guide execution for the Data Warehouse team, ensuring our platform evolves with business requirements
Drive business process coordination - translate business needs into technical requirements while establishing clear data contracts with R&D, Analytics, and external data providers
Establish and implement best practices - set technical standards for data warehouse architecture, performance tuning, and development methodologies that guide the entire team's approach to building scalable data solutions
Create and maintain sustainable data pipelines - build resilient systems capable of handling unstructured data and managing an evolving schema registry across diverse data sources
Implement advanced data modeling - create robust data structures using methodologies like dimensional modeling, and optimize ETL/ELT processes for our semantic layer
Establish data quality standards - build processes for schema evaluation, anomaly detection, and monitoring data completeness and freshness across all sources
Lead cross-team collaboration - work directly with Data Engineers, ML Platform Engineers, Data Scientists, Analysts, and Product Managers to align technical solutions with business goals
Requirements:
7+ years as a BI Engineer or Data Engineer, with 2+ in a technical leadership or architect role
Proven experience managing complex data warehouses that serve multiple products and entire organizations
Strong expertise in data modeling, ELT development, and data warehouse methodologies
Advanced SQL skills and hands-on experience with Snowflake or similar cloud-native data warehouse platforms
Extensive experience with dbt for data transformation and modeling
Python and software development experience (a strong plus)
Excellent communication skills - you can mentor technical team members and explain complex data concepts to business stakeholders
Ready to work in an office environment most days of the week
Enthusiasm about learning and adapting to the exciting world of AI - a commitment to exploring this field is a fundamental part of our culture
This position is open to all candidates.
 
Job ID: 8644041
Location: Tel Aviv-Yafo
Job Type: Full Time and English Speakers
We are looking for a Senior Data Engineer I.
As a Senior Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects - ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.
You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.
Key Job Responsibilities and Duties:
Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
Dealing with massive textual sources to train GenAI foundation models.
Solving issues with data and data pipelines, prioritizing based on customer impact.
End-to-end ownership of data quality in our core datasets and data pipelines.
Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
Providing tools that improve Data Quality company-wide, specifically for ML scientists.
Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
Acting as an intermediary for problems, with both technical and non-technical audiences.
Promote and drive impactful and innovative engineering solutions
Technical, behavioral and interpersonal competence advancement via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation
Collaborate with multidisciplinary teams: Collaborate with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions. Provide technical guidance and mentorship to junior team members.
Job ID: 21679
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 6 years of experience as a Data Engineer or a similar role, with a consistent record of successfully delivering ML/Data solutions.
You have built production data pipelines in the cloud, setting up data-lake and serverless solutions; you have hands-on experience with schema design and data modeling, and with working alongside ML scientists and ML engineers to deliver production-level ML solutions.
You have experience designing systems end to end and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.)
Strong programming skills in languages such as Python and Java.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.
Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems
Experience with Data Warehousing and ETL/ELT pipelines
Experience in data processing for large-scale language models like GPT, BERT, or similar architectures - an advantage.
Proficiency in data manipulation, analysis, and visualization using tools like NumPy, pandas, and matplotlib - an advantage.
Experience with experimental design, A/B testing, and evaluation metrics for ML models - an advantage.
Experience of working on products that impact a large customer base - an advantage.
Excellent communication in English; written and spoken.
This position is open to all candidates.
 
Job ID: 8627496
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Data Engineer.
As a Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects - ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.
You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.
Key Job Responsibilities and Duties:
Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
Dealing with massive textual sources to train GenAI foundation models.
Solving issues with data and data pipelines, prioritizing based on customer impact.
End-to-end ownership of data quality in our core datasets and data pipelines.
Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
Providing tools that improve Data Quality company-wide, specifically for ML scientists.
Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
Acting as an intermediary for problems, with both technical and non-technical audiences.
Promote and drive impactful and innovative engineering solutions
Technical, behavioral and interpersonal competence advancement via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation
Collaborate with multidisciplinary teams: Collaborate with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions. Provide technical guidance and mentorship to junior team members.
Job ID: 20718
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 3 years of experience as a Data Engineer or a similar role, with a consistent record of successfully delivering ML/Data solutions
You have built production data pipelines in the cloud, setting up data-lake and serverless solutions; you have hands-on experience with schema design and data modeling, and with working alongside ML scientists and ML engineers to deliver production-level ML solutions.
You have experience designing systems end to end and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.)
Strong programming skills in languages such as Python and Java.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.
Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems.
Experience with Data Warehousing and ETL/ELT pipelines
Experience in data processing for large-scale language models like GPT, BERT, or similar architectures - an advantage.
Proficiency in data manipulation, analysis, and visualization using tools like NumPy, pandas, and matplotlib - an advantage.
Experience with experimental design, A/B testing, and evaluation metrics for ML models - an advantage.
Experience of working on products that impact a large customer base - an advantage.
Excellent communication in English; written and spoken.
This position is open to all candidates.
 
Job ID: 8627494
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a strong, hands-on Data Engineer to join our team and play a key role in building our data infrastructure from the ground up. In this role, you will design and implement scalable data pipelines and platforms, supporting both batch and real-time use cases. You will work closely with analysts and stakeholders to deliver reliable, high-quality data solutions, and take full ownership of data flows - from ingestion to consumption. This is a great opportunity for an executor who enjoys building, moving fast, and making an impact.
What will your job look like?
Design, build, and maintain robust and scalable data pipelines (batch and real-time) end-to-end.
Design and implement scalable, flexible data architectures to support evolving business needs.
Build and manage data platforms, including data lakes and data warehouses.
Integrate multiple data sources (structured and unstructured) into a unified data platform using batch (ETL) and real-time streaming solutions.
Design and implement efficient data models, schemas, and database structures (SQL / NoSQL).
Develop and implement data quality processes to ensure accuracy, consistency, and reliability.
Monitor, optimize, and troubleshoot data infrastructure to meet performance and SLA requirements.
Requirements:
5+ years of hands-on experience as a Data Engineer, building data systems from scratch in dynamic environments.
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
Strong proficiency in Python and advanced SQL, with solid experience in data modeling.
Proven experience designing and building scalable data pipelines (batch and real-time), including streaming technologies such as Kafka.
Strong experience working with AWS, including services such as S3, Athena and DynamoDB.
Experience working with big data processing frameworks such as Spark, and columnar data formats (e.g., Parquet).
Hands-on experience with workflow orchestration tools such as Airflow.
Strong ownership and execution mindset, with excellent problem-solving skills and high attention to detail, and the ability to collaborate effectively and deliver in ambiguous, fast-paced environments.
Experience with data platform technologies such as Databricks, Snowflake - Advantage.
Experience building data platforms using modern lakehouse technologies (e.g., Iceberg) - Advantage.
Fluent in English.
This position is open to all candidates.
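The batch-pipeline work this posting describes often comes down to steps like compacting raw ingested events into a clean, deduplicated layer. Here is a minimal Python sketch of one such step, keeping only the latest version of each record; the function and field names are hypothetical, and at real scale this would run in Spark rather than plain Python.

```python
def dedupe_latest(events, key="id", version="updated_at"):
    """Keep only the latest version of each record by key, a common step
    when compacting raw ingested events into a clean batch layer."""
    latest = {}
    for e in events:
        k = e[key]
        # Replace the stored record only if this event is strictly newer.
        if k not in latest or e[version] > latest[k][version]:
            latest[k] = e
    # Return a deterministic ordering for downstream consumers.
    return sorted(latest.values(), key=lambda e: e[key])
```

The same last-write-wins logic maps directly onto a window/rank over the key in Spark SQL, which is how it would typically ship in production.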
 
Job ID: 8636352