Posted 14 hours ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Engineer.
What You'll Do:

Shape the Future of Data - Join our mission to build the foundational pipelines and tools that power measurement, insights, and decision-making across our product, analytics, and leadership teams.
Develop the Platform Infrastructure - Build the core infrastructure that powers our data ecosystem including the Kafka events-system, DDL management with Terraform, internal data APIs on top of Databricks, and custom admin tools (e.g. Django-based interfaces).
Build Real-time Analytical Applications - Develop internal web applications to provide real-time visibility into platform behavior, operational metrics, and business KPIs integrating data engineering with user-facing insights.
Solve Meaningful Problems with the Right Tools - Tackle complex data challenges using modern technologies such as Spark, Kafka, Databricks, AWS, Airflow, and Python. Think creatively to make the hard things simple.
Own It End-to-End - Design, build, and scale our high-quality data platform by developing reliable and efficient data pipelines. Take ownership from concept to production and long-term maintenance.
Collaborate Cross-Functionally - Partner closely with backend engineers, data analysts, and data scientists to drive initiatives from both a platform and business perspective. Help translate ideas into robust data solutions.
Optimize for Analytics and Action - Design and deliver datasets in the right shape, location, and format to maximize usability and impact - whether that's through lakehouse tables, real-time streams, or analytics-optimized storage.
You will report to the Data Engineering Team Lead and help shape a culture of technical excellence, ownership, and impact.
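The posting above mentions real-time visibility into operational metrics and KPIs. As a stack-agnostic illustration (the function name and event shapes are invented for this sketch, not taken from the posting), the core of such an application is often a tumbling-window aggregation:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Count events per type inside fixed, non-overlapping time windows.

    `events` is an iterable of (unix_timestamp, event_type) pairs.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, event_type in events:
        # Align each timestamp to the start of its window.
        window_start = (ts // window_seconds) * window_seconds
        windows[window_start][event_type] += 1
    # Plain dicts for easy comparison/serialization downstream.
    return {start: dict(counts) for start, counts in sorted(windows.items())}

events = [(5, "click"), (42, "view"), (61, "click"), (119, "click"), (120, "view")]
print(tumbling_window_counts(events))
# → {0: {'click': 1, 'view': 1}, 60: {'click': 2}, 120: {'view': 1}}
```

In production this logic would typically live in a Spark Structured Streaming or Kafka consumer job rather than plain Python, but the windowing arithmetic is the same.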
Requirements:
5+ years of hands-on experience as a Data Engineer, building and operating production-grade data systems.
3+ years of experience with Spark, SQL, Python, and orchestration tools like Airflow (or similar).
Degree in Computer Science, Engineering, or a related quantitative field.
Proven track record in designing and implementing high-scale ETL pipelines and real-time or batch data workflows.
Deep understanding of data lakehouse and warehouse architectures, dimensional modeling, and performance optimization.
Strong analytical thinking, debugging, and problem-solving skills in complex environments.
Familiarity with infrastructure as code, CI/CD pipelines, and building data-oriented microservices or APIs.
Enthusiasm for AI-driven developer tools such as Cursor.AI or GitHub Copilot.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Engineer.
As a Senior Data Engineer, you'll be more than just a coder - you'll be the architect of our data ecosystem. We're looking for someone who can design scalable, future-proof data pipelines and connect the dots between DevOps, backend engineers, data scientists, and analysts.
You'll lead the design, build, and optimization of our data infrastructure, from real-time ingestion to supporting machine learning operations. Every choice you make will be data-driven and cost-conscious, ensuring efficiency and impact across the company.
Beyond engineering, you'll be a strategic partner and problem-solver, sometimes diving into advanced analysis or data science tasks. Your work will directly shape how we deliver innovative solutions and support our growth at scale.
Responsibilities:
Design and Build Data Pipelines: Architect, build, and maintain our end-to-end data pipeline infrastructure to ensure it is scalable, reliable, and efficient.
Optimize Data Infrastructure: Manage and improve the performance and cost-effectiveness of our data systems, with a specific focus on optimizing pipelines and usage within our Snowflake data warehouse. This includes implementing FinOps best practices to monitor, analyze, and control our data-related cloud costs.
Enable Machine Learning Operations (MLOps): Develop the foundational infrastructure to streamline the deployment, management, and monitoring of our machine learning models.
Support Data Quality: Optimize ETL processes to handle large volumes of data while ensuring data quality and integrity across all our data sources.
Collaborate and Support: Work closely with data analysts and data scientists to support complex analysis, build robust data models, and contribute to the development of data governance policies.
Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field.
Experience: 5+ years of hands-on experience as a Data Engineer or in a similar role.
Data Expertise: Strong understanding of data warehousing concepts, including a deep familiarity with Snowflake.
Technical Skills:
Proficiency in Python and SQL.
Hands-on experience with workflow orchestration tools like Airflow.
Experience with real-time data streaming technologies like Kafka.
Familiarity with container orchestration using Kubernetes (K8s) and dependency management with Poetry.
Cloud Infrastructure: Proven experience with AWS cloud services (e.g., EC2, S3, RDS).
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking an experienced Solutions Data Engineer who possesses both technical depth and strong interpersonal skills to partner with internal and external teams to develop scalable, flexible, and cutting-edge solutions. Solutions Engineers collaborate with operations and business development to help craft solutions to customer business problems.
A Solutions Engineer works to balance various aspects of the project, from safety to design. Additionally, a Solutions Engineer researches advanced technology regarding best practices in the field and seeks to find cost-effective solutions.
Job Description:
We're looking for a Solutions Engineer with deep experience in Big Data technologies, real-time data pipelines, and scalable infrastructure: someone who's been delivering critical systems under pressure and knows what it takes to bring complex data architectures to life. This isn't just about checking boxes on tech stacks; it's about solving real-world data problems, collaborating with smart people, and building robust, future-proof solutions.
In this role, you'll partner closely with engineering, product, and customers to design and deliver high-impact systems that move, transform, and serve data at scale. You'll help customers architect pipelines that are not only performant and cost-efficient but also easy to operate and evolve.
We want someone who's comfortable switching hats between low-level debugging, high-level architecture, and communicating clearly with stakeholders of all technical levels.
Key Responsibilities:
Build distributed data pipelines using technologies like Kafka, Spark (batch & streaming), Python, Trino, Airflow, and S3-compatible data lakes, designed for scale, modularity, and seamless integration across real-time and batch workloads.
Design, deploy, and troubleshoot hybrid cloud/on-prem environments using Terraform, Docker, Kubernetes, and CI/CD automation tools.
Implement event-driven and serverless workflows with precise control over latency, throughput, and fault tolerance trade-offs.
Create technical guides, architecture docs, and demo pipelines to support onboarding, evangelize best practices, and accelerate adoption across engineering, product, and customer-facing teams.
Integrate data validation, observability tools, and governance directly into the pipeline lifecycle.
Own end-to-end platform lifecycle: ingestion → transformation → storage (Parquet/ORC on S3) → compute layer (Trino/Spark).
Benchmark and tune storage backends (S3/NFS/SMB) and compute layers for throughput, latency, and scalability using production datasets.
Work cross-functionally with R&D to push performance limits across interactive, streaming, and ML-ready analytics workloads.
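The responsibilities above describe an ingestion → transformation → storage lifecycle, but no code appears in the posting. As a toy, dependency-free sketch of the transform-and-partition step (the record fields and the `dt=` prefix are illustrative assumptions, echoing Hive-style date partitioning of Parquet files on S3):

```python
from collections import defaultdict
from datetime import datetime, timezone

def partition_by_day(records):
    """Normalize raw events and bucket them by UTC date, mimicking a
    date-partitioned lake layout such as s3://bucket/dt=YYYY-MM-DD/."""
    partitions = defaultdict(list)
    for rec in records:
        day = datetime.fromtimestamp(rec["ts"], tz=timezone.utc).date().isoformat()
        # Normalization step: lowercase the user field before storage.
        partitions[f"dt={day}"].append({"user": rec["user"].lower(), "ts": rec["ts"]})
    return dict(partitions)

raw = [
    {"user": "Alice", "ts": 1_700_000_000},  # 2023-11-14 UTC
    {"user": "Bob",   "ts": 1_700_100_000},  # 2023-11-16 UTC
]
print(sorted(partition_by_day(raw)))
# → ['dt=2023-11-14', 'dt=2023-11-16']
```

In a real pipeline the same bucketing is handled by Spark's `partitionBy` or a Trino `CREATE TABLE ... WITH (partitioned_by = ...)`, with Parquet/ORC as the storage format.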
Requirements:
2-4 years in software, solutions, or infrastructure engineering, with 2-4 years focused on building and maintaining large-scale data pipelines, storage, and database solutions.
Proficiency in Trino, Spark (Structured Streaming & batch) and solid working knowledge of Apache Kafka.
Coding background in Python (must-have); familiarity with Bash and scripting tools is a plus.
Deep understanding of data storage architectures including SQL, NoSQL, and HDFS.
Solid grasp of DevOps practices, including containerization (Docker), orchestration (Kubernetes), and infrastructure provisioning (Terraform).
Experience with distributed systems, stream processing, and event-driven architecture.
Hands-on familiarity with benchmarking and performance profiling for storage systems, databases, and analytics engines.
Excellent communication skills: you'll be expected to explain your thinking clearly, guide customer conversations, and collaborate across engineering and product teams.
This position is open to all candidates.
 
Posted 1 day ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Data Engineer
The opportunity
Join our dynamic Data & ML Engineering team in iAds and play a pivotal role in driving data solutions that empower data science, finance, analytics, and R&D teams. As an Experienced Data Engineer, you'll work with cutting-edge technologies to design scalable pipelines, ensure data quality, and process billions of data points into actionable insights.
Success Indicators:
In the short term, success means delivering reliable, high-performance data pipelines and ensuring data quality across the product. Long-term, you'll be instrumental in optimizing workflows, enabling self-serve analytics platforms, and supporting strategic decisions through impactful data solutions.
Impact:
Your work will directly fuel business decisions, improve data accessibility and reliability, and contribute to the team's ability to handle massive-scale data challenges. You'll help shape the future of data engineering within a global, fast-paced environment.
Benefits and Opportunities
You'll collaborate with talented, passionate teammates, work on exciting projects with cutting-edge technologies, and have opportunities for professional growth. Competitive compensation, comprehensive benefits, and an inclusive culture make this role a chance to thrive and make a global impact.
What you'll be doing
Designing and developing scalable data pipelines and ETL processes to process massive amounts of structured and unstructured data.
Collaborating with cross-functional teams (data science, finance, analytics, and R&D) to deliver actionable data solutions tailored to their needs.
Building and maintaining tools and frameworks to monitor and improve data quality across the product.
Providing tools and insights that empower product teams with real-time analytics and data-driven decision-making capabilities.
Optimizing data workflows and architectures for performance, scalability, and cost efficiency using cutting-edge technologies like Apache Spark and Flink.
Requirements:
4+ years of experience as a Data Engineer.
Expertise in designing and developing scalable data pipelines, ETL processes, and data architectures.
Proficiency in Python and SQL, with hands-on experience in big data technologies like Apache Spark and Hadoop.
Advanced knowledge of cloud platforms (AWS, Azure, or GCP) and their associated data services.
Experience working with Imply and Apache Druid for real-time analytics and query optimization.
Strong analytical skills and ability to quickly learn and adapt to new technologies and tools.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a hands-on, business-minded Analytics Engineer to build the infrastructure and tooling that turns this data into operational gold.
You'll be the first data hire in our new Operations Insights team, responsible for designing the operations data platform, generating insights for internal and external stakeholders, and helping our business teams deliver measurable impact. This is a hybrid role that combines data engineering with analytics and product thinking: you'll build, analyze, and consult.
From designing the data warehouse to crafting performance dashboards and investigating edge-case customer issues, you'll be the go-to person who makes data accessible, trustworthy, and powerful.
A day in the life:
Design and implement the operational data platform, consolidating data from R&D systems
Build and maintain data pipelines and tables for store performance, deployment health, and system diagnostics
Collaborate with the Director of Customer Value & Insights and Expansion Managers to define KPIs and surface business insights
Conduct exploratory data analysis to support tough customer and deployment challenges
Build internal tools and dashboards that empower teams to self-serve critical data
Investigate anomalies, edge cases, and complex failure scenarios to improve deployments
Maintain data quality, documentation, and observability as we scale to hundreds of stores
Requirements:
4+ years of experience as an Analytics Engineer, Data Engineer, or Full-Stack Data Developer
Expertise in SQL and Python; experience with modern data stacks (dbt, Airflow, etc.)
Experience designing data models and pipelines for analytics and reporting
A pragmatic mindset: you solve problems, not just build pipelines
Comfort working directly with non-technical stakeholders to understand their needs
Curiosity about the physical world: excited to work with retail, hardware, and real-time system data
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for our first dedicated Data Engineer: a self-motivated and proactive professional with a strong can-do attitude and a sense of ownership. This role involves taking responsibility across all data domains within the company, working closely with our analytics and development teams to build and maintain the data infrastructure that supports business needs. This position is ideal for someone ready to independently lead data engineering efforts and make a meaningful impact.

Responsibilities:
Design, develop, and maintain scalable data pipelines and ETL workflows using tools such as Python, dbt, and Airflow.
Architect and optimize our data warehouse to support efficient analytics, reporting, and business intelligence at scale.
Model and structure data from multiple internal and external sources (such as Salesforce, Jira, Mixpanel, etc.) into clean, reliable, and analytics-ready datasets.
Collaborate closely with our systems architect, analytics, and development teams to translate business requirements into robust and efficient technical data solutions.
Monitor and optimize pipeline performance to ensure data completeness and scalability.
Serve as a key partner and subject-matter expert on all data-related topics within the team.
Implement data quality checks, anomaly detection and validation processes to ensure data reliability.
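The data-quality item above can be made concrete with a small, framework-free sketch (the field names and thresholds are invented for illustration; in practice dbt tests or a dedicated validation framework would cover this):

```python
def run_quality_checks(rows, required, ranges):
    """Return human-readable violations: missing required fields and
    numeric values outside their allowed (min, max) range."""
    violations = []
    for i, row in enumerate(rows):
        for field in required:
            # Treat None and empty string as missing.
            if row.get(field) in (None, ""):
                violations.append(f"row {i}: missing {field}")
        for field, (lo, hi) in ranges.items():
            value = row.get(field)
            if value is not None and not (lo <= value <= hi):
                violations.append(f"row {i}: {field}={value} out of [{lo}, {hi}]")
    return violations

rows = [{"id": 1, "amount": 50}, {"id": None, "amount": 5000}]
print(run_quality_checks(rows, required=["id"], ranges={"amount": (0, 1000)}))
# → ['row 1: missing id', 'row 1: amount=5000 out of [0, 1000]']
```

Wired into an Airflow task, a non-empty violations list would fail the run before bad data reaches downstream tables.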
Requirements:
3+ years of hands-on experience as a Data Engineer or in a similar role.
Expert-level SQL skills, capable of performing complex table transformations and designing efficient data workflows.
Proficiency in Python for data processing and scripting tasks.
Experience building and maintaining ELT/ETL pipelines using dbt.
Hands-on experience with orchestration tools such as Airflow.
Deep understanding of data warehouse concepts and methodologies, including data modeling.
Self-motivated, capable of working autonomously while effectively collaborating with stakeholders to deliver end-to-end solutions.
B.Sc. in Information Systems Engineering, Computer Science, Industrial Engineering, or a related field.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
As a customer-centric tech company, we created an insurance experience that is smart, instant, and delightful.
You'll be working with a group of like-minded makers, who get a kick out of moving fast and delivering great products. We surround ourselves with some of the smartest, most motivated, creative people, who are filled with positive energy and good karma.

Unlike most publicly traded companies, we're nimble and efficient. We take pride in the fact that we still think and operate like a startup. We don't care much about titles and hierarchy, and instead focus on innovation, bold moves, and challenging the status quo.

We're built as a lean, data-driven organization that relies on a common understanding of objectives and goals to provide teams with autonomy and ownership. We don't like spending our days in meetings and we skip committees altogether. There's no such thing as going over someone's head. We have zero tolerance for bureaucracy, office politics, and lean-back personalities.

As a Public Benefit Corporation and a certified B-Corp, we deliver environmental and social impact using our products and tech. Through our Giveback program, we partner with organizations such as the ACLU, New Story, The Humane Society, Malala Fund, American Red Cross, 360.org, charity: water, and dozens of others, and have donated millions towards reforestation, education, animal rights, LGBTQ+ causes, access to water, and more.
Requirements:
We're looking for an experienced and passionate Staff Data Engineer to join our Data Platform group in TLV as a Tech Lead. As the group's Tech Lead, you'll shape and implement the technical vision and architecture while staying hands-on across three specialized teams: Data Engineering Infra, Machine Learning Platform, and Data Warehouse Engineering, which together form the backbone of our data ecosystem.
The group's mission is to build a state-of-the-art Data Platform that drives us toward becoming the most precise and efficient insurance company on the planet. By embracing Data Mesh principles, we create tools that empower teams to own their data while leveraging a robust, self-serve data infrastructure. This approach enables Data Scientists, Analysts, Backend Engineers, and other stakeholders to seamlessly access, analyze, and innovate with reliable, well-modeled, and queryable data, at scale.
In this role you'll:
Technically lead the group by shaping the architecture, guiding design decisions, and ensuring the technical excellence of the Data Platform's three teams
Design and implement data solutions that address both applicative needs and data analysis requirements, creating scalable and efficient access to actionable insights
Drive initiatives in Data Engineering Infra, including building robust ingestion layers, managing streaming ETLs, and guaranteeing data quality, compliance, and platform performance

Develop and maintain the Data Warehouse, integrating data from various sources for optimized querying, analysis, and persistence, supporting informed decision-making
Leverage data modeling and transformations to structure, cleanse, and integrate data, enabling efficient retrieval and strategic insights
Build and enhance the Machine Learning Platform, delivering infrastructure and tools that streamline the work of Data Scientists, enabling them to focus on developing models while benefiting from automation for production deployment, maintenance, and improvements. Support cutting-edge use cases like feature stores, real-time models, point-in-time (PIT) data retrieval, and telematics-based solutions
Collaborate closely with other Staff Engineers across the organization to align on cross-organizational initiatives and technical strategies
Work seamlessly with Data Engineers, Data Scientists, Analysts, Backend Engineers, and Product Managers to deliver impactful solutions
Share knowledge, mentor team members, and champion engineering standards and technical excellence across the group
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Engineer to join our Platform group in the Data Infrastructure team.
You'll work hands-on to design and deliver data pipelines, distributed storage, and streaming services that keep the data platform performant and reliable. As a senior individual contributor you will lead complex projects within the team, raise the bar on engineering best practices, and mentor mid-level engineers while collaborating closely with product, DevOps, and analytics stakeholders.
Requirements:
5+ years of hands-on experience in backend or data engineering, including 2+ years at a senior level delivering production systems
Strong coding skills in Python, Kotlin, Java or Scala with emphasis on clean, testable, production-ready code
Proven track record designing, building and operating distributed data pipelines and storage (batch or streaming)
Deep experience with relational databases (PostgreSQL preferred) and working knowledge of at least one NoSQL or columnar/analytical store (e.g. SingleStore, ClickHouse, Redshift, BigQuery)
Solid hands-on experience with event-streaming platforms such as Apache Kafka
Familiarity with data-orchestration frameworks such as Airflow
Comfortable with modern CI/CD, observability and infrastructure-as-code practices in a cloud environment (AWS, GCP or Azure)
Ability to break down complex problems, communicate trade-offs clearly, and collaborate effectively with engineers and product partners
Bonus Skills:
Experience building data governance or security/compliance-aware data platforms
Familiarity with Kubernetes, Docker, and infrastructure-as-code tools
Experience with data quality frameworks, lineage, or metadata tooling
This position is open to all candidates.
 
28/08/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
As we continue to expand and evolve, we're looking for a Data Engineer to join our growing team and help shape the future of trust and security in the decentralized world.
We have a large amount of varied, exciting, and unique data on our hands, and we're already squeezing value out of it for our customers, but there's so much more in there. Your goal will be to help structure the data in ways that enable both our business users and research group to dig deeper. As our very first dedicated data engineer, you'll have a huge impact, but you'll also need the independence and proactiveness to own it.
What You'll Do:
Design and build complex data pipelines to ingest, process, and transform data from a variety of sources, especially logs and textual inputs.
Collaborate closely with Software Engineering and Product teams to ensure data is accessible and usable.
Develop efficient ETL processes using frameworks such as DBT, Airflow, or their equivalents.
Own and optimize your data environment (e.g., Snowflake), focusing on performance tuning, governance, and reliability.
Build dashboards for various company-wide use cases.
Implement best practices for data management, quality assurance, and security within cloud infrastructures (AWS).
Enable ML and analytics teams by building pipelines that feed feature stores and model training workflows.
Requirements:
4+ years of hands-on Data Engineering experience, in a cybersecurity or security-adjacent environment.
Proficiency in Python and SQL, with proven experience handling large or unstructured data.
Familiarity with data warehouse technologies (Snowflake, BigQuery).
Experience with big data infrastructure (Snowflake/Databricks), orchestration tools (Airflow), and cloud platform (AWS).
Solid understanding of data governance, quality assurance, and pipeline observability.
Ability to deliver end-to-end solutions, from ingestion to production-ready datasets, with minimal supervision.
Nice to Have:
Experience in cybersecurity, threat intelligence, or blockchain data processing.
Experience orchestrating large-scale ETLs in Snowflake
Experience using DBT in production
Knowledge of OLTP databases (e.g., PostgreSQL).
This position is open to all candidates.
 
01/09/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a hands-on Data Specialist to join our growing data group, working on the practical backbone of high-scale, financial-grade systems. You'll work closely with engineers, BI, product, and business stakeholders to design, build, and optimize data pipelines and integrations in a cloud-native environment.
If you thrive on solving complex data challenges, enjoy getting deep into code, and want to make an impact on fintech infrastructure, we'd love to meet you.
Your Day-to-Day:
Develop, maintain, and optimize robust data pipelines and integrations across multiple systems
Build and refine data models to support analytics and operational needs
Work hands-on with data orchestration, transformation, and cloud infrastructure (AWS/Azure)
Collaborate with engineering, BI, and business teams to translate requirements into scalable data solutions
Contribute to data governance, data quality, and monitoring initiatives
Support implementation of best practices in data management and observability
Requirements:
8+ years in data engineering, data architecture, or similar roles
Deep hands-on experience with PostgreSQL, Snowflake, Oracle etc
Strong experience with ETL/ELT, data integration (Kafka, Airflow)
Proven SQL and Python skills (must)
Experience with AWS or Azure cloud environments
Familiarity with BI tools (Looker, Power BI)
Knowledge of Kubernetes and distributed data systems
Experience in financial systems or fintech (advantage)
Strong ownership, problem-solving ability, and communication skills
Comfort working in a fast-paced, multi-system environment
This position is open to all candidates.
 
04/08/2025
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We are looking for a Senior Backend Engineer to join our AI/ML team. In this role, you'll work closely with data scientists to transform cutting-edge machine learning models into scalable, production-ready services. You will take ownership of designing, building, and maintaining the backend systems that power our AI-driven features.

This is a key position that bridges the gap between data science and production engineering, ensuring high performance, reliability, and maintainability of our ML-powered products.

Responsibilities:
Collaborate with data scientists to understand modeling outputs and convert them into deployable services.
Design and develop robust, scalable backend systems and microservices to support AI use cases.
Own the deployment and monitoring of ML models in production (with CI/CD, logging, observability).
Implement data processing pipelines in support of model training and inference.
Ensure software adheres to best practices in architecture, testing, and documentation.
Optimize model inference for latency, throughput, and resource efficiency.
Contribute to design decisions and technical strategy alongside AI and infrastructure leads.
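The latency/throughput trade-off mentioned above often comes down to micro-batching requests before they reach the model. A minimal sketch (the function name is invented for illustration, not from the posting):

```python
def micro_batches(requests, max_batch_size):
    """Group incoming inference requests into fixed-size micro-batches,
    trading a little latency for higher model throughput."""
    batch = []
    for req in requests:
        batch.append(req)
        if len(batch) == max_batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch

print(list(micro_batches(range(7), max_batch_size=3)))
# → [[0, 1, 2], [3, 4, 5], [6]]
```

Serving frameworks such as Triton (mentioned under Nice to Have) implement the same idea as dynamic batching, adding a timeout so small batches still flush promptly.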
Requirements:
5+ years of experience as a backend/software engineer, preferably in Python, Go, or Java.
Strong experience with designing APIs, building microservices, and integrating third-party services.
Familiarity with ML workflows: model serving, feature extraction, and batch vs real-time inference.
Strong architectural/design skills, including working with message queues like Kafka, relational and NoSQL databases, and distributed systems.
Experience deploying services in containerized environments (e.g., Docker, Kubernetes).
Proficient with cloud-native tools or on-prem equivalents (e.g., logging, tracing, metrics).
Knowledge of data processing frameworks (e.g., Pandas, Spark, Airflow) is a plus.
Comfortable reading and working with Python-based ML code (scikit-learn, TensorFlow, PyTorch, etc.).
Strong ownership mindset and a collaborative attitude.

Nice to Have:
Experience with model versioning and ML serving frameworks (e.g., MLflow, Seldon, Triton).
Understanding of data privacy/security implications in model and data pipelines.
Experience working in cross-functional teams with data scientists and product owners.
This position is open to all candidates.
 