Data Engineer Leader

Posted 2 days ago
Hiring at One DatAI
Location: Petah Tikva and Ramat Gan
Job Type: Full Time and Hybrid work
What you'll do:
Lead design and delivery of enterprise data platforms
Own end-to-end pipelines from ingestion to serving
Architect scalable lakehouse solutions on Databricks
Drive best practices across Spark, Python, and SQL
Lead and mentor data engineers across projects
Work closely with stakeholders to define solutions
Optimize performance, cost, and reliability of pipelines
Implement data governance, quality, and monitoring
Requirements:
What we're looking for:
5+ years of experience as a Data Engineer
Proven experience leading large data projects
Strong hands-on experience with Databricks (must)
Deep knowledge of Spark, Python, and SQL
Experience designing lakehouse architectures
Strong understanding of batch and streaming pipelines
Experience with data modeling and large-scale data processing
Ability to translate business needs into technical solutions
This position is open to all candidates.
 
Posted 2 days ago
Hiring at One DatAI
Location: Petah Tikva
Job Type: Full Time
We're looking for a Data Engineer who loves building scalable data pipelines and working with cutting-edge technologies.

What you'll do:
Build and own end-to-end data pipelines
Ingest data from multiple sources (APIs, files, databases)
Develop and optimize pipelines with Spark
Write clean, efficient Python & SQL code
Requirements:
What we're looking for:
3+ years of experience as a Data Engineer
Strong experience with Databricks (must)
Solid knowledge of Python, Spark & SQL
Familiarity with modern lakehouse architectures
Experience with streaming / incremental pipelines
This position is open to all candidates.
 
Exclusive listing
Posted 2 days ago
Hiring at Yubitek Solutions
Job Type: Full Time
We are looking for a hands-on Data Tech Lead to build and grow our Azure data practice from the ground up.
This role combines deep technical expertise with business impact and ownership.
What you'll do:
Lead the end-to-end data domain within the company
Design and implement modern data platforms on Azure
Work with technologies such as Microsoft Fabric, Azure Data Factory, Azure SQL, Synapse, Power BI
Drive customer engagements - from pre-sale and architecture to hands-on delivery
Act as a trusted advisor for customers on data strategy, analytics, and AI-driven insights
Build best practices, methodologies, and scalable solutions
Requirements:
What we're looking for:
Strong hands-on experience with Azure data services
Experience building data pipelines, data warehouses/lakehouses, and BI solutions
Ability to work directly with customers (both technical and business stakeholders)
Experience in pre-sales / solution design - big advantage
Entrepreneurial mindset - ability to build a domain, not just execute tasks
Why join us:
Opportunity to build a new domain from scratch
High ownership and real impact
Work with cutting-edge Azure and AI technologies
Small, strong team with a startup mindset
The position is open to both women and men
This position is open to all candidates.
 
Confidential company
Location: Petah Tikva
Job Type: Full Time
We are looking for a highly skilled Data Engineer to build and maintain robust, scalable data pipelines and data marts, acting as the connective tissue for intelligence-insight generation that serves executive stakeholders, internal customers, and third parties.
The Fintech AI & Data group is looking for a Staff Data Engineer to work closely with analysts, data scientists, and software developers, and to strengthen Fintech by building data capabilities and driving AI transformation.
Responsibilities:
Gather data needs from internal customers like product and analysts, and translate those requirements into a working database and analytic software.
Design, build, and maintain scalable, reliable batch and real-time data pipelines, data marts, and warehouses supporting executive dashboards, operational analytics, and internal customer use cases
Ensure high data quality, observability, reliability, and governance across all data assets
Optimize data models for performance, cost-efficiency, and scalability
Develop data-centric software using leading-edge big data technologies.
Build data capabilities that enable automated agentic insights and decision intelligence
Develop reusable data services and APIs that power AI-driven workflows
Evolve our data architecture into an AI-native data layer designed to power LLMs, AI agents, and intelligent applications
Collaborate with analytics, product, and AI teams to translate business needs into scalable data solutions
Influence the software architecture and working procedures for building data and analytics
Be the go-to person for anything and everything regarding understanding the data - exploration, pipelines, analytics, etc. - and work both independently and as part of a team
How you'll succeed:
Have an impact on satisfying customers and reducing financial fraud
Help build the team by hiring the best talent
Contribute to experiments and research on how to enhance our capabilities
Learn new technologies and methodologies
Collaborate with other data engineers, analysts, data scientists and developers
Be proactive with a self-starter attitude
Be a good listener, while also having strong opinions on what is right
Be fun to be around :)
Requirements:
Bachelor's degree in Information Systems, Computer Science, or similar
Extensive experience dealing directly with internal customers regarding their data needs
Excellent knowledge of SQL in a large-scale data warehouse or data lakehouse environment such as Spark, Databricks, Presto/Athena/Trino
Experience in designing, building and maintaining highly scalable, robust & fault-tolerant complex data processing pipelines from the ground up (ETL, DB schemas)
Experience with stream processing or near real-time data ingestion
Experience working in a cloud environment, preferably AWS (EC2, S3, EMR)
Excellent knowledge of database / dimensional modeling / data integration tools
Experience writing scripts with languages like Python, and shell scripts in a Linux environment
Can-do attitude, hands-on approach, passionate about data
Preferred:
Some knowledge of Data Science/Machine Learning
Knowledge/Experience with Scala, Java
Knowledge of data visualization tools like Tableau or Qlik Sense
Some knowledge of graph databases
Some experience in the Fintech or Cyber Security industries
Working with AI tools and leveraging AI in product development.
This position is open to all candidates.
 
05/04/2026
Confidential company
Location: Petah Tikva
Job Type: Full Time
We are looking for a Senior Data Engineer to join our Data Platform team, focused on building and evolving a secure, enterprise-grade Data Lake that powers large-scale global search, indexing, analytics, and AI-driven capabilities.
In this role, you will design and deliver scalable, compliant, and high-performance data pipelines that ingest, transform, and structure massive volumes of sensitive data to support mission-critical discovery and search workloads.
This position is ideal for a senior engineer who combines deep hands-on data engineering expertise with strong architectural thinking, particularly in regulated and security-sensitive environments. You will work closely with Product, Search, Backend, Security, and Data Science teams to ensure data is searchable, governed, reliable, and compliant by design.
Key Responsibilities:
Enterprise Data Lake Architecture:
Design and evolve a secure, scalable Data Lake architecture on AWS.
Define storage layout, partitioning strategies, and data organization optimized for large-scale search and analytics workloads.
Implement ACID-compliant table formats (e.g., Iceberg) to ensure reliability, consistency, and schema evolution.
Design ingestion patterns (batch and streaming) for high-volume, heterogeneous datasets.
Implement lifecycle management, retention policies, and environment isolation.
Global Search & Indexing Enablement:
Design data pipelines that prepare and structure data for global search and indexing systems.
Optimize data models and transformations to support high-performance search queries and distributed indexing.
Collaborate with search and backend teams to ensure efficient data availability and low-latency access patterns.
Support incremental ingestion, change-data-capture (CDC), and near real-time processing where required.
Ensure traceability and reproducibility of indexed datasets.
Secure & Regulated Data Engineering:
Implement strict access controls (IAM), encryption (at rest and in transit), and auditing mechanisms.
Ensure compliance with enterprise security and regulatory requirements.
Design systems with data lineage, traceability, and audit-readiness in mind.
Partner with Security and Compliance teams to support internal and external audits.
Handle sensitive and regulated datasets with strong governance and segregation controls.
Pipeline Development & Platform Engineering:
Build and maintain high-scale ETL/ELT pipelines using Apache Spark (EMR/Glue) and AWS-native services.
Leverage S3, Athena, Kinesis, Lambda, Step Functions, and EKS to support both batch and streaming workloads.
Implement Infrastructure as Code (Terraform / CDK / SAM) for reproducible environments.
Establish observability, monitoring, and SLA management for mission-critical pipelines.
Continuously optimize performance, scalability, and cost efficiency.
Cross-Functional Collaboration:
Work closely with Product Managers to translate global search and discovery requirements into scalable data solutions.
Collaborate with ML and Data Science teams to enable feature extraction and enrichment pipelines.
Contribute to architecture discussions and promote best practices in enterprise data engineering.
Provide documentation and clear technical artifacts for regulated environments.
Requirements:
Technical Expertise:
Strong hands-on experience with Apache Spark (EMR, Glue, PySpark).
Deep experience with AWS data services: S3, EMR, Glue, Athena, Lambda, Step Functions, Kinesis.
Proven experience designing and operating Data Lakes / Lakehouse architectures (Iceberg preferred).
Experience building scalable batch and streaming pipelines for large datasets.
Strong understanding of distributed systems and data modeling for search/indexing use cases.
Experience implementing secure, compliant data architectures (IAM, encryption, auditing).
Infrastructure as Code experience (Terraform / CDK / SAM).
Strong Python skills (TypeScript is a plus).
Enterprise & Search-Oriented Mindset
This position is intended for women and men alike.
 
05/04/2026
Confidential company
Location: Petah Tikva
Job Type: Full Time
We are seeking a Senior Backend & Data Engineer to join our SaaS Data Platform team.
This role offers a unique opportunity to design and build large-scale, high-performance data platforms and backend services that power our cloud-based products.
You will own features end to end, from architecture and design through development and production deployment, while working closely with Data Science, Machine Learning, DevOps, and Product teams.
What You'll Do:
Design, develop, and maintain scalable, secure data platforms and backend services on AWS.
Build batch and streaming ETL/ELT pipelines using Spark, Glue, Athena, Iceberg, Lambda, and EKS.
Develop backend components and data-processing workflows in a cloud-native environment.
Optimize performance, reliability, and observability of data pipelines and backend services.
Collaborate with ML, backend, DevOps, and product teams to deliver data-powered solutions.
Drive best practices, code quality, and technical excellence within the team.
Ensure security, compliance, and auditability using AWS best practices (IAM, encryption, auditing).
Tech Stack:
AWS Services: S3, Lambda, Glue, Step Functions, Kinesis, Athena, EMR, Airflow, Iceberg, EKS, SNS/SQS, EventBridge
Languages: Python (Node.js/TypeScript a plus)
Data & Processing: batch & streaming pipelines, distributed computing, serverless architectures, big data workflows
Tooling: CI/CD, GitHub, IaC (Terraform/CDK/SAM), containerized environments, Kubernetes
Observability: CloudWatch, Splunk, Grafana, Datadog
Requirements:
8+ years of experience in Data Engineering and/or Backend Development in AWS-based, cloud-native environments
Strong hands-on experience writing Spark jobs (PySpark) and running workloads on EMR and/or Glue
Proven ability to design and implement scalable backend services and data pipelines
Deep understanding of data modeling, data quality, pipeline optimization, and distributed systems
Experience with Infrastructure as Code and automated deployment of data infrastructure
Strong debugging, testing, and performance-tuning skills in agile environments
High level of ownership, curiosity, and problem-solving mindset.
Nice to Have:
AWS certifications (Solutions Architect, Data Engineer)
Experience with ML pipelines or AI-driven analytics
Familiarity with data governance, self-service data platforms, or data mesh architectures
Experience with PostgreSQL, DynamoDB, MongoDB
Experience building or consuming high-scale APIs
Background in multi-threaded or distributed system development
Domain experience in cybersecurity, law enforcement, or other regulated industries.
This position is open to all candidates.
 
05/04/2026
Confidential company
Location: Petah Tikva
Job Type: Full Time
We are seeking a QA Engineer with a strong passion for data quality, performance, and scale to join our Data Platform team.
This role is ideal for a QA professional who enjoys working close to complex data systems, understands large-scale pipelines, and wants to play a key role in shaping the automation and quality strategy of a data engineering organization.
You will act as the primary quality owner for high-volume, mission-critical data platforms, working closely with data engineers, backend developers, and platform teams.
What You'll Do:
Data Quality & Validation:
Design and execute data validation strategies for large-scale batch and streaming pipelines
Ensure data correctness, completeness, freshness, and consistency across the data lake
Define and automate checks for schema changes, data drift, and data quality regressions
Performance & Scalability Testing:
Plan and execute performance and scalability tests for data pipelines and processing jobs
Identify bottlenecks across ingestion, transformation, and querying layers
Partner with engineers to validate performance improvements and prevent regressions
Automation & Infrastructure:
Develop and maintain the data team's QA automation infrastructure
Build reusable testing frameworks and tools tailored for large datasets and pipelines
Integrate automated tests into CI/CD pipelines and production monitoring workflows
Collaboration & Ownership:
Work closely with data engineers, backend developers, and platform engineers throughout the development lifecycle
Act as the sole QA owner within a cross-functional team, driving quality without becoming a bottleneck
Participate in design discussions to ensure testability and observability are built in from the start
Quality Mindset & Communication:
Champion a quality-first culture within the team
Clearly communicate risks, findings, and quality metrics to technical stakeholders
Balance thoroughness with pragmatism in fast-moving, high-scale environments.
Requirements:
Experience:
Proven experience as a QA Engineer, ideally within data-intensive or platform teams
Hands-on experience testing large-scale systems, pipelines, or distributed architectures
Experience working as the sole QA in a cross-functional engineering team.
Technical Skills:
Strong understanding of data pipelines and data lake concepts
Experience validating large datasets and implementing data quality checks
Familiarity with performance and load testing methodologies
Experience building test automation frameworks (Python preferred)
Understanding of CI/CD pipelines and automation best practices.
Mindset & Collaboration:
Passion for data, performance, and technology
Self-driven, independent, and comfortable owning QA end-to-end
Strong communication skills and ability to collaborate across disciplines
Curious, proactive, and eager to learn complex systems.
Nice to Have:
Experience testing big data or analytics platforms
Familiarity with cloud environments (AWS preferred)
Knowledge of Spark, SQL-based analytics, or data processing frameworks
Experience with data observability or data quality tools.
This position is open to all candidates.
 
Confidential company
Location: Ramat Gan
Job Type: Full Time
You will be part of our company's REM department. Our Road Experience Management (REM) is an end-to-end mapping and localization engine. This process leverages advanced algorithms, massive parallelization, and Big Data technologies, creating a highly complex system that demands both a deep understanding of the map-creation workflow and strong technical expertise. We're looking for a Senior Data Engineer to lead the architecture and development of large-scale, production-grade data pipelines supporting ML inference systems.
What will your job look like:
Architect and own end-to-end data pipelines for large-scale model inference
Design high-throughput, scalable data streaming to the cloud
Integrate data conversion into data collection and inference pipelines
Drive performance, scalability, and reliability across distributed systems
Partner with ML, platform, and infrastructure teams
Requirements:
All you need is:
5+ years of experience as a Data Engineer in production environments
Strong Python expertise
Hands-on experience with Spark, Polars, pandas, DuckDB, AWS
Proven experience designing distributed data architectures
Strong understanding of data performance, I/O, and scalability
Experience working with ML or inference pipelines
We change the way we drive, from preventing accidents to semi- and fully autonomous vehicles. If you are an excellent, bright, hands-on person with a passion to make a difference, come lead the revolution!
This position is open to all candidates.
 
Confidential company
Location: Ramat Gan
Job Type: Full Time
We are looking for a DataOps Engineer to own the infrastructure that powers our large-scale data processing platform. This is a platform-facing role sitting at the intersection of data engineering and infrastructure - you'll be the person who makes Spark run reliably and efficiently on Kubernetes, so that data engineers can build with confidence.
You understand data workloads deeply enough to make smart infrastructure decisions, and you have the production instincts to keep complex systems healthy at scale. If you get excited about shaving minutes off Spark job runtimes, right-sizing cluster autoscalers, and building the internal tooling that makes a data platform feel effortless, this role is for you.
RESPONSIBILITIES:
Design, deploy, and operate the Kubernetes-based infrastructure that runs Apache Spark and large-scale data processing workloads
Own the reliability, performance, and cost-efficiency of the data platform - including SLAs, autoscaling, resource quotas, and workload isolation
Manage Spark-on-K8s configurations, Airflow infrastructure, and Databricks integration; tune for throughput, latency, and cost
Build and maintain CI/CD pipelines and infrastructure-as-code for data platform components
Develop observability tooling - metrics, logging, alerting, and data quality dashboards - to proactively surface issues across the pipeline stack
Collaborate closely with Data Engineers to understand workload patterns and translate them into infrastructure decisions
Manage cloud storage (GCS/S3), Delta Lake, and Unity Catalog infrastructure
Drive platform improvements end-to-end: from design through deployment and ongoing ownership.
Requirements:
5+ years of experience in a production infrastructure, SRE, or DevOps role
Strong Kubernetes experience: autoscaling, resource management, and the broader K8s ecosystem
2+ years with infrastructure-as-code tools (Terraform, Pulumi, or similar)
Proficiency in at least one general-purpose language - Python or Go preferred
Experience with workflow orchestration tools, particularly Apache Airflow
Solid understanding of cloud infrastructure - GCP preferred (GCS, GKE, IAM)
Strong observability skills: metrics pipelines, structured logging, alerting frameworks
OTHER REQUIREMENTS:
Hands-on experience running data processing workloads (Apache Spark, Flink, or similar) in production
Familiarity with Delta Lake, Parquet, and columnar storage formats
Experience with data quality frameworks and pipeline lineage tooling
Knowledge of query optimization, partition strategies, and Spark performance tuning
Experience managing queues and databases (Kafka, PostgreSQL, Redis, or similar).
This position is open to all candidates.
 
Location: Ramat Gan
Job Type: Full Time
We are building a centralized intelligence system for the entire organization - a system that collects, consolidates, and analyzes all organizational data, and actively drives business workflows and decision-making.
This role exists to take that system to the next level.
As the Lead Organizational Intelligence Engineer, you will own the evolution of an internal platform that already influences how decisions are made across the company - and turn it into a foundational capability that fundamentally reshapes how the organization operates. You'll work end-to-end: from data ingestion and modeling, through analytics and AI-driven insights, all the way to decision workflows used by executives, R&D, product, and go-to-market teams.
You will operate as a senior individual contributor with broad organizational influence, while laying the technical and product foundations for a future team.
What You'll Do:
Own the architecture and implementation of a company-wide organizational intelligence system, end-to-end
Design and build data pipelines that ingest and unify data from across the organization and into our own platform
Define and evolve data models and semantic layers that create a shared, consistent language across teams and domains
Build advanced analytics and insight mechanisms that move beyond dashboards into decision-enabling workflows
Work directly with company executives to co-define requirements, identify blind spots, and translate strategic questions into systems and capabilities
Apply a product-driven mindset: deciding not just how to build, but increasingly what should be built and why
Leverage frontier AI tooling and agent-based workflows to accelerate development, improve system quality, and expand what's possible
Write and own production-grade code, setting a high bar for quality, reliability, and scalability
Influence technical and organizational workflows across the company to enable deeper intelligence and better decisions
Lay the groundwork for future externalization of these capabilities into customer-facing products
Requirements:
Strong engineering background with hands-on experience building complex, data-intensive production systems
Deep understanding of data modeling, analytics, and large-scale data processing
Proven ability to design and evolve end-to-end data systems, from ingestion to insight
Comfort operating in environments with high data volumes, distributed systems, and real-time or near-real-time data
Strong analytical thinking - able to reason about data, business questions, and system behavior together
A product-oriented mindset, with experience translating ambiguous needs into concrete systems
Experience working closely with senior stakeholders and influencing decisions across multiple functions
Curiosity and willingness to adopt advanced AI-driven development tools as a core part of how you work
Ability to operate independently, take ownership, and drive large initiatives forward
This position is open to all candidates.
 
Confidential company
Location: Ramat Gan
Job Type: Full Time
Required Data Engineer
The pioneer in AI-powered sports content technology empowers more than 460 clients worldwide to connect with their fans through AI-tailored sports content experiences. Our platform automates the creation, management, and distribution of content, enabling sports rights holders to expand reach, grow fan bases, and unlock revenue opportunities across digital platforms.
Why us:
You'll work in an awesome environment alongside some of the most innovative people in the industry, using cutting-edge technologies and tools (video editing, Gen AI, data, etc.). You have the opportunity to directly influence the products and tools used by our clients, including sports giants such as the NBA, Bundesliga, LaLiga, and ESPN - and that's just the beginning of what we have to offer! Join us and be a part of the best team in tech as we Fuel the Fandom worldwide.
What you'll do:
Design and develop scalable, efficient services, APIs, and automations as the founder of this area
Build data pipelines, ETL processes, and integrations that power products and internal tools
Own features end-to-end: design, implementation, CI/CD, monitoring, and observability
Develop and optimize data models, ensuring high-performance data infrastructure
Build and schedule workflows with a Python orchestrator and integrate with data platforms
Optimize performance, reliability, and security across services
Write high-quality, maintainable code in a rich data environment
Collaborate with Product, R&D, Operations, and Business to deliver impactful data-driven solutions
Work with SaaS environments and containerized deployments.
Requirements:
What you'll need:
BSc in Computer Science or an equivalent background
5+ years of professional experience as a Python Developer / Software Engineer
Strong proficiency in Python; experience with frameworks such as FastAPI, Django, or Flask
Solid software engineering fundamentals: OOP, design patterns, testing, debugging
Strong SQL skills and experience with analytical databases (Snowflake or similar)
Experience with REST APIs and asynchronous programming
Proficiency with Git and CI/CD practices
Experience with Docker and/or Kubernetes
Experience with Airflow - advantage
Excellent communication and collaboration skills with the ability to work across multiple stakeholders and business units
Node.js experience - an advantage.
This position is open to all candidates.
 