We are a world-leading sports data provider, trusted by sportsbooks worldwide to deliver real-time data with unmatched accuracy and reliability. With technology that drives smarter trading and deeper engagement, we empower bookmakers to grow, innovate, and stay ahead of the game.
If you're passionate about sports and technology and want to make your mark in a fast-moving industry, don't miss this chance! Step onto the field with us and help build the future of sports data. We are looking for a talented Software Architect.
What You'll Do:
Define and lead the architecture of complex software systems and platforms, from design to deployment.
Collaborate with cross-functional teams (Data, ML, CV, DevOps) to align architecture with product and business goals.
Design and oversee the development of high-throughput, low-latency services and data pipelines.
Guide the implementation of best practices in software engineering, including system design, scalability, reliability, testing, and monitoring.
Evaluate and adopt technologies (e.g., Apache Iceberg, event-driven architectures, observability platforms) to improve system performance and development velocity.
Mentor engineers and contribute to architectural knowledge sharing across the company.
Requirements:
10+ years of experience in a data engineering role, including 2+ years as a Software Architect with ownership of company-wide architecture decisions.
Proven experience designing and implementing large-scale, Big Data infrastructure from scratch in a cloud-native environment (GCP preferred).
Excellent proficiency in data modeling, including conceptual, logical, and physical modeling for both analytical and real-time use cases.
Strong hands-on experience with:
Data lake and/or warehouse technologies (e.g., Apache Iceberg, Delta Lake, BigQuery, ClickHouse); hands-on Apache Iceberg experience is required
ETL/ELT frameworks and orchestrators (e.g., Airflow, dbt, Dagster)
Real-time streaming technologies (e.g., Kafka, Pub/Sub)
Data observability and quality monitoring solutions
Excellent proficiency in SQL and in either Python or JavaScript.
Experience designing efficient data extraction and ingestion processes from multiple sources and handling large-scale, high-volume datasets.
Demonstrated ability to build and maintain infrastructure optimized for performance, uptime, and cost, with awareness of AI/ML infrastructure requirements.
Experience working with ML pipelines and AI-enabled data workflows, including support for Generative AI initiatives (e.g., content generation, vector search, model training pipelines) or strong motivation to learn and lead in this space.
Excellent communication skills in English, with the ability to clearly document and explain architectural decisions to technical and non-technical audiences.
Fast learner with strong multitasking abilities; capable of managing several cross-functional initiatives simultaneously.
Willingness to work on-site in Ashkelon once a week.
Bonus Points if you have:
Experience leading POCs and tool selection processes.
Familiarity with Databricks, LLM pipelines, or vector databases.
This position is open to all candidates.