We are seeking an experienced and passionate BI Developer with a strong background in Data Analysis to join our growing data team. In this role, you will create scalable and reliable BI solutions, supporting our analysts, product, and operations teams with validated and actionable insights.
You will play a key role in developing a robust data infrastructure, building intuitive dashboards, and ensuring data integrity across all reporting layers. You will thrive in this position if you enjoy solving complex problems, simplifying intricate data pipelines, and translating business needs into efficient data models and visualizations.
Core Responsibilities:
Dashboard & Reporting Development: Design, implement, and iterate KPI‑oriented Tableau dashboards, workbooks, and scheduled extracts that expose product, growth, and engagement signals.
Data Quality & Governance: Deploy validation tests, reconciliation queries, and lineage tracking to guarantee accuracy, consistency, and traceability across curated datasets and published metrics.
ETL / ELT Engineering: Partner with Data Engineering to architect, orchestrate, and tune batch and streaming pipelines. Hands‑on experience with Amazon Redshift and the Azkaban scheduler is a significant advantage.
Semantic‑Layer Stewardship: Own dimensional data models (star schemas), dbt models, and calculated fields to provide a single source of truth with predictable query performance.
Automation & Performance Optimisation: Eliminate manual data handling, reduce query latency, and lower compute spend through efficient SQL, materialised views, incremental loads, and caching strategies.
Stakeholder Enablement: Translate analytical requirements from Product, Marketing, Operations, and Executive teams into scalable, self‑service solutions; author documentation and deliver enablement sessions on metric definitions and Tableau best practices.
Requirements: 5+ years as a BI Developer, Analytics Engineer, or Senior Data Analyst in a data‑intensive, high‑growth environment.
Deep Tableau expertise (LOD expressions, VizQL performance tuning, Server/Cloud administration).
Advanced SQL proficiency (window functions, CTEs, optimiser hints, partitioning, clustering).
Proven experience designing and maintaining production‑grade ETL/ELT pipelines.
Professional‑level English communication skills, with the ability to articulate complex technical concepts and data‑driven narratives to diverse audiences.
Practical exposure to Amazon Redshift and workflow orchestration with Azkaban (strong advantage).
Python (pandas) for data transformation, automation, and advanced analytics (strong advantage).
Familiarity with data catalogue, lineage, and monitoring tooling such as DataHub, Monte Carlo, or Great Expectations (strong advantage).
This position is open to all candidates.