We're looking for a hands-on Individual Contributor Data Engineer to design, build, and operate large-scale data products. You'll own mission-critical pipelines and services, balancing pre-computation with on-demand execution to deliver complex, business-critical insights with the right cost, latency, and reliability.
RESPONSIBILITIES:
Design and run Spark data pipelines, orchestrated with Airflow and governed with Unity Catalog.
Build scalable batch and on-demand data products, finding the right balance between pre-computation and on-demand execution for complex logic, while owning SLAs/SLOs, cost, and performance.
Implement robust data quality, lineage, and observability across pipelines.
Contribute to the architecture and scaling of our Export Center for off-platform report generation and delivery.
Partner with Product, Analytics, and Backend to turn requirements into resilient data systems.
REQUIREMENTS:
BSc in Computer Science or equivalent
5+ years of professional backend/data engineering experience, including 2+ years of data engineering
Production experience with Apache Spark, Airflow, Databricks, and Unity Catalog.
Strong SQL plus Python or Scala; solid data modeling and performance-tuning chops.
Proven track record building large-scale (multi-team, multi-tenant) data pipelines and services.
Pragmatic approach to cost/latency trade-offs, caching, and storage formats.
Experience shipping reporting/export pipelines and integrating with downstream delivery channels.
IC mindset: you lead through design, code, and collaboration (no direct reports).
OTHER REQUIREMENTS:
Delta Lake, query optimization, and workload management experience.
Observability stacks (e.g., metrics, logging, data quality frameworks).
Experience with GCS or another major cloud provider.
Terraform (IaC) experience.
This position is open to all candidates.