We are looking for a Big Data Software Engineer.
You will join the Traffic Team, a core engineering team operating at the heart of the company's measurement system.
You will:
Build and maintain high-throughput streaming systems processing 100B+ daily events.
Tackle performance and optimization challenges that make interview questions actually relevant.
Design and implement real-time data processing pipelines using Kafka, Databricks/Spark, and distributed computing.
Lead projects end-to-end: design, development, integration, deployment, and production support.
Requirements:
5+ years of software development experience in JVM-based languages (Scala, Java, Kotlin), with strong functional programming skills.
Strong grasp of Computer Science fundamentals: functional programming paradigms, object-oriented design, data structures, concurrent/distributed systems
Proven experience with high-scale, real-time streaming systems and big data processing.
Experience and deep understanding of a wide array of technologies, including:
Stream processing: Kafka, Kafka Streams, or similar frameworks (Flink, Spark Streaming, Pulsar).
Concurrency frameworks: Akka, Pekko, or equivalent actor systems/reactive programming.
Data platforms: Databricks, Spark, Delta Lake, or similar lakehouse technologies.
Microservices & containerization: Docker, Kubernetes.
Modern databases: experience across analytical/columnar stores (ClickHouse, Snowflake, BigQuery) and NoSQL databases (Cassandra, MongoDB).
Cloud infrastructure: GCP or AWS.
Hands-on experience developing with AI coding tools (e.g., Cursor, Claude Code).
Strong DevOps mindset: CI/CD pipelines (GitLab preferred), infrastructure as code, monitoring/alerting.
BSc in Computer Science or equivalent experience.
Excellent communication skills and ability to collaborate across teams.
Nice to have:
Previous experience in ad-tech.
Experience with schema evolution and data serialization formats (Avro, Protobuf, Parquet).
This position is open to all candidates.