Senior Data Engineer
Job Summary:
We are a fast-growing cybersecurity startup redefining how organizations protect themselves. Our Data Team is at the heart of that mission - powering everything from security insights to customer-facing intelligence.
As a Senior Data Engineer, you'll help build our next-generation data platform from the ground up. We're expanding fast in scope, infrastructure, and tooling, and this work is critical to the company's success every single day. You'll design scalable systems, develop in-house tools, and turn massive amounts of data into meaningful impact.
Responsibilities:
Architect and build scalable, production-grade data pipelines and infrastructure to support analytics and product features.
Develop and own key components of our new data platform, ensuring reliability, performance, and scalability.
Collaborate closely with security researchers, data analysts, and product teams to transform innovative cybersecurity ideas into production-ready data solutions.
Design and implement internal software and tooling to support data team workflows - driving structure, maintainability, and engineering best practices.
Champion best practices in data quality, governance, and observability, ensuring our data systems remain robust and trustworthy at scale.
Requirements:
BS or MS in Computer Science or a related technical field.
5+ years of experience as a Software Developer or Data Engineer.
5+ years of hands-on Python development experience as part of an engineering team.
Extensive experience with different database technologies (SQL and NoSQL), with a deep understanding of data modeling, relationships, constraints, and data types.
Hands-on experience building and deploying production-grade data pipelines in cloud environments (GCP preferred), with full lifecycle ownership - from development to production.
Hands-on experience developing, testing, and deploying production-grade applications - from coding and reviews to CI/CD, containerization, and production operations.
Hands-on experience with Kubernetes, including deploying and managing applications with Helm and configuring production-ready environments.
Solid understanding of modern data lake and data warehouse concepts, including the separation of compute, storage, and metadata layers. Hands-on experience with Trino + Iceberg or similar architectures is a strong advantage.
This position is open to all candidates.