We're looking for an experienced Senior Data Engineer to join our growing data team. You'll be at the forefront of crafting a groundbreaking solution that leverages cutting-edge technology to combat fraud. The ideal candidate has a strong background in designing and implementing large-scale data solutions, with the potential to grow into a leadership role. This position requires a deep understanding of modern data architectures and cloud technologies, and the ability to drive technical initiatives that align with business objectives.
Our ultimate goal is to equip our clients with resilient safeguards against chargebacks, empowering them to protect their revenue and optimize their profitability. Join us on this thrilling mission to redefine the battle against fraud.
Your Arena
Design, develop, and maintain scalable, robust data pipelines and ETL processes
Architect and implement complex data models across various storage solutions
Collaborate with R&D teams, data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality solutions
Ensure data quality, consistency, security, and compliance across all data systems
Play a key role in defining and implementing data strategies that drive business value
Contribute to the continuous improvement of our data architecture and processes
Champion and implement data engineering best practices across the R&D organization, serving as a technical expert and go-to resource for data-related questions and challenges
Participate in and sometimes lead code reviews to maintain high coding standards
Troubleshoot and resolve complex data-related issues in production environments
Evaluate and recommend new technologies and methodologies to improve our data infrastructure
Requirements: What It Takes
Must-Haves:
5+ years of experience in data engineering, with strong proficiency in Python and software engineering principles - Must
Extensive experience with at least one major cloud platform (AWS, GCP, or Azure) and cloud-native architectures - Must
Deep knowledge of both relational (e.g., PostgreSQL) and NoSQL databases - Must
Experience designing and implementing data warehouses and data lakes - Must
Strong understanding of data modeling techniques - Must
Expertise in data manipulation libraries (e.g., Pandas) and big data processing frameworks - Must
Experience with data validation tools such as Pydantic & Great Expectations - Must
Proficiency in writing and maintaining unit tests (e.g., Pytest) and integration tests - Must
Nice-to-Haves:
Apache Iceberg - Experience building, managing, and maintaining an Iceberg lakehouse architecture with S3 storage and the AWS Glue catalog - Strong Advantage
Apache Spark - Proficiency in optimizing Spark jobs, understanding partitioning strategies, and leveraging core framework capabilities for large-scale data processing - Strong Advantage
Modern data stack tools - DBT, DuckDB, and data orchestration tools (e.g., Dagster, Apache Airflow, Prefect) - Advantage
Experience designing and developing backend systems, including RESTful API design and implementation, microservices architecture, and event-driven systems (e.g., RabbitMQ, Apache Kafka) - Advantage
Containerization technologies (Docker, Kubernetes) and IaC tools (e.g., Terraform) - Advantage
Stream processing technologies (e.g., Apache Kafka, Apache Flink) - Advantage
Understanding of compliance requirements (e.g., GDPR, CCPA) - Advantage
Experience mentoring junior engineers or leading small project teams
Excellent communication skills with the ability to explain complex technical concepts to various audiences
Demonstrated ability to work independently and lead technical initiatives
Relevant certifications in cloud platforms or data technologies
This position is open to all candidates.