We are looking for a Data Engineering Tech Lead.
What will you be responsible for?
Lead the design and development of scalable, high-performance data workflows, including both batch pipelines and real-time data products.
Define, implement, and enforce engineering best practices related to code quality, testing, CI/CD pipelines, observability, and documentation.
Mentor, support, and grow a team of data engineers, fostering a collaborative and high-performance engineering culture.
Identify opportunities to create new data assets and features that expand product capabilities and value proposition.
Drive architectural decision-making in areas of data modeling, storage solutions, and compute resources within cloud environments such as Databricks and Snowflake.
Collaborate closely with cross-functional stakeholders, including Product, DevOps, and R&D, to ensure effective delivery and platform stability.
Promote and champion a data-driven mindset across the organization, balancing technical rigor with business context and strategic goals.
Requirements:
Minimum 5 years of hands-on experience designing, building, and maintaining large-scale data pipelines for both batch processing and streaming use cases.
Deep expertise in Python and SQL, with a focus on writing clean, performant, and maintainable code.
Strong analytical and problem-solving skills, with the ability to break down complex technical challenges and align solutions to business objectives.
Solid background in data modeling, analytics, and designing architectures for scalability, performance, and cost efficiency.
Practical experience working with modern OLAP systems and cloud data platforms, including Databricks, Snowflake, or BigQuery.
Familiarity with AI agent protocols (such as A2A, MCP) and LLM-related technologies (e.g., vector databases, embeddings) is a plus.
AI-savvy: comfortable adopting AI tools and staying current with emerging AI trends and technologies.
This position is open to all candidates.