Big Data Engineering Lead
As a Big Data Engineering Lead within our Data & AI Department, you will play a pivotal role in designing, implementing, and optimizing Data & GenAI solutions that drive innovation and enable intelligent decision-making across our organization. You will lead the design of our future-proof data platform, ensuring it is scalable, open, extensible, high-performing, private, and secure, and you will shape GenAI solutions in the data domain. You will partner with engineering and business stakeholders to understand their data needs and deliver technological solutions that create business value, leveraging cutting-edge AI technologies and robust data strategies. This is an opportunity to shape the future of data and AI and make a lasting impact.
As a Big Data Engineering Lead, you will...
Lead the design and development of our petabyte-scale Lakehouse and modern data platform, drive architectural decisions, and provide technical leadership to ensure it meets scalability, performance, privacy, and security requirements.
Collaborate closely with top-notch engineers in implementation efforts to ensure alignment with architectural vision, tackle tough problems, and deliver creative solutions.
Provide hands-on expertise through platform development and conduct architecture proof-of-concepts to validate and recommend tools, technologies, and design decisions.
Promote the adoption and utilization of the data platform by collaborating with stakeholders to identify impactful use cases, developing enablement resources, and ensuring the platform delivers measurable business value.
Evaluate and recommend tools and technologies that support and expand Data & AI-driven decision-making and make data production and consumption widespread across the organization.
Requirements:
Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, or a related field, or equivalent industry experience.
8+ years of experience in data engineering or a related role, preferably in large-scale, complex environments, including prior experience as a software engineer.
Strong knowledge of big data technologies and data pipeline tools such as Kafka, Airflow, Spark, Iceberg, and Presto, as well as cloud-native services. Proficiency in programming languages such as Python, Java, or Scala.
Demonstrated success in leading major data initiatives, including building data architectures from the ground up.
Strong understanding of the intersection between software and data engineering, with experience in designing systems to meet complex and evolving data requirements.
Excellent communication, presentation, and stakeholder engagement skills, with the ability to work cross-functionally, convey complex technical concepts, and align diverse stakeholders toward common goals.
Comfortable operating and thriving in unexplored or ambiguous territory, with a high level of independence and self-motivation.
This position is open to all candidates.