We are looking for a Data Platform Engineer.
We work in a flexible, hybrid model, so you can choose the home-office balance that works best for you.
Key Responsibilities:
Design and implement scalable data processing pipelines using Azure Databricks and other modern data platforms to power real-time analytics and insights.
Push boundaries by pioneering new technologies and stretching existing ones to their limits; be a trailblazer in data innovation.
Build and maintain the supporting data processing and analytics infrastructure, optimizing pipelines for performance and cost efficiency.
Collaborate cross-functionally with software engineering, product, and research teams to understand data needs and ensure data solutions meet business requirements.
Engage with strategic customers, ensuring our data solutions meet high standards and deliver measurable value in real-world environments.
Requirements:
4+ years of experience in data engineering, including cloud-based data solutions.
Proven expertise in implementing large-scale data solutions. 
Proficiency in Python and SQL; PySpark is a plus.
Experience with ETL processes.
Experience with cloud data technologies such as Databricks (Apache Spark) and Azure Data Factory.
Strong analytical and problem-solving skills, with the ability to evaluate and interpret complex data.
Experience leading and designing data solutions end-to-end, integrating with multiple teams, and driving tasks to completion.
Advantages:
Familiarity with on-premises or cloud storage systems.
Excellent communication and collaboration skills, with the ability to work effectively in a multidisciplinary team.
This position is open to all candidates.