The Data Engineer position is a central role in the Tech Org. The Data Engineering (DE) team is a Change Agent Team that plays a significant role in the company's ongoing migration to cloud technologies, now at an advanced stage. The ideal candidate is a senior data engineer with a strong technical background in data infrastructure, data architecture design, and building robust data pipelines. The candidate must also be able to collaborate effectively with Product Managers, Data Scientists, Onboarding Engineers, and Support staff.
Responsibilities:
Deploy and maintain critical data pipelines in production.
Drive strategic technological initiatives and long-term plans from initial exploration and POC through to going live in a fast-paced production environment.
Design data infrastructure services, coordinating with the Architecture team, R&D teams, Data Scientists, and Product Managers to build scalable data solutions.
Work in an Agile process with Product Managers and other tech teams.
End-to-end responsibility for the development of data crunching and manipulation processes within the company's product.
Design and implement data pipelines and data marts.
Create data tools for various teams (e.g., onboarding teams) that assist them in building, testing, and optimizing the delivery of the company's product.
Explore and implement new data technologies to support our data infrastructure.
Work closely with the core data science team to implement and maintain ML features and tools.
Requirements:
B.Sc. in Computer Science or equivalent.
4+ years of extensive SQL experience (preferably working in a production environment).
Experience with programming languages (preferably Python) is a must.
Experience working with Snowflake or Google BigQuery.
Experience with "Big Data" environments, tools, and data modeling (preferably in a production environment).
Strong capability in schema design and data modeling.
Understanding of microservices architecture.
Experience working closely with BI developers.
Quick self-learner with strong problem-solving capabilities.
Strong communication and collaboration skills.
Process- and detail-oriented.
Passion for solving complex data problems.
Desired:
Familiarity with Airflow, ETL tools, and MSSQL.
Experience with GCP services.
Experience with Docker and Kubernetes.
Experience with PubSub/Kafka.
This position is open to all candidates.