We are looking for a brilliant, quick-learning Data Engineer to join our data engineering team - an independent, logical thinker who understands the importance of data structuring for macro-level business decisions.
The position combines strong technical skills with business orientation: you will work closely with the analysts and the R&D team, directly influencing cross-departmental decisions across the company. Our Data Engineer should be able to speak in both technical and practical terms - and, more importantly, move fluently between the two - while tackling challenges and creating new ones, to make our team even better than it is.
Roles and Responsibilities:
Creating and structuring end-to-end data pipelines & ETLs: from the source all the way to the analysts' hands, giving them the ideal conditions to make smart, data-driven business decisions.
Cracking top industry data challenges while initiating and building creative technical solutions - an in-house Device Graph, Server-to-Server integrations with multiple systems, privacy challenges, Online-to-Offline, and more.
Developing a deep understanding of business needs, technical requirements, and the company's roadmap - and translating them into custom-made data solutions and scalable products.
Crafting code that follows best practices to ensure efficiency, while integrating CI/CD principles.
Writing multi-step, scalable processes over more than 50 data sources - Marketing, Operations, CS, Product, CRM, and more - tying them together into a valuable and useful source of insights for the analysts.
Understanding data challenges and weaknesses, and managing high-standard monitoring and reliability processes.
Requirements:
B.A. / B.Sc. degree in a highly quantitative field - a must.
4-5 years of hands-on experience as a Data Engineer, querying data warehouses (SQL) and structuring data processes using quantitative techniques - a must.
Fast learner with high attention to detail and a proven ability to multitask across several projects at a time - a must.
Practical experience with Python and infrastructure in a data context - a must.
Strong analytical skills and the ability to deep-dive into details - a must.
Experience with Google Cloud data tools (BigQuery, Cloud Composer/Airflow) - a must.
Experience with AutoML, Dataflow, or Neo4j - a plus.
Practical experience with distributed data processing frameworks such as Spark - a plus.
Experience analyzing data and extracting insights - a plus.
Experience working at a large-scale, data-driven company - a plus.
This position is open to all candidates.