Our Senior Data Engineer will play an essential role in building the underlying infrastructure for collecting, storing, processing, and analyzing large sets of data, collaborating with researchers, architects, and engineers to design and build high-quality data processing for our flows.
In this role, you will be responsible for end-to-end development of data pipelines and data models, working with major data flows that include structured and unstructured data. You will also be responsible for operating parts of our production system. Your focus will be on developing and integrating systems that retrieve and analyze data that influences people's lives. This role, based in our Tel Aviv office, is hybrid, with at least two days per week in the office.
Our technology stack: Python, Spark, Airflow, DBT, Kafka, AWS, Docker, Kubernetes, MongoDB, Redis, Postgres, Elasticsearch, and more.
The ideal candidate will be:
A technology enthusiast who loves data and gets a shiver of excitement from tech innovations
Driven to know how things work, with an even greater desire to improve them
Intellectually curious, finding unusual ways to solve problems
Comfortable taking on challenges and learning new technologies
Comfortable working in a fast-paced dynamic environment
Requirements: Bachelor's degree in Computer Science or a related field
6+ years of experience designing and implementing server-side data solutions
Proven experience creating and optimizing big data processes, pipelines, and architectures
Proven experience with AWS ecosystem
Proven experience with Kubernetes in production
Experience with at least one high-level language (e.g., Python or Scala)
Experience working on production-grade projects
Familiarity with Python
Familiarity with Apache Airflow - Advantage
Familiarity with distributed systems and technologies (Spark, Hadoop, MapReduce) - Advantage
This position is open to all candidates.