We are looking for a savvy Senior Data Engineer to join our growing team of data experts. The new hire will be responsible for migrating to the cloud, optimizing our customers' databases and data flows, and enriching our operational and functional data flows with AI/ML algorithms.
The ideal candidate is not afraid of data in any form or at any scale, and is experienced with cloud services for ingesting, streaming, storing, and manipulating data. The Data Engineer will support new system designs and migrate existing ones, working closely with solutions architects, project managers, and data scientists. The candidate must be self-directed, a fast learner, and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or re-designing our customers' data architecture to support their next generation of products, data initiatives, and machine learning systems.
Summary of Key Responsibilities:
Keep our customers' data separated and secure to meet compliance and regulatory requirements.
Design, build, and operate the infrastructure required for optimal data extraction, transformation, and loading from a wide variety of data sources using SQL, cloud migration tools, and big data technologies.
Optimize various RDBMS engines in the cloud and solve customers' security, performance, and operational problems.
Design, build, and operate large, complex data lakes that meet functional and non-functional business requirements.
Optimize the ingestion, storage, processing, and retrieval of diverse data types, from near-real-time events and IoT streams to unstructured data such as images, audio, video, and documents.
Work with customers and internal stakeholders, including the Executive, Product, Data, Software Development, and Design teams, to assist with data-related technical issues and support their data infrastructure and business needs.
Requirements:
5+ years of experience in a Data Engineer role in a cloud-native ecosystem.
3+ years of experience with AWS Data Services (mandatory).
Bachelor's degree (graduate degree preferred) in Computer Science, Mathematics, Informatics, Information Systems, or another quantitative field.
Hands-on experience with the following technologies and tools:
Big data tools: Spark, Elasticsearch, Kafka, Kinesis, etc.
Relational SQL databases such as MySQL or Postgres, and NoSQL databases such as DynamoDB or Cassandra.
Programming and scripting languages: Python, Java, Scala, etc.
Advanced SQL
Experience building and optimizing big data pipelines, architectures, and data sets.
Working knowledge of message queuing, stream processing, and highly scalable big data stores.
Experience supporting and working with external customers in a dynamic environment.
Articulate, with strong communication and presentation skills.
Team player who can both train others and learn from them.
Fluency in Hebrew and English is essential.
This position is open to all candidates.