Jobs » Software » Senior Data Engineering
Collected from a website
16/05/2024
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Data Engineer
We are looking for a talented and experienced Data Engineer to join our data engineering team. In this role, you will be responsible for designing, implementing, and maintaining our data pipelines and infrastructure. You will collaborate closely with cross-functional teams, including data scientists, analysts, and software engineers, to ensure the availability, accessibility, and quality of our data assets. Your expertise in data processing, database technologies, and ETL processes will be essential in enabling effective data utilization and analysis.
Responsibilities:
Design, develop, and maintain efficient and scalable data pipelines that extract, transform, and load data from various sources into target systems.
Monitor and optimize data pipelines for performance, reliability, and data quality.
Implement and manage data storage solutions, including relational databases, data warehouses, and NoSQL databases.
Implement data partitioning, indexing, and compression strategies for improved performance and scalability.
Identify and resolve data quality issues, including data cleansing, standardization, and validation.
Identify and implement performance optimization techniques to enhance data processing speed, query performance, and system scalability.
Monitor and analyze data infrastructure and pipeline performance, identifying bottlenecks and implementing optimizations.
Stay updated on emerging technologies and trends in data engineering, proactively identifying opportunities for innovation and improvement.
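The extract, transform, and load flow described in these responsibilities can be illustrated with a minimal sketch; the field names and data-quality rule below are hypothetical, not taken from the posting.

```python
# Minimal ETL sketch: extract rows from a source, normalize them,
# and load them into a target store. All names are illustrative.

def extract(source_rows):
    """Pull raw records from an upstream source (here, an in-memory list)."""
    return list(source_rows)

def transform(rows):
    """Normalize field names and types; drop records missing a primary key."""
    out = []
    for row in rows:
        if not row.get("id"):
            continue  # basic data-quality gate
        out.append({"id": int(row["id"]), "name": row.get("name", "").strip().lower()})
    return out

def load(rows, target):
    """Idempotent load: keyed by id, so re-runs do not duplicate records."""
    for row in rows:
        target[row["id"]] = row
    return target

raw = [{"id": "1", "name": " Alice "}, {"name": "no-id"}, {"id": "2", "name": "Bob"}]
warehouse = {}
load(transform(extract(raw)), warehouse)
print(sorted(warehouse))  # -> [1, 2]
```

Keying the load on the record id is what makes a pipeline re-run safe, which is one common reading of the "reliability" requirement above.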
Requirements:
Relevant Bachelor's degree, preferably in CS, Engineering/Information Systems, or an equivalent software engineering background.
5+ years of experience as a Data Engineer.
Strong SQL skills and hands-on experience with SQL and NoSQL databases, including analysis and performance optimization.
Hands-on experience with Python or an equivalent programming language.
Experience with data warehouse solutions (e.g., BigQuery, Redshift, Snowflake).
Experience with data modeling, data catalog concepts, data formats, and data pipeline/ETL design, implementation, and maintenance.
Experience with AWS/GCP cloud services such as GCS/S3, Lambda/Cloud Functions, EMR/Dataproc, Glue/Dataflow, and Athena.
Experience with Airflow and dbt (advantage).
Experience with MLOps (advantage).
Experience with development practices such as Agile, CI/CD, and TDD (advantage).
This position is open to all candidates.
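The data cleansing, standardization, and validation work listed under Responsibilities might look like this minimal sketch; the schema rules and field names are assumptions for illustration only.

```python
# Standardize and validate records before they enter a pipeline.
# The rules here are illustrative, not from the posting.
import re

def standardize(record):
    """Trim whitespace and lowercase emails; a typical cleansing step."""
    rec = dict(record)
    rec["email"] = rec.get("email", "").strip().lower()
    return rec

def validate(record):
    """Return a list of rule violations (empty means the record is clean)."""
    errors = []
    if not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", record.get("email", "")):
        errors.append("invalid email")
    if record.get("age") is not None and not (0 <= record["age"] <= 120):
        errors.append("age out of range")
    return errors

clean = standardize({"email": "  User@Example.COM ", "age": 30})
print(clean["email"], validate(clean))  # -> user@example.com []
```

Returning a list of violations, rather than raising on the first one, lets bad records be routed to a quarantine table instead of halting the pipeline.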
 
Job #7723424
 
Posted by the employer
15/05/2024
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
A global sports technology hub offering multiple solutions to millions of users worldwide. The company provides real-time results, updated stats, original content, aggregated information, customized news feeds, and more. Our products are based on cutting-edge technologies that enable live updates and on-demand content libraries of the highest quality and scale.
We are looking for a talented and experienced Data Engineer to join our data engineering team. In this role, you will be responsible for designing, implementing, and maintaining our data pipelines and infrastructure. You will collaborate closely with cross-functional teams, including data scientists, analysts, and software engineers, to ensure the availability, accessibility, and quality of our data assets. Your expertise in data processing, database technologies, and ETL processes will be essential in enabling effective data utilization and analysis.
Responsibilities:
* Design, develop, and maintain efficient and scalable data pipelines that extract, transform, and load data from various sources into target systems.
* Monitor and optimize data pipelines for performance, reliability, and data quality.
* Implement and manage data storage solutions, including relational databases, data warehouses, and NoSQL databases.
* Implement data partitioning, indexing, and compression strategies for improved performance and scalability.
* Identify and resolve data quality issues, including data cleansing, standardization, and validation.
* Identify and implement performance optimization techniques to enhance data processing speed, query performance, and system scalability.
* Monitor and analyze data infrastructure and pipeline performance, identifying bottlenecks and implementing optimizations.
* Stay updated on emerging technologies and trends in data engineering, proactively identifying opportunities for innovation and improvement.
Requirements:
* Relevant Bachelor's degree, preferably in CS, Engineering/Information Systems, or an equivalent software engineering background.
* 5+ years of experience as a Data Engineer.
* Strong SQL skills and hands-on experience with SQL and NoSQL databases, including analysis and performance optimization.
* Hands-on experience with Python or an equivalent programming language.
* Experience with data warehouse solutions (e.g., BigQuery, Redshift, Snowflake).
* Experience with data modeling, data catalog concepts, data formats, and data pipeline/ETL design, implementation, and maintenance.
* Experience with AWS/GCP cloud services such as GCS/S3, Lambda/Cloud Functions, EMR/Dataproc, Glue/Dataflow, and Athena.
* Experience with Airflow and dbt (advantage).
* Experience with MLOps (advantage).
* Experience with development practices such as Agile, CI/CD, and TDD (advantage).
This position is open to all candidates.
 
Job #7722112
 
Collected from a website
20/05/2024
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're seeking talented data engineers to join our rapidly growing team, which includes senior software and data engineers.
Together, we drive our data platform from acquisition and processing to enrichment, delivering valuable business insights. Join us in designing and maintaining robust data pipelines, making an impact in our collaborative and innovative workplace.
Responsibilities:
Design, implement, and optimize scalable data pipelines for efficient data processing and analysis.
Build and maintain robust data acquisition systems to gather, process, and store data from various sources.
Work closely with DevOps engineers, Data Science and Product teams to understand their requirements and provide data solutions that meet business objectives.
Proactively monitor data pipelines and production environments to identify and resolve issues promptly.
Implement best practices for data security, integrity, and performance.
Mentor and guide junior team members, sharing expertise and fostering their professional development.
Requirements:
6+ years of experience in data or backend engineering, preferably with Python proficiency for data tasks.
Demonstrated experience in designing, developing, and delivering sophisticated data applications
Ability to thrive under pressure, consistently delivering results, and making strategic prioritization decisions in challenging situations.
Hands-on experience with data pipeline orchestration and data processing tools, particularly Apache Airflow and Spark.
Deep experience with public cloud platforms, preferably GCP, and expertise in cloud-based data storage and processing.
Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
Bachelor's degree in Computer Science, Information Technology, or a related field or equivalent experience.
Advantage:
Familiarity with data science tools and libraries.
Experience with Docker containers and Kubernetes.
This position is open to all candidates.
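"Proactively monitor data pipelines... to identify and resolve issues promptly," as this posting puts it, often comes down to retries with exponential backoff plus an alert when retries are exhausted. A stdlib-only sketch, with a hypothetical task and alert sink:

```python
# Retry a flaky pipeline step with exponential backoff, alerting on exhaustion.
import time

def run_with_retries(step, max_attempts=3, base_delay=0.01, alerts=None):
    """Run `step` up to max_attempts times; record an alert if all attempts fail."""
    alerts = alerts if alerts is not None else []
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception as exc:
            if attempt == max_attempts:
                alerts.append(f"step failed: {exc}")
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff

calls = {"n": 0}
def flaky_step():
    """Simulated step that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(run_with_retries(flaky_step))  # -> ok
```

In a real deployment the alert sink would be a pager or Slack webhook rather than a list, but the retry-then-escalate shape is the same.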
 
Job #7728200
 
Collected from a website
08/05/2024
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an exceptional data engineer to join our journey to create a data-based platform that will transform the shipping industry.
As a senior data engineer, you will play an instrumental part in laying the foundations of the company's data infrastructure: you will plan and implement the relevant infrastructure and tools, own the pipelining of the company's data, and be responsible for data integrity as we scale up.
Responsibilities:
Contribute to the ongoing management of the data team's platform, including platform enhancements, capacity management, performance monitoring, and troubleshooting and resolution of technical issues.
Design and optimize data storage solutions, including data warehouses and data lakes.
Identify, design, and implement process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
Develop and maintain data pipelines for efficient extraction, transformation, and loading (ETL) processes.
Participate in proofs-of-concept on platform innovation, and effectively transition and scale those concepts into production through engineering, deployment, and commercialization.
Work as part of a cross-disciplinary team, collaborating closely with other data engineers, software engineers, data scientists, data managers, and business partners.
Requirements:
BSc or MSc degree in Computer Engineering, Computer Science, or a related discipline.
8+ years of experience in data/software engineering within a production environment.
Strong programming skills (e.g., Python, SQL).
Familiarity with Docker, Kubernetes, and cloud services (AWS, GCP).
Experience with SQL/NoSQL databases and data platforms (e.g., Redshift, Postgres, MongoDB, Spark).
Proficiency in data modeling and database management.
This position is open to all candidates.
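The data-modeling and database skills this posting asks for can be miniaturized with the stdlib sqlite3 module; the shipping-flavored schema and rows below are invented for the example, and the upsert shows why loads keyed on a primary key are safe to re-run.

```python
# A toy warehouse table with an idempotent load (upsert on the primary key).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE shipments (
        shipment_id INTEGER PRIMARY KEY,
        port        TEXT NOT NULL,
        teu         INTEGER NOT NULL   -- container volume
    )
""")

# Note the repeated shipment_id 1: a re-delivered record updates in place.
rows = [(1, "Ashdod", 120), (2, "Haifa", 80), (1, "Ashdod", 150)]
conn.executemany(
    "INSERT INTO shipments VALUES (?, ?, ?) "
    "ON CONFLICT(shipment_id) DO UPDATE SET teu = excluded.teu",
    rows,
)

total = conn.execute("SELECT SUM(teu) FROM shipments").fetchone()[0]
print(total)  # -> 230
```

The same `ON CONFLICT ... DO UPDATE` pattern exists (with dialect differences) in Postgres and Redshift-style warehouses mentioned above.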
 
Job #7715298
 
Collected from a website
3 days ago
Confidential company
Location: Tel Aviv-Yafo and Ramat Gan
Job Type: Full Time
Required Senior Data Engineer
About the role:
You will specialize in designing and building world-class, scalable data architectures, ensuring reliable data flow and integration for groundbreaking biotechnological research. Your expertise in big data tools and pipelines will accelerate our ability to derive actionable insights from complex datasets, driving innovations that improve patient outcomes and deliver life-saving treatment solutions.
In this role, you will work closely with data scientists, analysts, and other cross-functional teams to understand their data needs and requirements. You will also be responsible for ensuring that data is easily accessible and can be used to support data-driven decision making.
Location: Tel Aviv
What will you do?
Design, build, and maintain data pipelines to extract, transform, and load data from various sources, including databases, APIs, and flat files
Enhance our data warehouse system to dynamically support multiple analytics use cases
Reimplement and productise scientific computational methods
Implement data governance policies and procedures to ensure data quality, security, and privacy
Collaborate with data scientists and other cross-functional teams to understand their data needs and requirements
Develop and maintain documentation for data pipelines, processes, and systems.
Requirements:
We will only consider data engineers with strong coding skills with an extensive background in data orchestration, data warehousing and ETL tools.
Required qualifications:
Bachelor's or Master's degree in a related field (e.g. computer science, data science, engineering, computational biology)
At least 5 years of experience with programming languages, specifically Python
At least 3 years of experience as a Data Engineer, ideally across multiple data ecosystems
Proficiency in SQL and experience with database technologies (e.g. MySQL, PostgreSQL, Oracle)
Familiarity with data storage technologies (e.g. HDFS, NoSQL databases)
Experience with ETL tools (e.g. Apache Beam, Apache Spark)
Experience with orchestration tools (e.g. Apache Airflow, Dagster)
Experience with data warehousing technologies (ideally BigQuery)
Experience working with large and complex data sets
Experience working in a cloud environment
Strong problem-solving and communication skills
Familiarity with biotech or healthcare data - an advantage
Desired personal traits:
You want to make an impact on humankind
You prioritize We over I
You enjoy getting things done and striving for excellence
You collaborate effectively with people of diverse backgrounds and cultures
You constantly challenge your own assumptions, pushing for continuous improvement
You have a growth mindset
You make decisions that favor the company, not yourself or your team
You are candid, authentic, and transparent.
This position is open to all candidates.
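Loading from flat files, one of the source types named in "What will you do?", can be sketched with the stdlib csv module; the gene-expression file contents are made up, and unparseable rows are routed to a rejects list rather than crashing the load.

```python
# Parse a CSV "flat file" and coerce types before loading downstream.
import csv
import io

RAW = """sample_id,gene,expression
S1,TP53,2.5
S2,BRCA1,0.9
S3,TP53,not_a_number
"""

def ingest(text):
    """Yield typed rows, routing unparseable ones to a rejects list."""
    good, rejects = [], []
    for row in csv.DictReader(io.StringIO(text)):
        try:
            good.append({"sample_id": row["sample_id"],
                         "gene": row["gene"],
                         "expression": float(row["expression"])})
        except ValueError:
            rejects.append(row)
    return good, rejects

good, rejects = ingest(RAW)
print(len(good), len(rejects))  # -> 2 1
```

Keeping a rejects channel is one simple form of the data-quality governance the posting mentions: bad rows are preserved for inspection instead of silently dropped.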
 
Job #7738992
 
Collected from a website
22/05/2024
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Data Engineer
Full-Time
Tel Aviv
Who are you?
You are a seasoned Data Engineer with a deep understanding of data modeling, massively parallel processing (in both real-time and batch) and bringing machine learning capabilities into large-scale production systems. You have experience at a cutting-edge startup and are passionate about building the data infrastructure that fuels the world's first intelligent agent. You are a team player with excellent collaboration and communication skills and a can-do approach.
What you'll be doing:
Build, maintain, and scale data pipelines for both batch and real-time data processing across multiple sources and ecosystems.
Design and implement robust APIs and integrate diverse data systems to support data collection and aggregation.
Develop and manage advanced data architectures, including lakehouses, streamhouses, and data warehouses.
Collaborate with data scientists and other stakeholders to implement effective data solutions and integrate large language models (LLMs) into our systems.
Work with cross-functional teams to define business needs and translate them into technical implementations that leverage your deep understanding of data architectures and software engineering best practices.
Develop and lead initiatives to manage, monitor, and debug data systems, enhancing their reliability, efficiency, and overall quality.
Requirements:
3+ years of experience in designing and managing sophisticated lakehouse and data warehouse architectures, ensuring scalable, efficient, and reliable data storage solutions.
3+ years of experience building and maintaining ETLs using Apache Spark.
2+ years of experience working with streaming technologies (e.g., Apache Kafka, Pub/Sub) and implementing real-time data pipelines using stream processing technologies (e.g., Spark Streaming, Cloud Functions).
3+ years of experience with SQL and distributed query engines such as Presto and Trino, with a strong focus on analyzing and optimizing query plans to develop efficient and complex queries.
2+ years of experience developing APIs using Python, with proficiency in asynchronous programming and task queues.
Proven expertise in deploying and managing Spark applications on enterprise-grade platforms such as Amazon EMR, Kubernetes (K8s), and Google Cloud Dataproc.
Solid understanding of distributed systems and experience with open file formats such as Paimon and Iceberg.
2+ years of experience developing infrastructure that brings machine learning capabilities to production, using solutions such as Kubeflow, SageMaker, and Vertex AI.
3+ years of experience writing production-grade Python code and working with both relational and non-relational databases.
Solid understanding of software engineering concepts, design patterns, and best practices, with the ability to architect solutions and integrate different system components.
Proven experience working with unstructured data, complex data sets, and data modeling.
Advantage: Demonstrated experience orchestrating containerized applications in AWS and GCP using EKS and GKE.
Advantage: Proficiency in Scala and Java.
This position is open to all candidates.
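Real-time stream processing of the kind listed above often reduces to windowed aggregation. Here is a stdlib sketch of a tumbling-window event count; the event timestamps and 5-second window are illustrative, and a real system (Spark Streaming, Kafka Streams) would also handle late and out-of-order events.

```python
# Group timestamped events into fixed (tumbling) windows and count them.
from collections import Counter

def tumbling_window_counts(events, window_seconds):
    """events: (epoch_seconds, payload) pairs -> Counter keyed by window start."""
    counts = Counter()
    for ts, _payload in events:
        window_start = ts - (ts % window_seconds)  # floor to window boundary
        counts[window_start] += 1
    return counts

events = [(0, "a"), (3, "b"), (5, "c"), (9, "d"), (10, "e")]
print(dict(tumbling_window_counts(events, 5)))  # -> {0: 2, 5: 2, 10: 1}
```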
 
Job #7731552
 
Collected from a website
20/05/2024
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking an adept Senior Data Engineer with a passion for tackling complex challenges across a diverse range of technologies.
Your role will involve a deep commitment to software design, code quality, and performance optimization.
As part of our Engineering team, your mission will be to empower critical-infrastructure operators to detect, investigate, and respond to complex attacks and data breaches on their networks.
You will play a pivotal role in developing pipelines to efficiently extract, transform, and load massive volumes of data.
Your expertise will contribute to the creation of a scalable, high-performance data lake that serves as a foundation for other services within the platform. Additionally, you will be responsible for translating intricate requirements into meticulous and actionable designs.
Responsibilities:
Be a significant part of the development of data pipelines to efficiently extract, transform, and load vast volumes of data.
Architect and build a scalable, high-performance data lake that supports various services within the platform.
Translate intricate requirements into meticulous design plans, maintaining a focus on software design, code quality, and performance.
Collaborate with cross-functional teams to implement data-warehousing and data-modeling techniques.
Apply your expertise in Core Linux, SQL, and scripting languages to create robust solutions.
Leverage your proficiency in cloud platforms such as AWS, GCP, or Azure to drive strong data engineering practices.
Utilize your experience with streaming frameworks, such as Kafka, to handle real-time data processing.
Employ your familiarity with industry-standard visualization and analytics tools, like Tableau and R, to provide insightful data representations.
Demonstrate strong debugging skills, identifying issues such as race conditions and memory leaks.
Solve complex problems with an analytical mindset and contribute to a positive team dynamic.
Bring your excellent interpersonal skills to foster collaboration and maintain a positive attitude within the team.
Requirements:
5+ years of experience in developing large-scale cloud systems.
Proficiency in Core Linux, SQL, and at least one scripting language.
Strong data engineering skills with expertise in cloud platforms like AWS, GCP, or Azure.
Expertise in developing pipelines for ETL processes, handling extensive data loads.
Familiarity with streaming frameworks, such as Kafka, or similar technologies.
Knowledge of data-warehousing and data-modeling techniques.
Practical experience with industry-wide visualization and analytics tools such as Tableau, R, etc.
Strong understanding of operating system concepts.
Proven ability to diagnose and address issues like race conditions and memory leaks.
Adept problem solver with analytical thinking abilities.
Outstanding interpersonal skills and a positive attitude.
Demonstrated ability to collaborate effectively within a team.
Advantages:
Previous experience working on on-premises solutions.
This position is open to all candidates.
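Diagnosing race conditions, one of the debugging skills this posting asks for, is easiest to see in a toy shared counter: the sketch below shows the lock that makes the read-modify-write increment safe across threads (the counts are arbitrary).

```python
# A shared counter incremented from many threads; the lock prevents the
# classic lost-update race on the read-modify-write of `counter`.
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:  # remove this lock and increments can be lost
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # -> 40000
```

Without the lock, `counter += 1` compiles to separate load and store steps that can interleave, so the final value would be nondeterministic and usually below 40000.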
 
Job #7727785
 
Collected from a website
22/05/2024
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are currently seeking a Data Architect to join our fast-paced company with a success-driven culture. Our Data team specializes in delivering modern data processing and analytics solutions on top of the major public clouds. We work in a multi-cloud environment on a wide variety of projects for leading customers, including successful startups and key enterprises.
As a data architect, you will collaborate with our customers to design data analytics solutions, focus on delivering the right solution, and drive feature innovation based on customer needs.
Job Responsibilities:
Architect data solutions and platforms on major public clouds; own technical design and development.
Design, develop, and operate data pipelines for extraction, transformation, and loading (ETL/ELT) of data from a variety of data sources.
Write efficient SQL queries and optimize their performance.
Write technical designs and turn business requirements into technical solutions.
Be a team player with a can-do attitude.
Partner with analysts and data scientists to help ship projects into production-grade systems.
Be a focal point for data externalization, and develop advanced dashboards.
Work with analysts and software engineers to develop data pipelines for highly complex processes using a combination of HDFS-like file systems, Spark/Flink, Presto, Kafka, and more.
Requirements:
Bachelor's degree or equivalent practical experience.
Excellent communication skills, including the ability to identify and communicate data-driven insights.
2+ years of hands-on experience with DevOps tools and practices (e.g., CI/CD, IaC, containers) - must.
2+ years of hands-on experience in Scala/Java/Python - must.
2+ years of hands-on experience writing complex ETL pipelines using a parallel processing engine (e.g., Apache Spark, Apache Beam) - must.
Experience with cloud data lakes.
Experience with data warehousing platforms (Snowflake/Redshift/BigQuery/Azure Synapse Analytics).
Experience with data manipulation via scripting and coding (SQL, Python).
Experience with at least one of the major public cloud platforms (AWS/GCP/Azure).
Experience with NoSQL databases such as Elasticsearch, Redis, MongoDB, Cassandra, or commercial equivalents.
Advanced Qualifications:
Experience with orchestration tools like Airflow, Prefect
Experience with stateful stream processing
Experience with dimensional data modeling or other schema design practices for Data Warehouses
Experience with data modeling and reporting
Experience with Kubernetes/Kubeflow
Experience translating business needs into data tables and reports.
This position is open to all candidates.
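The parallel-processing pattern behind engines like Spark or Beam can be miniaturized with stdlib concurrent.futures: partition the data, map a transform over the partitions in parallel, then combine. The round-robin partitioning and squaring transform below are illustrative only.

```python
# Split a dataset into partitions and transform them in parallel,
# mimicking the map stage of a parallel processing engine.
from concurrent.futures import ThreadPoolExecutor

def transform_partition(partition):
    """A per-partition transform: square each value."""
    return [x * x for x in partition]

data = list(range(10))
partitions = [data[i::4] for i in range(4)]  # round-robin partitioning

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(transform_partition, partitions))

flat = sorted(x for part in results for x in part)
print(flat[:5])  # -> [0, 1, 4, 9, 16]
```

Real engines add what this sketch omits: shuffles between stages, spill-to-disk, and fault tolerance when a partition's worker dies.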
 
Job #7731557
 
Collected from a website
16/05/2024
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an exceptional Data Engineering Team Lead who is passionate about architecting, building, and maintaining data infrastructure from the ground up.
Responsibilities:
Build and maintain the company's central data platform.
Own the data architecture: ETL, Pipelines, Databases, Data integrations, and more.
Continuously maintain a healthy and up-to-date data tech stack, guaranteeing scalability and robustness.
Understand the organization's data needs and provide solutions to other data stakeholders, such as data analysts, engineers and product managers.
Apply state-of-the-art data engineering development methodologies, including code reviews, monitoring, tests (unit, component), infrastructure as code, CI/CD, and more.
Provide expert-level technical leadership and mentoring to the team.
Collaborate with Analysts and Business Stakeholders on data requirements and functional specifications.
Establish data infrastructure for the Data Science team.
Requirements:
5+ years of hands-on experience as a BI Developer/Data Engineer - a must
2+ years of managerial experience - a must
BA/BSc in a related field (such as CS, engineering, or information systems) or equivalent.
High proficiency in SQL.
High proficiency in Python - a must
Experience working with cloud DBs (such as Snowflake or Redshift).
Hands-on experience developing end-to-end ETL/ELT processes.
Proven experience with data warehousing, modeling paradigms, and architectures; proficient in DWH methodologies and best practices.
Experience with data visualization tools such as Tableau or Power BI - advantage
Strong analytical skills and attention to detail.
Comfortable working independently and leading projects from end to end.
This position is open to all candidates.
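A common ELT step implied by the warehousing work above is deduplicating change records down to the latest version per business key before loading a dimension table. A minimal sketch, with invented field names:

```python
# Keep only the most recent record per business key: a typical
# deduplication step before loading a warehouse dimension.

def latest_per_key(records, key="id", version="updated_at"):
    """Return one record per key, keeping the highest `version` value."""
    best = {}
    for rec in records:
        k = rec[key]
        if k not in best or rec[version] > best[k][version]:
            best[k] = rec
    return list(best.values())

records = [
    {"id": 1, "updated_at": "2024-05-01", "status": "new"},
    {"id": 1, "updated_at": "2024-05-20", "status": "active"},
    {"id": 2, "updated_at": "2024-05-10", "status": "new"},
]
deduped = latest_per_key(records)
print(sorted(r["status"] for r in deduped))  # -> ['active', 'new']
```

In SQL this is the familiar `ROW_NUMBER() OVER (PARTITION BY id ORDER BY updated_at DESC)` pattern; ISO-8601 date strings compare correctly as plain strings, which is what makes the sketch work.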
 
Job #7723299
 
Collected from a website
20/05/2024
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for brilliant data engineers to join our rapidly growing team.
You will join a team consisting of senior software and data engineers that drive our data platform from data acquisition, to processing and enrichment and all the way to business insights.
You will join an early stage team and company and will have a major impact on the decisions and architecture of our various products.
Responsibilities:
Design and build data acquisition pipelines that acquire, clean, and structure large datasets to form the basis of our data platform and IP
Design and build data pipelines integrating many different data sources and forms
Define architecture, evaluate tools and open source projects to use within our environment
Develop and maintain features in production to serve our customers
Collaborate with product managers, data scientists, data analysts and full-stack engineers to deliver our product to top tier retail customers
Take a leading part in the development of our enterprise-grade technology platform and ecosystem
Harmonize and clean large datasets from different sources.
Requirements:
At least 6 years of experience in software engineering in Python or an equivalent language
At least 3 years of experience with data engineering products from early-stage concept to production rollouts
Experience with cloud platforms (GCP, AWS, Azure), working on production workloads at large scale and complexity
Hands-on experience with data pipeline building and tools (Luigi, Airflow, etc.), specifically on cloud infrastructure
Advantage: hands-on experience with relevant data analysis tools (Jupyter notebooks, Anaconda, etc.)
Advantage: hands-on experience with data science tools, packages, and frameworks
Advantage: hands-on experience with ETL flows
Advantage: hands-on experience with Docker/Kubernetes.
This position is open to all candidates.
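"Harmonize and clean large datasets from different sources" can be sketched as mapping source-specific field names onto one canonical schema; the source names and field mappings below are invented for illustration.

```python
# Map heterogeneous source records onto one canonical schema.
SOURCE_FIELD_MAPS = {
    "source_a": {"sku": "product_id", "cost": "price"},
    "source_b": {"item_no": "product_id", "unit_price": "price"},
}

def harmonize(record, source):
    """Rename a source record's fields to the canonical schema."""
    mapping = SOURCE_FIELD_MAPS[source]
    return {canon: record[raw] for raw, canon in mapping.items()}

a = harmonize({"sku": "X1", "cost": 9.5}, "source_a")
b = harmonize({"item_no": "X1", "unit_price": 9.9}, "source_b")
print(a["product_id"] == b["product_id"])  # -> True
```

Once every source speaks the canonical schema, downstream joins and enrichment (the "data platform and IP" work above) no longer need source-specific logic.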
 
Job #7728201
 
Collected from a website
20/05/2024
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Data Engineering Team Lead to join our in-house infrastructure group and play a pivotal role in contributing to the company's ongoing success. Come join us!
What You Will Do:
First, you will design a data pipeline architecture for obtaining, storing, ingesting, and routing large-scale amounts of data from various sources worldwide, both proprietary and commercial, in a distributed manner.
You will handle real-time, high-volume data feeds that need to be accurately timestamped and recorded without congesting the network.
You will then lead a team of engineers to implement, maintain, and consistently enhance the data pipeline; evaluate different data providers based on accuracy, cost, and richness; keep track of changes; monitor the quality of data sources; and suggest new ways to optimize storage and reduce latency.
Your Impact:
Conceiving innovative and novel ideas in all stages of the ETL pipeline.
End-to-end ownership POC, design, development cycles, deployment, and support.
Creative thinking using a state-of-the-art tech stack.
Requirements:
Solid programming foundation (e.g., data structures and algorithms, performance, paradigms, revision control, CI/CD, testing).
Proven experience designing and building a large-scale data pipeline.
BSc in Computer Science or equivalent software engineering fundamentals (autodidact or military experience).
Can-do mentality, intellectual curiosity, self-motivation, and the ability to communicate within and across teams.
5+ years of experience with modern C++ (C++11/14/17).
Experience with Linux and scripting (Python and/or Bash).
Knowledge and understanding of communication protocols at different layers (TCP, HTTPS, and more).
Preferred Qualifications:
A passion for capital markets
Experience with Spark, Hadoop, Snowflake or similar
This position is open to all candidates.
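Accurately timestamping a high-volume feed, as this role describes, usually means stamping each message on receipt with a monotonic clock and batching writes so per-record I/O does not congest the system. A stdlib Python sketch of the shape (the production version here would be C++; batch size and tick payloads are invented):

```python
# Stamp records on arrival and flush them in batches to avoid per-record I/O.
import time

class FeedRecorder:
    def __init__(self, batch_size=3):
        self.batch_size = batch_size
        self.buffer, self.flushed = [], []

    def on_message(self, payload):
        # Monotonic clock: immune to wall-clock adjustments mid-capture.
        self.buffer.append((time.monotonic(), payload))
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Persist the buffered batch (here: an in-memory list stands in)."""
        self.flushed.extend(self.buffer)
        self.buffer.clear()

rec = FeedRecorder(batch_size=3)
for tick in ["AAPL 1.0", "AAPL 1.1", "MSFT 2.0", "AAPL 1.2"]:
    rec.on_message(tick)
print(len(rec.flushed), len(rec.buffer))  # -> 3 1
```

Stamping before buffering is the key ordering choice: the timestamp records arrival time, not the (later, batch-dependent) write time.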
 
Job #7727451