Data Engineer - RT Big Data Systems
Location: Merkaz
Job Type: Full Time
abra R&D is looking for a Data Engineer for RT Big Data Systems to join the team and design and deploy scalable, standardized, and maintainable data pipelines that enable efficient logging, error handling, and real-time data enrichment. The role requires strong ownership of both implementation and performance.
The role includes:
* Optimize Splunk queries and search performance using best practices
* Build and manage data ingestion pipelines from sources like Kafka, APIs, and log streams
* Standardize error structures (error codes, severity levels, categories)
* Create mappings between identifiers such as session ID, user ID, and service/module components
* Implement real-time data enrichment processes using APIs, databases, or lookups
* Set up alerting configurations with thresholds, modules, and logic-based routing
* Collaborate with developers, DevOps, and monitoring teams to unify logging conventions
* Document flows and ensure traceability across environments
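The error-standardization and identifier-mapping work listed above can be sketched in a few lines. This is a minimal sketch only; every field name, source key, and severity mapping here is a hypothetical assumption, not something taken from the posting:

```python
# Illustrative sketch: normalize raw log events into a standardized error
# structure (error code, severity, category) and map source identifiers
# (sid/uid) onto unified names. All field names are assumptions.
SEVERITY_MAP = {
    "warn": "WARNING", "warning": "WARNING",
    "err": "ERROR", "error": "ERROR",
    "crit": "CRITICAL",
}

def standardize_error(raw: dict) -> dict:
    """Map a raw log event onto a unified error schema."""
    return {
        "error_code": raw.get("code", "UNKNOWN"),
        "severity": SEVERITY_MAP.get(str(raw.get("level", "")).lower(), "INFO"),
        "category": raw.get("module", "uncategorized"),
        "session_id": raw.get("sid"),  # hypothetical mapping: sid -> session_id
        "user_id": raw.get("uid"),     # hypothetical mapping: uid -> user_id
    }

event = {"code": "E42", "level": "err", "module": "auth", "sid": "s-1", "uid": "u-9"}
print(standardize_error(event)["severity"])  # ERROR
```

In a real deployment this kind of normalization would typically run before indexing, so that downstream searches and alerts can rely on a single schema.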
Requirements:
* Minimum 3 years of hands-on experience in Splunk – Mandatory
* Proficient in SPL, data parsing, dashboards, macros, and performance tuning – Mandatory
* Experience working with event-driven systems (e.g., Kafka, REST APIs) – Mandatory
* Deep understanding of structured/semi-structured data (JSON, XML, logs) – Mandatory
* Strong scripting ability with Python or Bash
* Familiar with CI/CD processes using tools like Git and Jenkins
* Experience with data modeling, enrichment logic, and system integration
* Advantage: familiarity with log schema standards (e.g., ECS, CIM)
* Ability to work independently and deliver production-ready, scalable solutions – Mandatory
This position is open to all candidates.
 
Job ID: 8304508
05/08/2025
Confidential company
Location: Rosh Haayin
Job Type: Full Time
At Ilyon, we build mobile games enjoyed by millions of players around the world. Our decisions are driven by data, from feature development and live ops to user acquisition and monetization. We’re looking for a skilled Data Engineer to help us take our analytics and data infrastructure to the next level. As a Data Engineer, you’ll design and maintain the systems that power insights across the company. You’ll work closely with data analysts, UA managers, and product teams to ensure clean, fast, and scalable access to our most important asset: data.
Data Pipeline Development
* Design, build, and maintain scalable, robust ETL/ELT pipelines.
* Ingest data from various sources (APIs, databases, flat files, cloud buckets).
* Automate workflows for batch and/or streaming pipelines (e.g., using Airflow, GCP services).
* Design and organize data for analytics teams in cloud warehouses (BigQuery, Snowflake).
* Implement best practices for partitioning, clustering, and materialized views.
* Manage and optimize data infrastructure (cloud resources, storage, compute).
* Ensure scalability, security, and compliance in data platforms.
Data Quality & Governance
* Monitor data integrity, consistency, and accuracy.
* Implement validation, monitoring, and alerting for pipeline health and data accuracy.
* Maintain documentation and data catalogs.
* Troubleshoot failures or performance bottlenecks.
Collaboration & Enablement
* Work closely with data analysts, managers, and developers.
* Translate business requirements into technical solutions.
* Support self-service analytics and create reusable datasets.
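The pipeline-health responsibilities above (validation, monitoring, alerting on completeness and freshness) can be sketched as a small check. The signature, thresholds, and alert wording are illustrative assumptions:

```python
# Minimal sketch of a partition health check: flag a day's load if it is
# incomplete (too few rows) or stale (loaded too long ago). Thresholds
# are made up for illustration.
from datetime import datetime, timedelta, timezone

def check_partition(row_count: int, expected_min: int, loaded_at: datetime,
                    max_lag: timedelta = timedelta(hours=2)) -> list:
    """Return a list of alert messages; an empty list means healthy."""
    alerts = []
    if row_count < expected_min:
        alerts.append(f"completeness: {row_count} rows < expected {expected_min}")
    if datetime.now(timezone.utc) - loaded_at > max_lag:
        alerts.append("freshness: partition older than allowed lag")
    return alerts
```

In practice a check like this would run as an orchestrated task (e.g. an Airflow sensor or post-load task) and route non-empty results to an alerting channel.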
Requirements:
* 2+ years of experience as a Data Engineer or similar role.
* Strong SQL and Python skills for data manipulation and pipeline logic.
* Experience with Airflow for orchestration and Docker/Kubernetes for deployment.
* Hands-on experience with cloud data platforms (GCP, AWS) and warehouses like BigQuery or Snowflake.
* Knowledge of data modeling, optimization, and performance tuning.
* Familiarity with DAX and BI tools like Power BI or Looker.
* Experience with Kafka or Pub/Sub for real-time data ingestion- an advantage.
* Knowledge of Docker, Kubernetes, and cloud-native tools in GCP- an advantage.
* Experience with Firebase Analytics and Unity Analytics (data structure wise)- an advantage.
Our Tech Stack:
* Languages: SQL, Python, DAX
* Orchestration: Airflow, Docker, Kubernetes
* Data Warehouses: BigQuery, Snowflake
* Cloud: GCP, AWS
This position is open to all candidates.
 
Job ID: 8290616
Posted 19 hours ago
Confidential company
Location: Ra'anana
Job Type: Full Time
The ideal candidate is not afraid of data in any form or scale, and is experienced with cloud services to ingest, stream, store, and manipulate data. The Data Engineer will support new system designs and migrate existing ones, working closely with solutions architects, project managers, and data scientists. The candidate must be self-directed, a fast learner, and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or re-designing our customers' data architecture to support their next generation of products, data initiatives, and machine learning systems.

Summary of Key Responsibilities:
To meet compliance and regulatory requirements, keep our customers' data separated and secure.
Design, build, and operate the infrastructure required for optimal data extraction, transformation, and loading from a wide variety of data sources using SQL, cloud migration tools, and big data technologies.
Optimize various RDBMS engines in the cloud and solve customers' security, performance, and operational problems.
Design, build, and operate large, complex data lakes that meet functional / non-functional business requirements.
Optimize various data types' ingestion, storage, processing, and retrieval, from near real-time events and IoT to unstructured data such as images, audio, video, documents, and in between.
Work with customers and internal stakeholders including the Executive, Product, Data, Software Development, and Design teams to assist with data-related technical issues and support their data infrastructure and business needs.
Requirements:
5+ years of experience in a Data Engineer role in a cloud native ecosystem.
3+ years of experience in AWS Data Services (mandatory)
Bachelor's (Graduate preferred) degree in Computer Science, Mathematics, Informatics, Information Systems or another quantitative field.
Working experience with the following technologies/tools:
big data tools: Spark, ElasticSearch, Kafka, Kinesis etc.
Relational SQL and NoSQL databases, such as MySQL or Postgres and DynamoDB or Cassandra.
Functional and scripting languages: Python, Java, Scala, etc.
Advanced SQL.
Experience building and optimizing big data pipelines, architectures and data sets.
Working knowledge of message queuing, stream processing, and highly scalable big data stores.
Experience supporting and working with external customers in a dynamic environment.
Articulate with great communication and presentation skills.
Team player who can train as well as learn from others.
Fluency in Hebrew and English is essential.
This position is open to all candidates.
 
Job ID: 8311509
14/07/2025
Location: Tel Aviv-Yafo and Netanya
Job Type: Full Time
At our company, we're reinventing DevOps and MLOps to help the world's greatest companies innovate -- and we want you along for the ride. This is a special place with a unique combination of brilliance, spirit and just all-around great people. Here, if you're willing to do more, your career can take off. And since software plays a central role in everyone's lives, you'll be part of an important mission. Thousands of customers, including the majority of the Fortune 100, trust our company to manage, accelerate, and secure their software delivery from code to production - a concept we call liquid software. Wouldn't it be amazing if you could join us in our journey?
About the Team
We are seeking a highly skilled Senior Data Engineer to join our company's ML Data Group and help drive the development and optimization of our cutting-edge data infrastructure. As a key member of the company's ML Platform team, you will play an instrumental role in building and evolving our feature store data pipeline, enabling machine learning teams to efficiently access and work with high-quality, real-time data at scale.
In this dynamic, fast-paced environment, you will collaborate with other data professionals to create robust, scalable data solutions. You will be responsible for architecting, designing, and implementing data pipelines that ensure reliable data ingestion, transformation, and storage, ultimately supporting the production of high-performance ML models.
We are looking for data-driven problem-solvers who thrive in ambiguous, fast-moving environments and are passionate about building data systems that empower teams to innovate and scale. We value independent thinkers with a strong sense of ownership, who can take challenges from concept to production while continuously improving our data infrastructure.
As a Data Engineer in our company's ML Data Group you will...
Design and implement large-scale batch & streaming data pipelines infrastructure
Build and optimize data workflows for maximum reliability and performance
Develop solutions for real-time data processing and analytics
Implement data consistency checks and quality assurance processes
Design and maintain state management systems for distributed data processing
Take a crucial role in building the group's engineering culture, tools, and methodologies
Define abstractions, methodologies, and coding standards for the entire Data Engineering pipeline.
Requirements:
5+ years of experience as a Software Engineer with focus on data engineering
Expert knowledge in building and maintaining data pipelines at scale
Strong experience with stream/batch processing frameworks (e.g. Apache Spark, Flink)
Profound understanding of message brokers (e.g. Kafka, RabbitMQ)
Experience with data warehousing and lake technologies
Strong Python programming skills and experience building data engineering tools
Experience with designing and maintaining Python SDKs
Proficiency in Java for data processing applications
Understanding of data modeling and optimization techniques
Bonus Points
Experience with ML model deployment and maintenance in production
Knowledge of data governance and compliance requirements
Experience with real-time analytics and processing
Understanding of distributed systems and cloud architectures
Experience with data visualization and lineage tools/frameworks and techniques.
This position is open to all candidates.
 
Job ID: 8257535
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Data Backend Engineer.
What will you be responsible for?
Design and build distributed data systems that are the backbone of our product innovation.
Architect and implement high-throughput data pipelines capable of handling billions of records with speed and reliability.
Develop custom algorithms for deduplication, data merging, and real-time data updates.
Optimize storage, indexing, and retrieval strategies to manage massive datasets efficiently.
Solve deep engineering challenges in distributed computing environments like Spark, EMR, and Databricks.
Build fault-tolerant, highly available data infrastructure with integrated monitoring and observability.
Partner closely with ML engineers, backend developers, and product managers to turn business needs into scalable, production-grade features.
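The deduplication and data-merging responsibility above can be illustrated with a toy sketch. The record shape, the version field, and the last-write-wins merge rule are assumptions for illustration only:

```python
# Toy sketch of key-based deduplication with last-write-wins merging:
# records sharing a key collapse into one, with fields from newer
# versions overwriting older ones.
def dedupe(records: list) -> list:
    """Collapse records sharing a key; newer versions win, fields merge."""
    merged = {}
    for rec in sorted(records, key=lambda r: r["version"]):
        merged.setdefault(rec["key"], {}).update(rec)  # later versions overwrite
    return list(merged.values())

rows = [
    {"key": "a", "version": 1, "name": "old"},
    {"key": "a", "version": 2, "name": "new"},
    {"key": "b", "version": 1, "name": "only"},
]
print(len(dedupe(rows)))  # 2
```

At the scale the posting describes (billions of records), the same idea would be expressed as a distributed groupBy/reduce in an engine like Spark rather than an in-memory dict.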
Requirements:
4+ years of hands-on experience in backend or data engineering, with a proven track record of building production-grade systems
Expertise in Python (or Java/Scala) with a deep understanding of data structures, algorithms, and performance trade-offs
Demonstrated experience designing and optimizing large-scale distributed data pipelines using technologies like Apache Spark, EMR, Databricks, Airflow, or Kubernetes
Strong command of a variety of storage engines, including Relational (PostgreSQL, MySQL), Document-based (MongoDB), Time-series / Search (ClickHouse, Elasticsearch), Key-value (Redis)
Familiarity with workflow orchestration tools such as Airflow, Dagster, or Prefect
Hands-on experience with message brokers like Kafka or RabbitMQ, and building event-driven systems
Solid foundation in software engineering best practices, including: CI/CD processes, Automated testing, Monitoring, Scalable system design
Experience in building and launching end-to-end data products that are core to business operations
Comfortable experimenting with AI tools and large language models (LLMs) for automation and data enrichment
This position is open to all candidates.
 
Job ID: 8280800
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
As a customer-centric tech company, we created an insurance experience that is smart, instant, and delightful.
You'll be working with a group of like-minded makers, who get a kick out of moving fast and delivering great products. We surround ourselves with some of the smartest, most motivated, creative people who are filled with positive energy and good karma.
Unlike most publicly traded companies, we're nimble and efficient. We take pride in the fact that we still think and operate like a startup. We don't care much about titles and hierarchy and instead focus on innovation, bold moves, and challenging the status quo.
We're built as a lean, data-driven organization that relies on a common understanding of objectives and goals to provide teams with autonomy and ownership. We don't like spending our days in meetings and we skip committees altogether. There's no such thing as going over someone's head. We have zero tolerance for bureaucracy, office politics, and lean-back personalities.

As a Public Benefit Corporation and a certified B-Corp, we deliver environmental and social impact using our products and tech. Through our Giveback program, we partner with organizations such as the ACLU, New Story, The Humane Society, Malala Fund, American Red Cross, 360.org, charity: water, and dozens of others, and have donated millions towards reforestation, education, animal rights, LGBTQ+ causes, access to water, and more.
We're looking for an experienced Data Engineer to join our DataWarehouse team in TLV.

In this role, you will play a pivotal role in the Data Platform organization, leading the design, development, and maintenance of our data warehouse. In your day-to-day, you'll work on data models and backend BI solutions that empower stakeholders across the company and contribute to informed decision-making processes, all while leveraging your extensive experience in business intelligence.

This is an excellent opportunity to be part of establishing a state-of-the-art data stack, implementing cutting-edge technologies in a cloud environment.
In this role you'll:
Lead the design and development of scalable and efficient data warehouse and BI solutions that align with organizational goals and requirements
Utilize advanced data modeling techniques to create robust data structures supporting reporting and analytics needs
Implement ETL/ELT processes to assist in the extraction, transformation, and loading of data from various sources into the semantic layer
Develop processes to enforce schema evaluation, cover anomaly detection, and monitor data completeness and freshness
Identify and address performance bottlenecks within our data warehouse, optimize queries and processes, and enhance data retrieval efficiency
Implement best practices for data warehouse and database performance tuning
Conduct thorough testing of data applications and implement robust validation processes
Collaborate with Data Infra Engineers, Developers, ML Platform Engineers, Data Scientists, Analysts, and Product Managers
Requirements:
3+ years of experience as a BI Engineer or Data Engineer
Proficiency in data modeling, ELT development, and DWH methodologies
SQL expertise and experience working with Snowflake or similar technologies
Prior experience working with DBT
Experience with Python and software development, an advantage
Excellent communication and collaboration skills
Ability to work in an office environment a minimum of 3 days a week
This position is open to all candidates.
 
Job ID: 8297063
13/07/2025
Location: Tel Aviv-Yafo and Netanya
Job Type: Full Time
As a Big Data & GenAI Engineering Lead within our company's Data & AI Department, you will play a pivotal role in building the data and AI backbone that empowers product innovation and intelligent business decisions. You will lead the design and implementation of our company's next-generation lakehouse architecture, real-time data infrastructure, and GenAI-enriched solutions, helping drive automation, insights, and personalization at scale. In this role, you will architect and optimize our modern data platform while also integrating and operationalizing Generative AI models to support go-to-market use cases. This includes embedding LLMs and vector search into core data workflows, establishing secure and scalable RAG pipelines, and partnering cross-functionally to deliver impactful AI applications.
As a Big Data & GenAI Engineering Lead in our company you will...
Design, lead, and evolve our companys petabyte-scale Lakehouse and modern data platform to meet performance, scalability, privacy, and extensibility goals.
Architect and implement GenAI-powered data solutions, including retrieval-augmented generation (RAG), semantic search, and LLM orchestration frameworks tailored to business and developer use cases.
Partner with product, engineering, and business stakeholders to identify and develop AI-first use cases, such as intelligent assistants, code insights, anomaly detection, and generative reporting.
Integrate open-source and commercial LLMs securely into data products using frameworks such as LangChain, or similar, to augment AI capabilities into data products.
Collaborate closely with engineering teams to drive instrumentation, telemetry capture, and high-quality data pipelines that feed both analytics and GenAI applications.
Provide technical leadership and mentorship to a cross-functional team of data and ML engineers, ensuring adherence to best practices in data and AI engineering.
Lead tool evaluation, architectural PoCs, and decisions on foundational AI/ML tooling (e.g., vector databases, feature stores, orchestration platforms).
Foster platform adoption through enablement resources, shared assets, and developer-facing APIs and SDKs for accessing GenAI capabilities.
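The retrieval step at the heart of the RAG pipelines mentioned above reduces to ranking stored document vectors by similarity to a query vector. A toy sketch with made-up two-dimensional vectors, where a real system would use an embedding model and a vector database:

```python
# Toy sketch of RAG retrieval: rank (text, vector) pairs by cosine
# similarity to a query vector and return the top-k texts. Vectors and
# documents here are invented for illustration.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve(query_vec, docs, k=2):
    """docs: list of (text, vector). Return top-k texts by similarity."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

docs = [
    ("about cats", [1.0, 0.0]),
    ("about dogs", [0.0, 1.0]),
    ("cats and dogs", [0.7, 0.7]),
]
print(retrieve([1.0, 0.1], docs, k=1))  # ['about cats']
```

The retrieved texts would then be injected into the LLM prompt as grounding context; frameworks like LangChain wrap exactly this pattern.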
Requirements:
8+ years of experience in data engineering, software engineering, or MLOps, with hands-on leadership in designing modern data platforms and distributed systems.
Proven experience implementing GenAI applications or infrastructure (e.g., building RAG pipelines, vector search, or custom LLM integrations).
Deep understanding of big data technologies (Kafka, Spark, Iceberg, Presto, Airflow) and cloud-native data stacks (e.g., AWS, GCP, or Azure).
Proficiency in Python and experience with GenAI frameworks like LangChain, LlamaIndex, or similar.
Familiarity with modern ML toolchains and model lifecycle management (e.g., MLflow, SageMaker, Vertex AI).
Experience deploying scalable and secure AI solutions with proper attention to privacy, hallucination risk, cost management, and model drift.
Ability to operate in ambiguity, lead complex projects across functions, and translate abstract goals into deliverable solutions.
Excellent communication and collaboration skills, with a passion for pushing boundaries in both data and AI domains.
This position is open to all candidates.
 
Job ID: 8255562
28/07/2025
Location: Ramat Gan
Job Type: Full Time and Hybrid work
Responsibilities and Duties:

Data Integration (60%)
Design, build, and maintain seamless data integrations with customers using Streams, Databases (DBs), and APIs.
Ensure efficient data mapping, transformation, and validation processes.
Optimize integration pipelines for performance and scalability.
Customer Support Issue Resolution (30%)
Troubleshoot and resolve integration or data-related issues for customers.
Act as a trusted technical advisor, helping clients optimize their data processes.
Work closely with Customer Success teams to enhance the onboarding experience and reduce time to value.
Continuous Improvement (10%)
Provide insights to enhance data integration processes, improve system efficiency, and contribute to automation efforts.
Identify opportunities to streamline workflows and implement best practices.
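The mapping, transformation, and validation flow described above can be sketched as follows. The field map and validation rules are hypothetical, chosen only to show the shape of the work:

```python
# Illustrative sketch of a customer-integration transform: rename source
# fields to a target schema, then validate the result and reject bad
# rows. Field names and rules are assumptions.
FIELD_MAP = {"cust_id": "customer_id", "amt": "amount"}

def transform(row: dict) -> dict:
    """Map a source row to the target schema; raise on invalid rows."""
    out = {FIELD_MAP.get(k, k): v for k, v in row.items()}
    errors = []
    if "customer_id" not in out:
        errors.append("missing customer_id")
    amount = out.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        errors.append("invalid amount")
    if errors:
        raise ValueError("; ".join(errors))
    return out

print(transform({"cust_id": 1, "amt": 5.0}))  # {'customer_id': 1, 'amount': 5.0}
```

Rejected rows would typically be routed to a dead-letter table for the troubleshooting work the role describes, rather than silently dropped.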
Requirements:
Must-have:

1-2 years (max) of experience working with SQL databases (MySQL, SQL Server, PostgreSQL).
Strong analytical skills with the ability to understand end-to-end (E2E) data processes and optimize workflows.
Excellent teamwork and communication skills to effectively collaborate with multi-functional teams.
Strong troubleshooting and problem-solving skills in data transformation and pipeline issues.
Strong ability to write high-performance, reusable SQL queries across multiple databases.
Degree in Industrial Engineering, Information Systems, or related field.


Advantage:

Russian-speaking is a big advantage.
Experience working with large-scale data integration projects.
Experience with MongoDB, JavaScript, Groovy, or API development.
Familiarity with data streaming technologies and cloud-based integrations.
Background in customer-facing technical roles or working closely with CS teams
This position is open to all candidates.
 
Job ID: 8276286
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for our first dedicated Data Engineer: a self-motivated and proactive professional with a strong can-do attitude and a sense of ownership. This role involves taking responsibility across all data domains within the company, working closely with our analytics and development teams to build and maintain the data infrastructure that supports business needs. This position is ideal for someone ready to independently lead data engineering efforts and make a meaningful impact.

Responsibilities:
Design, develop, and maintain scalable data pipelines and ETL workflows using tools such as Python, dbt, and Airflow.
Architect and optimize our data warehouse to support efficient analytics, reporting, and business intelligence at scale.
Model and structure data from multiple internal and external sources (such as Salesforce, Jira, Mixpanel, etc.) into clean, reliable, and analytics-ready datasets.
Collaborate closely with our systems architect, analytics, and development teams to translate business requirements into robust and efficient technical data solutions.
Monitor and optimize pipeline performance to ensure data completeness and scalability.
Serve as a key partner and subject-matter expert on all data-related topics within the team.
Implement data quality checks, anomaly detection and validation processes to ensure data reliability.
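One simple form of the anomaly detection mentioned above is a z-score rule on a pipeline metric such as daily row count. A minimal sketch, with the threshold chosen arbitrarily:

```python
# Minimal sketch of data-quality anomaly detection: flag today's metric
# if it deviates from the recent mean by more than z_threshold standard
# deviations. The 3.0 default is an arbitrary illustrative choice.
import statistics

def is_anomalous(history: list, today: float, z_threshold: float = 3.0) -> bool:
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean
    return abs(today - mean) / stdev > z_threshold

counts = [1000, 1020, 980, 1010, 995]
print(is_anomalous(counts, 400))  # True
```

In a dbt/Airflow stack like the one described, a check of this kind would usually run as a test or sensor after each load and page the on-call owner on failure.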
Requirements:
3+ years of hands-on experience as a Data Engineer or in a similar role.
Expert-level SQL skills, capable of performing complex table transformations and designing efficient data workflows.
Proficiency in Python for data processing and scripting tasks.
Experience building and maintaining ELT/ETL pipelines using dbt.
Hands-on experience with orchestration tools such as Airflow.
Deep understanding of data warehouse concepts and methodologies, including data modeling.
Self-motivated, capable of working autonomously while effectively collaborating with stakeholders to deliver end-to-end solutions.
B.Sc. in Information Systems Engineering, Computer Science, Industrial Engineering, or a related field.
This position is open to all candidates.
 
Job ID: 8304059
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Staff Algo Data Engineer
Realize your potential by joining the leading performance-driven advertising company!
As a Staff Algo Data Engineer in the Infra group, you'll play a vital role in developing, enhancing, and maintaining highly scalable Machine-Learning infrastructures and tools.
About Algo platform:
The objective of the algo platform group is to own the existing algo platform (including health, stability, productivity and enablement), to facilitate and be involved in new platform experimentation within the algo craft and lead the platformization of the parts which should graduate into production scale. This includes support of ongoing ML projects while ensuring smooth operations and infrastructure reliability, owning a full set of capabilities, design and planning, implementation and production care.
The group has deep ties with both the algo craft as well as the infra group. The group reports to the infra department and has a dotted line reporting to the algo craft leadership.
The group serves as the professional authority when it comes to ML engineering and ML ops, serves as a focal point in a multidisciplinary team of algorithm researchers, product managers, and engineers and works with the most senior talent within the algo craft in order to achieve ML excellence.
How you'll make an impact:
As a Staff Algo Data Engineer, you'll bring value by:
Develop, enhance and maintain highly scalable Machine-Learning infrastructures and tools, including CI/CD, monitoring and alerting and more
Have end-to-end ownership: Design, develop, deploy, measure and maintain our machine learning platform, ensuring high availability, high scalability and efficient resource utilization
Identify and evaluate new technologies to improve performance, maintainability, and reliability of our machine learning systems
Work in tandem with the engineering-focused and algorithm-focused teams in order to improve our platform and optimize performance
Optimize machine learning systems to scale and utilize modern compute environments (e.g. distributed clusters, CPU and GPU) and continuously seek potential optimization opportunities.
Build and maintain tools for automation, deployment, monitoring, and operations.
Troubleshoot issues in our development, production and test environments
Influence directly on the way billions of people discover the internet
Our tech stack:
Java, Python, TensorFlow, Spark, Kafka, Cassandra, HDFS, vespa.ai, ElasticSearch, AirFlow, BigQuery, Google Cloud Platform, Kubernetes, Docker, git and Jenkins.
Requirements:
Experience developing large scale systems. Experience with filesystems, server architectures, distributed systems, SQL and No-SQL. Experience with Spark and Airflow / other orchestration platforms is a big plus.
Highly skilled in software engineering methods. 5+ years experience.
Passion for ML engineering and for creating and improving platforms
Experience with designing and supporting ML pipelines and models in production environment
Excellent coding skills in Java & Python
Experience with TensorFlow a big plus
Possess strong problem solving and critical thinking skills
BSc in Computer Science or related field.
Proven ability to work effectively and independently across multiple teams and beyond organizational boundaries
Deep understanding of strong Computer Science fundamentals: object-oriented design, data structures, systems and applications programming, and multi-threaded programming
Strong communication skills to be able to present insights and ideas, and excellent English, required to communicate with our global teams.
Bonus points if you have:
Experience in leading Algorithms projects or teams.
Experience in developing models using deep learning techniques and tools
Experience in developing software within a distributed computation framework.
This position is open to all candidates.
 
Job ID: 8272673
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Algo Data Engineer
Realize your potential by joining the leading performance-driven advertising company!
As a Senior Algo Data Engineer in the Infra group, you'll play a vital role in developing, enhancing, and maintaining highly scalable Machine-Learning infrastructures and tools.
About Algo platform:
The objective of the algo platform group is to own the existing algo platform (including health, stability, productivity and enablement), to facilitate and be involved in new platform experimentation within the algo craft and lead the platformization of the parts which should graduate into production scale. This includes support of ongoing ML projects while ensuring smooth operations and infrastructure reliability, owning a full set of capabilities, design and planning, implementation and production care.
The group has deep ties with both the algo craft as well as the infra group. The group reports to the infra department and has a dotted line reporting to the algo craft leadership.
The group serves as the professional authority when it comes to ML engineering and ML ops, serves as a focal point in a multidisciplinary team of algorithm researchers, product managers, and engineers and works with the most senior talent within the algo craft in order to achieve ML excellence.
How you'll make an impact:
As a Senior Algo Data Engineer, you'll bring value by:
Develop, enhance and maintain highly scalable Machine-Learning infrastructures and tools, including CI/CD, monitoring and alerting and more
Have end-to-end ownership: Design, develop, deploy, measure and maintain our machine learning platform, ensuring high availability, high scalability and efficient resource utilization
Identify and evaluate new technologies to improve performance, maintainability, and reliability of our machine learning systems
Work in tandem with the engineering-focused and algorithm-focused teams in order to improve our platform and optimize performance
Optimize machine learning systems to scale and utilize modern compute environments (e.g. distributed clusters, CPU and GPU) and continuously seek potential optimization opportunities.
Build and maintain tools for automation, deployment, monitoring, and operations.
Troubleshoot issues in our development, production and test environments
Influence directly on the way billions of people discover the internet
Our tech stack:
Java, Python, TensorFlow, Spark, Kafka, Cassandra, HDFS, vespa.ai, ElasticSearch, AirFlow, BigQuery, Google Cloud Platform, Kubernetes, Docker, git and Jenkins.
Requirements:
To thrive in this role, youll need:
Experience developing large scale systems. Experience with filesystems, server architectures, distributed systems, SQL and No-SQL. Experience with Spark and Airflow / other orchestration platforms is a big plus.
Highly skilled in software engineering methods. 5+ years experience.
Passion for ML engineering and for creating and improving platforms
Experience with designing and supporting ML pipelines and models in production environment
Excellent coding skills in Java & Python
Experience with TensorFlow a big plus
Possess strong problem solving and critical thinking skills
BSc in Computer Science or related field.
Proven ability to work effectively and independently across multiple teams and beyond organizational boundaries
Deep understanding of strong Computer Science fundamentals: object-oriented design, data structures, systems and applications programming, and multi-threaded programming
Strong communication skills to be able to present insights and ideas, and excellent English, required to communicate with our global teams.
Bonus points if you have:
Experience in leading Algorithms projects or teams.
Experience in developing models using deep learning techniques and tools
Experience in developing software within a distributed computation framework.
This position is open to all candidates.
 
Job ID: 8274042