Jobs » Software » Senior ML Engineer - 2545

Confidential company
Location: Merkaz
We are looking for a senior ML engineer to join us and build groundbreaking systems designed to handle massive-scale data at an unparalleled magnitude.
In our team we engineer mission-critical solutions that address some of the most complex and high-stakes challenges at a national level.
Our unique data poses novel challenges, pushing us to continually innovate and redefine what's possible.
As a Senior ML Engineer you will:
Engineer, design, and implement robust, high-performance data-driven pipelines and infrastructure.
Design critical systems for production environments, including observability, monitoring, CI/CD pipelines, and resource management.
Requirements:
3+ years of experience in ML engineering.
Experience in Python and SQL development.
Experience in design and implementation of production-ready systems and data-oriented pipelines.
Familiarity with modern CI/CD development practices and tools.
Familiarity with queuing technologies such as Kafka and RabbitMQ, as well as workflow orchestration tools (e.g. Airflow, Prefect, Flyte).
Familiarity with networking protocols (TCP/IP, UDP, and the 5-layer model).
Experience in monitoring and orchestration, including familiarity with tools such as Prometheus and Grafana.
This position is open to all candidates.
 
Job ID: 8218539

Confidential company
Location: Herzliya
Job Type: Full Time
We are looking for a skilled and motivated Software Engineer to join our backend data infrastructure team. You will be at the core of our data ecosystem, building and maintaining high-performance data services and pipelines that support both real-time and batch workloads. Your work will directly impact how data is accessed and leveraged across the company, from live production environments to ML training pipelines. You will design and maintain systems that span hybrid infrastructure (on-prem and cloud), and ensure our data platform is fast, reliable, and scalable. We value engineers who are curious, open-minded, and excited to learn new technologies and practices as the landscape evolves.
As a Big Data Engineer, you will:
Design, implement, and maintain backend services for managing and processing large-scale data.
Build and operate production-grade data pipelines and infrastructure.
Develop utilities, libraries, and services to support high-throughput data retrieval and access patterns.
Ensure observability, stability, and performance of data services in hybrid (cloud/on-prem) environments.
Monitor and troubleshoot issues in live systems and continuously improve their robustness.
Work cross-functionally to ensure data is accessible, well-modeled, and easy to consume by other teams.
Requirements:
Strong programming experience in at least one of the following: C++, Java, Rust, .NET, or Python.
Experience working with python data analytics libraries (such as numpy, pandas, polars).
Experience working on backend services or data-intensive applications.
Understanding of distributed systems, data pipelines, and production monitoring.
Experience in hybrid infrastructure environments (on-prem + cloud).
An open-minded technologist with a willingness to learn and adopt new technologies and best practices.
Nice to Have:
Familiarity with Apache Iceberg or other table/data format technologies (e.g., Delta Lake, Hudi, Parquet, ORC).
Familiarity with streaming technologies such as Kafka and Flink.
Experience with orchestration tools like Airflow or Argo.
Exposure to analytics engines (e.g., Spark, DuckDB, Trino).
Knowledge of Kubernetes and containerized deployments.
Experience in MLOps or supporting machine learning pipelines.
This position is open to all candidates.
 
Job ID: 8218197

Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior MLOps Engineer
Realize your potential by joining the leading performance-driven advertising company!
As a Senior MLOps Engineer in the Infra group, you'll play a vital role in developing, enhancing, and maintaining highly scalable Machine-Learning infrastructures and tools.
About Algo platform:
The objective of the Algo Platform group is to own the existing algo platform (including its health, stability, productivity, and enablement), to facilitate and take part in new platform experimentation within the algo craft, and to lead the platformization of the parts that should graduate to production scale. This includes supporting ongoing ML projects while ensuring smooth operations and infrastructure reliability, and owning a full set of capabilities: design and planning, implementation, and production care.
The group has deep ties with both the algo craft and the infra group. It reports to the infra department and has dotted-line reporting to the algo craft leadership.
The group serves as the professional authority on ML engineering and MLOps, acts as a focal point in a multidisciplinary team of algorithm researchers, product managers, and engineers, and works with the most senior talent within the algo craft to achieve ML excellence.
How you'll make an impact:
As a Senior MLOps Engineer, you'll:
Develop, enhance, and maintain highly scalable Machine-Learning infrastructures and tools, including CI/CD, monitoring, alerting, and more
Have end-to-end ownership: design, develop, deploy, measure, and maintain our machine learning platform, ensuring high availability, high scalability, and efficient resource utilization
Identify and evaluate new technologies to improve performance, maintainability, and reliability of our machine learning systems
Work in tandem with the engineering-focused and algorithm-focused teams in order to improve our platform and optimize performance
Optimize machine learning systems to scale and utilize modern compute environments (e.g. distributed clusters, CPU and GPU) and continuously seek potential optimization opportunities.
Build and maintain tools for automation, deployment, monitoring, and operations.
Troubleshoot issues in our development, production and test environments
Directly influence the way billions of people discover the internet
Our tech stack:
Java, Python, TensorFlow, Spark, Kafka, Cassandra, HDFS, vespa.ai, ElasticSearch, Airflow, BigQuery, Google Cloud Platform, Kubernetes, Docker, Git, and Jenkins.
Requirements:
To thrive in this role, you'll need:
Experience developing large scale systems. Experience with filesystems, server architectures, distributed systems, SQL and No-SQL. Experience with Spark and Airflow / other orchestration platforms is a big plus.
Highly skilled in software engineering methods, with 5+ years of experience.
Passion for ML engineering and for creating and improving platforms
Experience with designing and supporting ML pipelines and models in production environments
Excellent coding skills in Java & Python
Experience with TensorFlow is a big plus
Possess strong problem solving and critical thinking skills
BSc in Computer Science or related field.
Proven ability to work effectively and independently across multiple teams and beyond organizational boundaries
Deep understanding of strong Computer Science fundamentals: object-oriented design, data structures, systems and applications programming, and multithreaded programming
Strong communication skills to be able to present insights and ideas, and excellent English, required to communicate with our global teams.
Bonus points if you have:
Experience in leading Algorithms projects or teams.
Experience in developing models using deep learning techniques and tools
Experience in developing software within a distributed computation framework.
This position is open to all candidates.
 
Job ID: 8205356

Posted 2 days ago
Confidential company
Location: Petah Tikva
Job Type: Full Time
We are at the forefront of developing cutting-edge, real-time, mission-critical platforms in the fields of energy, utilities, and smart infrastructure. We are looking for a passionate and experienced DevOps Engineer with deep expertise in MLOps, Azure DevOps, and ArgoCD, to help drive scalable deployments and intelligent automation in production environments.
Responsibilities:

* Design and maintain CI/CD pipelines in Azure DevOps for both traditional applications and ML workloads.
* Develop and manage MLOps workflows for model training, validation, deployment, and monitoring.
* Automate deployment of services using ArgoCD, Helm, and GitOps best practices.
* Manage and operate production-grade Kubernetes clusters (on-prem and Azure AKS).
* Implement Infrastructure as Code (IaC) using Terraform
* Collaborate closely with data scientists, software engineers, and infrastructure teams to support end-to-end delivery.
* Monitor system performance, availability, and reliability using tools like Prometheus, Grafana, and the EFK stack.
* Ensure best practices in security, version control, logging, and compliance.
Why Join Us:

* Be part of a leading-edge technology company solving real-world challenges in energy and infrastructure.
* Work with a multidisciplinary team of experts in a dynamic and collaborative environment.
* Enjoy continuous professional development.
* Opportunity to work on impactful, large-scale projects with national and global reach.
Requirements:
* 3+ years of experience as a DevOps Engineer in a production environment.
* Proven expertise in Azure DevOps (Pipelines, Repos, Artifacts).
* Proficient in ArgoCD, Kubernetes, and container orchestration.
* Experience with Docker, Git, and versioning strategies.
* Scripting knowledge in Python, Bash, or PowerShell.
* Solid understanding of CI/CD, GitOps, and cloud-native architecture.
* Experience working in hybrid (on-prem + cloud) environments.
* Customer-facing role; fluent English (spoken and written) is required
* Basic network understanding is a must.
Nice to Have:
* Experience in OpenShift platforms
* Experience working in AWS
* Familiarity with monitoring/logging solutions: Prometheus, Grafana, Elasticsearch, Kibana.
* Background working with data pipelines or Real-Time streaming systems.
* Hands-on experience with MLOps platforms (e.g., MLflow, Kubeflow, Azure ML).
* Azure Certifications (e.g., AZ-400, AZ-104).
* Knowledge of different architecture environments.
This position is open to all candidates.
 
Job ID: 8217921

Posted: 28/05/2025
Location: Tel Aviv-Yafo and Netanya
Job Type: Full Time
Required Senior GenAI Infrastructure Engineer - ML
About the Team
We empower AI and data science teams at every stage of the model lifecycle, from training to deployment, offering advanced model security and seamless operations across cloud and self-hosted environments.
As we expand, we're looking for a GenAI Infrastructure Engineer to help shape the future of our GenAI platform, trusted by the world's leading companies to power their next-generation GenAI solutions.
In this role, you'll build cutting-edge AI infrastructure and design and develop core components such as model serving, prompt management, guardrails, gateways, and evaluation frameworks. You'll help provide organizations with the precise tools to integrate GenAI into their workflows and maintain it, ensuring scalability, compliance, and an excellent developer experience.
You'll collaborate with platform, security, AI/ML, and data teams to optimize deployment, monitoring, and governance of GenAI applications at scale. We're looking for data-driven problem-solvers who thrive in fast-paced environments and are passionate about building AI-driven solutions.
If you want to build the infrastructure of the future from concept to production, we'd love to hear from you!
As a GenAI Infrastructure Engineer - ML, you will...
Design and develop the infrastructure of our new GenAI solution.
Develop and manage components such as prompts, guardrails, model gateways, and tracing and evaluation frameworks.
Collaborate with cross-functional teams on shaping how AI applications are structured and managed.
Stay up to date with the latest advancements in GenAI infrastructure, tooling, and frameworks, integrating industry best practices.
Requirements:
5+ years of experience as a Software Engineer
Experienced in designing, developing, and debugging complex, distributed systems (microservices, event-driven)
Proven hands-on experience in containerized environments, microservices and Kubernetes
Experienced with at least one of the main cloud provider platforms (e.g. AWS, GCP)
Experience with designing, structuring, and maintaining Python SDKs
Familiarity with AI application development frameworks and tools (e.g. LangChain, OpenAI, Hugging Face, MLflow)
Experienced in Java and Python
Bonus Points:
Experience working with AI Agents and LLM-based application frameworks
Background in data engineering, feature stores, or vector databases for AI workloads
Previous experience working in developer-first companies or on AI-powered solutions.
This position is open to all candidates.
 
Job ID: 8197259

Posted: 05/06/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're seeking an exceptional Senior Big Data & Machine Learning Engineer who will architect the future of our AI systems and drive technological innovation within our high-performing team.
What you'll be doing
As a key member of our AI team, you will:
Design Scalable Architecture - Architect, implement, and optimize robust ML pipelines that handle massive datasets with elegance and efficiency, from collection through preprocessing to production deployment
Pioneer Technical Solutions - Apply state-of-the-art machine learning technologies to solve complex challenges while maintaining the agility to incorporate emerging innovations
Drive Cross-Functional Excellence - Collaborate strategically with data scientists, software engineers, architects, and product leaders to transform advanced ML solutions into production-ready systems
Requirements:
Expert-level proficiency in Python, Java/Scala with demonstrable production experience
Advanced knowledge of streaming technologies such as Kafka or Kinesis
Mastery of big data processing frameworks including Apache Spark, Trino, Ray, or Dask
Comprehensive experience with Data Lake management, including table formats (Iceberg, Delta, Hudi) and data warehouses (Redshift, BigQuery, Snowflake)
Proven expertise in workflow management systems (Airflow, Kubeflow, Argo)
Strong understanding of microservices architecture and event-driven design
Proven experience with huge volumes of data in production environments.
You might also have:
Cloud platform expertise across AWS, Google Cloud, or Azure, with containerization experience (Docker, Kubernetes)
Hands-on experience with ML frameworks including TensorFlow, PyTorch, and Scikit-Learn
MLOps proficiency including model registry management, experiment tracking (MLFlow, W&B), feature store management (Feast, Tecton), and serving platforms (Seldon Core, KServe, Ray Serve, SageMaker)
Experience with Vector Databases and Generative AI/Large Language Models
Basic knowledge of ML algorithms and data analysis techniques
This position is open to all candidates.
 
Job ID: 8206468

Posted 2 hours ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Data Infra Tech Lead
A day in the life and how you'll make an impact:
We're seeking an experienced and skilled Data Infra Tech Lead to join our Data Infrastructure team and drive the company's data capabilities at scale.
As the company is fast growing, the mission of the data infrastructure team is to ensure the company can manage data at scale efficiently and seamlessly through robust and reliable data infrastructure. As a tech lead, you are required to independently lead the design, development, and optimization of our data infrastructure, collaborating closely with software engineers, data scientists, data engineers, and other key stakeholders. You are expected to own critical initiatives, influence architectural decisions, and mentor engineers to foster a high-performing team.
You will:
Lead the design and development of scalable, reliable, and secure data storage, processing, and access systems.
Define and drive best practices for CI/CD processes, ensuring seamless deployment and automation of data services.
Oversee and optimize our machine learning platform for training, releasing, serving, and monitoring models in production.
Own and develop the company-wide LLM infrastructure, enabling teams to efficiently build and deploy projects leveraging LLM capabilities.
Own the company's feature store, ensuring high-quality, reusable, and consistent features for ML and analytics use cases.
Architect and implement real-time event processing and data enrichment solutions, empowering teams with high-quality, real-time insights.
Partner with cross-functional teams to integrate data and machine learning models into products and services.
Ensure that our data systems are compliant with the data governance requirements of our customers and industry best practices.
Mentor and guide engineers, fostering a culture of innovation, knowledge sharing, and continuous improvement.
Requirements:
7+ years of experience in data infra or backend engineering.
Strong knowledge of data services architecture and MLOps.
Experience with cloud-based data infrastructure, such as AWS, GCP, or Azure.
Deep experience with SQL and NoSQL databases.
Experience with Data Warehouse technologies such as Snowflake and Databricks.
Proficiency in backend programming languages like Python, NodeJS, or an equivalent.
Proven leadership experience, including mentoring engineers and driving technical initiatives.
Strong communication, collaboration, and stakeholder management skills.
Bonus Points:
Experience leading teams working with serverless technologies like AWS Lambda.
Hands-on experience with TypeScript in backend environments.
Familiarity with Large Language Models (LLMs) and AI infrastructure.
Experience building infrastructure for Data Science and Machine Learning.
Experience collaborating with BI developers and analysts to drive business value.
Expertise in administering and managing Databricks clusters.
Experience with streaming technologies such as Amazon Kinesis and Apache Kafka.
This position is open to all candidates.
 
Job ID: 8220200

Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We seek an experienced Software Engineer with a strong background to become an integral member of our Data-Core team, tasked with the mission of processing, structuring, and analyzing hundreds of millions of data sources.
Your role will be pivotal in creating a unified, up-to-date, and accurate utilities map, along with the services and applications that accelerate our mapping operations. Your contributions will directly impact our core product's success.
Responsibilities:
Collaborate with cross-functional teams to design, build, and maintain data processing pipelines while contributing to our common codebase.
Contribute to designing and implementing data architecture, ensuring effective data storage and retrieval.
Develop and optimize complex Python-based applications and services to allow more efficient data processing and orchestration, enhancing the quality and accuracy of our datasets.
Implement geospatial data processing techniques and contribute to the creation of our unified utilities map, enhancing the product's geospatial features.
Drive the scalability and performance optimization of data systems, addressing infrastructure challenges as data volume and complexity grow.
Create and manage data infrastructure components, including ETL workflows, data warehouses and databases, supporting seamless data flow and accessibility.
Design and implement CI/CD processes for data processing, model training, releasing, testing and monitoring, ensuring robustness and consistency.
Requirements:
5+ years of proven experience as a backend/software engineer with a strong Python background.
Experience in deploying a diverse range of cloud-based technologies to support mission-critical projects, including expertise in writing, testing, and deploying code within a Kubernetes environment.
A proven experience in building scalable online services.
Experience with frameworks like Airflow, Docker, and K8S to build data processing and exploration pipelines along with ML infrastructure to power our intelligence.
Experience in AWS/Google cloud environments.
Experience working with both SQL and NoSQL databases such as Postgres, MySQL, Redis, or DynamoDB.
Experience as a Data Infrastructure Engineer or in a similar role in managing and processing large-scale datasets - a significant advantage
This position is open to all candidates.
 
Job ID: 8195498

Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a backend engineer with a strong foundation in building scalable, high-performance systems and a deep understanding of cloud infrastructure, distributed systems, and data pipelines. This role focuses on designing and optimizing backend services that support our machine learning (ML) operations and real-time personalization capabilities.
We foster a professional environment where experienced engineers collaborate to drive technical excellence, continuously improving our backend architecture and infrastructure. As a Backend Engineer, you will play a key role in building and maintaining the backend services that power our ML infrastructure, ensuring efficiency, scalability, and reliability.
Role & Responsibilities:
- Design, develop, and optimize backend services that support ML pipelines, APIs, and real-time decision-making systems.
- Architect and implement scalable and reliable data processing workflows, integrating ML models into production environments.
- Build and maintain infrastructure for efficient model deployment, monitoring, and versioning.
- Ensure high availability, performance, and security of backend services.
- Lead initiatives to improve system architecture, reduce technical debt, and enhance development processes.
- Stay up to date with the latest advancements in backend technologies, cloud computing, and distributed systems.
Requirements:
- 4+ years of experience in backend engineering, designing and developing distributed systems.
- Strong proficiency in Python, Java, or Go for backend development.
- Deep experience with cloud platforms (AWS, GCP, or Azure), including compute, storage, and networking services.
- Experience with containerization and orchestration (Docker, Kubernetes).
- Proficiency in designing and managing scalable databases (SQL & NoSQL: MySQL, PostgreSQL, Redis, Cassandra, etc.).
- Hands-on experience with CI/CD pipelines, infrastructure as code (Terraform, CloudFormation), and automated deployments.
- Familiarity with high-performance APIs and microservices architecture.
- Experience working with ML operations (MLOps) and data pipelines is a plus but not required.
This position is open to all candidates.
 
Job ID: 8181155

Posted: 25/05/2025
Confidential company
Location: Netanya
Job Type: Full Time
DRS RADA is a global pioneer of RADAR systems for active military protection, counter-drone applications, critical infrastructure protection, and border surveillance. Join Our Team as a Senior Data Engineer at DRS RADA Technologies!
Job Summary: We are seeking an experienced Senior Data Engineer to join our data engineering team. In this role, you will play a crucial part in designing, developing, and maintaining scalable data pipelines and infrastructure to support our AI department. This is an opportunity to work with cutting-edge technologies in a fast-paced production environment, driving impactful, data-driven solutions for the business.
Key Responsibilities:
* Design, develop, and optimize ETL/ELT pipelines for large-scale data processing.
* Work with a modern data stack, including Databricks (Spark, SQL), Apache Airflow, and Azure services.
* Troubleshoot and optimize queries and jobs for performance improvements.
* Implement best practices for data governance, security, and monitoring.
* Stay updated with industry trends and emerging technologies in data engineering.
Requirements:
Required Qualifications:
* 4+ years of experience in data engineering or related fields.
* Proficiency in Python for data processing and automation.
* Expertise in Apache Airflow for workflow orchestration - Must
* Deep understanding of Apache Spark and Databricks for big data processing.
* Familiarity with cloud-based environments, particularly Azure
* Advanced proficiency in SQL and query optimizations
* Familiarity with data modeling, ETL/ELT principles, and performance tuning.
* Knowledge of CI/CD, containerization (Docker).
* An enthusiastic, fast-learning, team-oriented, and motivated individual who loves working with data.
If you're passionate about building scalable data solutions and thrive in a fast-paced environment, we'd love to hear from you!
This position is open to all candidates.
 
Job ID: 8054456

Confidential company
Location: Petah Tikva
Job Type: Full Time
We are interested in welcoming an MLOps Engineer to join our diligent team and design, build, test, document, and debug machine learning infrastructure, following industry and company standards.
Requirements:
Essential:
Prior experience in designing, building, testing, and maintaining machine learning (ML) infrastructure to empower data scientists to rapidly iterate on model development
2+ years relevant experience in developing continuous integration, CI/CD deployment pipelines (e.g., Jenkins, GitHub Actions), and bringing ML models to CI/CD pipelines
Familiarity with data, feature and pipeline versioning of ML assets using tools such as CML-DVC or similar
Proficient knowledge of Git, Docker, Containers, and Kubernetes
Fluency in Infrastructure as Code tools (e.g., Terraform, Ansible, or Chef, etc.)
Fluency in common system maintenance and scripting languages, such as Python, Bash Shell, etc.
Good knowledge of Linux system administration
E2E production experience with Azure ML, Azure ML pipelines, AWS SageMaker, and GCP AI Platform
Familiarity with setting up hyperparameter tuning/optimization tools, and using them to manage versioning and experiments, model deployment and monitoring, such as Optuna, Kubeflow, AWS SageMaker, Hydrosphere, Seldon, or similar.
This position is open to all candidates.
 
Job ID: 8218595