Job Type: Full Time
Required Data Infrastructure Engineer
What You'll Do:
Design, implement, and enhance robust and scalable infrastructure that enables efficient deployment, monitoring, and management of machine learning models in production. In this role, you will bridge the gap between research and production environments, streamline data and feature pipelines, optimize model serving, and ensure governance and reproducibility across our ML lifecycle.
Responsibilities:
Decouple data prep from model training to accelerate experimentation and deployment
Build efficient data workflows with versioning, lineage, and optimized resource use (e.g., Snowflake, Dask, Airflow)
Develop reproducible training pipelines with MLflow, supporting GPU and distributed training
Automate and standardize model deployment with pre-deployment testing (E2E, dark mode)
Maintain a model repository with traceability, governance, and consistent metadata
Monitor model performance, detect drift, and trigger alerts across the ML lifecycle
Enable model comparison with A/B testing and continuous validation
Support infrastructure for deploying LLMs, embeddings, and advanced ML use cases
Manage a unified feature store with history, drift detection, and centralized feature/label tracking
Establish a single source of truth for features across research and production.
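The drift-detection responsibility above can be sketched in plain Python with the Population Stability Index (PSI), comparing a reference (training-time) feature distribution against a live one. This is an illustrative sketch, not part of the posting: the function name, the 10-bin default, and the alert thresholds in the note below are all assumptions.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference and a live sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def fractions(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)  # clamp the top edge
            counts[i] += 1
        # Floor empty bins at a small epsilon so the log term stays finite
        return [max(c / len(xs), 1e-4) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A PSI near 0 means the live distribution matches the reference; by a common rule of thumb, values above roughly 0.25 indicate significant drift and could feed the alerting described above.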
Requirements:
3+ years of experience as an MLOps, ML Infrastructure, or Software Engineer in ML-driven environments, preferably with PyTorch.
Strong proficiency in Python, SQL (leveraging platforms like Snowflake and RDS), and distributed computing frameworks (e.g., Dask, Spark) for processing large-scale data in formats like Parquet.
Hands-on experience with feature stores, key-value stores like Redis, MLflow (or similar tools), Kubernetes, Docker, cloud infrastructure (AWS, specifically S3 and EC2), and orchestration tools (Airflow).
Proven ability to build and maintain scalable and version-controlled data pipelines, including real-time streaming with tools like Kafka.
Experience in designing and deploying robust ML serving infrastructures with CI/CD automation.
Familiarity with monitoring tools and practices for ML systems, including drift detection and model performance evaluation.
Nice to Have:
Experience with GPU optimization frameworks and distributed training.
Familiarity with advanced ML deployments, including NLP and embedding models.
Knowledge of data versioning tools (e.g., DVC) and infrastructure-as-code practices.
Prior experience implementing structured A/B testing or dark mode deployments for ML models.
This position is open to all candidates.
Job ID: 8367169

Job Type: Full Time
Required Data Engineer
As part of our Data Engineering team, you will not only build scalable data platforms but also directly enable portfolio growth by supporting new funding capabilities, loan sales and securitization, and improving cost efficiency through automated and trusted data flows that evolve our accounting processes.
Responsibilities:
Design and build data solutions that support our core business goals, from enabling capital market transactions (loan sales and securitization) to providing reliable insights for reducing the cost of capital.
Develop advanced data pipelines and analytics to support finance, accounting, and product growth initiatives.
Create ELT processes and SQL queries that load data into the data warehouse and other downstream data stores.
Develop data-driven finance products that accelerate funding capabilities and automate accounting reconciliations.
Own and evolve data lake pipelines, maintenance, schema management, and improvements.
Create new features from scratch, enhance existing features, and optimize existing functionality.
Collaborate with stakeholders across Finance, Product, Backend Engineering, and Data Science to align technical work with business outcomes.
Implement new tools and modern development approaches that improve both scalability and business agility.
Ensure adherence to coding best practices and development of reusable code.
Constantly monitor the data platform and make recommendations to enhance architecture, performance, and cost efficiency.
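The ELT work described above typically centers on an idempotent staging-to-warehouse merge. A minimal sketch, using an in-memory SQLite database as a stand-in for Snowflake/Redshift; the table and column names (`stg_loans`, `dim_loans`) are hypothetical, not from the posting:

```python
import sqlite3

# In-memory stand-in for the warehouse; schema is illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg_loans (loan_id INTEGER, balance REAL, updated_at TEXT);
    CREATE TABLE dim_loans (loan_id INTEGER PRIMARY KEY, balance REAL, updated_at TEXT);
""")

def upsert_loans(conn):
    """Idempotent ELT step: merge the staging batch into the warehouse table."""
    conn.execute("""
        INSERT INTO dim_loans (loan_id, balance, updated_at)
        SELECT loan_id, balance, updated_at FROM stg_loans
        WHERE true  -- required by SQLite to disambiguate INSERT...SELECT upserts
        ON CONFLICT(loan_id) DO UPDATE SET
            balance = excluded.balance,
            updated_at = excluded.updated_at
        WHERE excluded.updated_at > dim_loans.updated_at  -- keep only newer data
    """)
    conn.execute("DELETE FROM stg_loans")  # clear staging after the merge
    conn.commit()
```

Because the merge only overwrites rows with strictly newer `updated_at` values and clears staging afterwards, re-running the step on the same batch leaves the warehouse unchanged, which is the fault-tolerance property the requirements below ask for.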
Requirements:
4+ years of experience as a Data Engineer.
4+ years of Python and SQL experience.
4+ years of direct experience with SQL (Redshift/Snowflake), data modeling, data warehousing, and building ELT/ETL pipelines (DBT & Airflow preferred).
3+ years of experience in scalable data architecture, fault-tolerant ETL, and data quality monitoring in the cloud.
Hands-on experience with cloud environments (AWS preferred) and big data technologies (EMR, EC2, S3, Snowflake, Spark Streaming, Kafka, DBT).
Strong troubleshooting and debugging skills in large-scale systems.
Deep understanding of distributed data processing and tools such as Kafka, Spark, and Airflow.
Experience with design patterns, coding best practices, and data modeling.
Proficiency with Git and modern source control.
Basic Linux/Unix system administration skills.
Nice to Have:
Familiarity with fintech business processes (funding, securitization, loan servicing, accounting) - a huge advantage.
BS/MS in Computer Science or related field.
Experience with NoSQL or large-scale DBs.
DevOps experience in AWS.
Microservices experience.
2+ years of experience in Spark and the broader Data Engineering ecosystem.
What Else:
Energetic and data-enthusiastic mindset.
Ability to translate complex technical work into business impact.
Analytical and detail-oriented.
Strong communication skills with both technical and business teams.
Self-motivated, fast learner, and team player.
This position is open to all candidates.
Job ID: 8367166