Senior Solutions Engineer, Big Data & Data Infrastructure

Posted 53 minutes ago
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking an experienced Solutions Data Engineer who possesses both technical depth and strong interpersonal skills, to partner with internal and external teams and develop scalable, flexible, cutting-edge solutions. Solutions Engineers collaborate with operations and business development to help craft solutions to customer business problems.
A Solutions Engineer balances the various aspects of a project, from safety to design, researches advanced technology and best practices in the field, and seeks out cost-effective solutions.
Job Description:
We're looking for a Solutions Engineer with deep experience in Big Data technologies, real-time data pipelines, and scalable infrastructure: someone who's been delivering critical systems under pressure and knows what it takes to bring complex data architectures to life. This isn't just about checking boxes on tech stacks; it's about solving real-world data problems, collaborating with smart people, and building robust, future-proof solutions.
In this role, you'll partner closely with engineering, product, and customers to design and deliver high-impact systems that move, transform, and serve data at scale. You'll help customers architect pipelines that are not only performant and cost-efficient but also easy to operate and evolve.
We want someone who's comfortable switching hats between low-level debugging, high-level architecture, and communicating clearly with stakeholders of all technical levels.
Key Responsibilities:
Build distributed data pipelines using technologies like Kafka, Spark (batch & streaming), Python, Trino, Airflow, and S3-compatible data lakes, designed for scale, modularity, and seamless integration across real-time and batch workloads.
Design, deploy, and troubleshoot hybrid cloud/on-prem environments using Terraform, Docker, Kubernetes, and CI/CD automation tools.
Implement event-driven and serverless workflows with precise control over latency, throughput, and fault tolerance trade-offs.
Create technical guides, architecture docs, and demo pipelines to support onboarding, evangelize best practices, and accelerate adoption across engineering, product, and customer-facing teams.
Integrate data validation, observability tools, and governance directly into the pipeline lifecycle.
Own end-to-end platform lifecycle: ingestion → transformation → storage (Parquet/ORC on S3) → compute layer (Trino/Spark).
Benchmark and tune storage backends (S3/NFS/SMB) and compute layers for throughput, latency, and scalability using production datasets.
Work cross-functionally with R&D to push performance limits across interactive, streaming, and ML-ready analytics workloads.
Operate and debug object-store-backed data lake infrastructure, enabling schema-on-read access, high-throughput ingestion, advanced search strategies, and performance tuning for large-scale workloads.
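The platform lifecycle named in the responsibilities above (ingestion → transformation → storage → compute layer) can be sketched in miniature. The helper below is a hypothetical, stdlib-only illustration of the stages, not the actual stack: in production the stand-ins would be a Kafka topic, Spark transforms, Parquet/ORC files on S3, and Trino queries.

```python
import csv, io, json

def run_pipeline(raw_events):
    """Toy lifecycle: ingest JSON lines -> transform -> store -> query."""
    # Ingestion: parse newline-delimited JSON (stand-in for a Kafka topic).
    records = [json.loads(line) for line in raw_events.splitlines() if line.strip()]
    # Transformation: drop malformed rows and derive a field (stand-in for Spark).
    rows = [
        {"user": r["user"], "ms": r["latency_ms"], "slow": r["latency_ms"] > 100}
        for r in records if "latency_ms" in r
    ]
    # Storage: write a columnar-ish CSV (stand-in for Parquet/ORC on S3).
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["user", "ms", "slow"])
    writer.writeheader()
    writer.writerows(rows)
    # Compute layer: a "query" over the stored data (stand-in for Trino/Spark SQL).
    stored = list(csv.DictReader(io.StringIO(buf.getvalue())))
    return sum(1 for r in stored if r["slow"] == "True")

events = "\n".join([
    json.dumps({"user": "a", "latency_ms": 250}),
    json.dumps({"user": "b", "latency_ms": 40}),
    json.dumps({"user": "c"}),  # malformed: dropped in the transform stage
])
```

The point of the sketch is the separation of stages: each hand-off is a serialized artifact, which is what makes the real pipeline modular across batch and streaming workloads.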
Requirements:
2-4 years in software, solutions, or infrastructure engineering, with 2-4 years focused on building and maintaining large-scale data pipelines and storage/database solutions.
Proficiency in Trino, Spark (Structured Streaming & batch) and solid working knowledge of Apache Kafka.
Coding background in Python (must-have); familiarity with Bash and scripting tools is a plus.
Deep understanding of data storage architectures including SQL, NoSQL, and HDFS.
Solid grasp of DevOps practices, including containerization (Docker), orchestration (Kubernetes), and infrastructure provisioning (Terraform).
Experience with distributed systems, stream processing, and event-driven architecture.
Hands-on familiarity with benchmarking and performance profiling for storage systems, databases, and analytics engines.
Excellent communication skills: you'll be expected to explain your thinking clearly, guide customer conversations, and collaborate across engineering and product teams.
This position is open to all candidates.
 
8442983
03/11/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Shape the Future of Data - Join our mission to build the foundational pipelines and tools that power measurement, insights, and decision-making across our product, analytics, and leadership teams.
Develop the Platform Infrastructure - Build the core infrastructure that powers our data ecosystem including the Kafka events-system, DDL management with Terraform, internal data APIs on top of Databricks, and custom admin tools (e.g. Django-based interfaces).
Build Real-time Analytical Applications - Develop internal web applications that provide real-time visibility into platform behavior, operational metrics, and business KPIs, integrating data engineering with user-facing insights.
Solve Meaningful Problems with the Right Tools - Tackle complex data challenges using modern technologies such as Spark, Kafka, Databricks, AWS, Airflow, and Python. Think creatively to make the hard things simple.
Own It End-to-End - Design, build, and scale our high-quality data platform by developing reliable and efficient data pipelines. Take ownership from concept to production and long-term maintenance.
Collaborate Cross-Functionally - Partner closely with backend engineers, data analysts, and data scientists to drive initiatives from both a platform and business perspective. Help translate ideas into robust data solutions.
Optimize for Analytics and Action - Design and deliver datasets in the right shape, location, and format to maximize usability and impact, whether that's through lakehouse tables, real-time streams, or analytics-optimized storage.
You will report to the Data Engineering Team Lead and help shape a culture of technical excellence, ownership, and impact.
Requirements:
5+ years of hands-on experience as a Data Engineer, building and operating production-grade data systems.
3+ years of experience with Spark, SQL, Python, and orchestration tools like Airflow (or similar).
Degree in Computer Science, Engineering, or a related quantitative field.
Proven track record in designing and implementing high-scale ETL pipelines and real-time or batch data workflows.
Deep understanding of data lakehouse and warehouse architectures, dimensional modeling, and performance optimization.
Strong analytical thinking, debugging, and problem-solving skills in complex environments.
Familiarity with infrastructure as code, CI/CD pipelines, and building data-oriented microservices or APIs.
Enthusiasm for AI-driven developer tools such as Cursor.AI or GitHub Copilot.
This position is open to all candidates.
 
8397812
05/11/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are the leader in hybrid-cloud security posture management, using the attacker's perspective to find and remediate critical attack paths across on-premises and multi-cloud networks. We are looking for a talented Senior Data Engineer to join a core team of experts responsible for developing innovative cyber-attack techniques for cloud-based environments (AWS, Azure, GCP, Kubernetes) that integrate into our fully automated attack simulation.
About the Role: We are seeking an experienced Senior Data Engineer to join our dynamic data team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure, ensuring the availability, reliability, and quality of our data. This role requires strong technical expertise, problem-solving skills, and the ability to collaborate across teams to deliver data-driven solutions.
Key Responsibilities:
* Design, implement, and maintain robust, scalable, and high-performance data pipelines and ETL processes.
* Develop and optimize data models, schemas, and storage solutions to support analytics and machine learning initiatives.
* Collaborate with software engineers and product managers to understand data requirements and deliver high-quality solutions.
* Ensure data quality, integrity, and governance across multiple sources and systems.
* Monitor and troubleshoot data workflows, resolving performance and reliability issues.
* Evaluate and implement new data technologies and frameworks to improve the data platform.
* Document processes, best practices, and data architecture.
* Mentor junior data engineers and contribute to team knowledge sharing.
Requirements:
Required Qualifications:
* Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
* 5+ years of experience in data engineering, ETL development, or a similar role.
* Strong proficiency in SQL and experience with relational and NoSQL databases.
* Experience with data pipeline frameworks and tools such as Apache Spark, Airflow, and Kafka - a must.
* Familiarity with cloud platforms (AWS, GCP, or Azure) and their data services.
* Solid programming skills in Python, Java, or Scala.
* Strong problem-solving, analytical, and communication skills.
* Knowledge of data governance, security, and compliance standards.
* Experience with data warehousing, Big Data technologies, and data modeling best practices, e.g. ClickHouse, SingleStore, StarRocks.
Preferred Qualifications (Advantage):
* Familiarity with Machine Learning workflows and MLOps practices.
* Experience with data lakehouse architectures and technologies such as Apache Iceberg.
* Experience working with data ecosystems in open-source/on-premises environments.
Why Join Us:
* Work with cutting-edge technologies and large-scale data systems.
* Collaborate with a talented and innovative team.
* Opportunities for professional growth and skill development.
* Make a direct impact on data-driven decision-making across the organization.
This position is open to all candidates.
 
8401647
Location: Tel Aviv-Yafo
Job Type: Full Time
As a Senior Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects: ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.
You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.
Key Job Responsibilities and Duties:
Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
Dealing with massive textual sources to train GenAI foundation models.
Solving issues with data and data pipelines, prioritizing based on customer impact.
End-to-end ownership of data quality in our core datasets and data pipelines.
Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
Providing tools that improve Data Quality company-wide, specifically for ML scientists.
Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
Acting as an intermediary for problems, with both technical and non-technical audiences.
Promote and drive impactful and innovative engineering solutions.
Advance technical, behavioral, and interpersonal competence via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation.
Collaborate with multidisciplinary teams: Collaborate with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions. Provide technical guidance and mentorship to junior team members.
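End-to-end ownership of data quality, listed above, usually starts with simple automated checks at pipeline boundaries. The function below is a hypothetical minimal sketch; the field names and threshold are invented for illustration and real deployments would typically use a framework rather than hand-rolled checks.

```python
def check_dataset(rows, required_fields, max_null_rate=0.01):
    """Return a list of data-quality violations for a batch of records.

    rows: list of dicts; required_fields: fields every record must carry.
    max_null_rate: highest tolerated fraction of missing values per field.
    """
    violations = []
    if not rows:
        return ["dataset is empty"]
    for field in required_fields:
        missing = sum(1 for r in rows if r.get(field) is None)
        rate = missing / len(rows)
        if rate > max_null_rate:
            violations.append(f"{field}: {rate:.1%} missing (limit {max_null_rate:.1%})")
    return violations

# Hypothetical batch: one of three records is missing its timestamp.
batch = [
    {"id": 1, "ts": "2025-01-01"},
    {"id": 2, "ts": None},
    {"id": 3, "ts": "2025-01-02"},
]
```

Running such a check after each pipeline stage, and failing the run (or alerting) on violations, is what turns "data quality" from a slogan into an operational guarantee.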
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 6 years of experience as a Data Engineer or a similar role, with a consistent record of successfully delivering ML/Data solutions.
You have built production data pipelines in the cloud, setting up data-lake and serverless solutions; you have hands-on experience with schema design and data modeling, and with working alongside ML scientists and ML engineers to deliver production-level ML solutions.
You have experience designing systems end-to-end and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.).
Strong programming skills in languages such as Python and Java.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.
Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems.
Experience with Data Warehousing and ETL/ELT pipelines
Experience in data processing for large-scale language models like GPT, BERT, or similar architectures - an advantage.
Proficiency in data manipulation, analysis, and visualization using tools like NumPy, pandas, and matplotlib - an advantage.
Experience with experimental design, A/B testing, and evaluation metrics for ML models - an advantage.
Experience of working on products that impact a large customer base - an advantage.
Excellent communication in English; written and spoken.
This position is open to all candidates.
 
8430196
28/10/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a top-notch Senior Software Engineer to help us tackle the toughest challenge in cybersecurity: turning endless amounts of data into crisp, easy, and actionable insights.

Responsibilities
Collaborate with a senior Agile Scrum team to design, develop, and maintain large-scale, cloud-based data processing pipelines and backend components. Work with cutting-edge technologies including Spark, Kubernetes, AWS, and modern data lakes like Databricks and Snowflake.
Design and implement scalable, cost-effective solutions that deliver high performance and are easy to maintain. Tackle complex, high-scale problems and drive performance optimization and cost-efficiency across the data pipeline.
Partner with engineers across our company R&D and Product teams to enhance our platform and provide capabilities for internal and external users to build data transformations and detection pipelines at scale.
Build robust monitoring and observability solutions to ensure full visibility across all stages of data processing.
Stay current with trends in big data processing and distributed computing. Contribute to code quality through regular reviews and adherence to best practices.
Requirements:
4+ years of experience as a Backend Engineer
3+ years of hands-on experience in Scala/Python/Java and cloud architecture (EMR/K8s).
Deep technical expertise in distributed systems, stream processing, and data modeling of large data sets.
Proven track record of delivering scalable and secure systems in a fast-paced working environment.
Experience with data governance practices, data security, and performance and cost optimization; experience with containers and AWS services such as S3, EKS, and more.
Strong problem-solving skills and ability to work independently.
A team player with excellent communication skills.
B.Sc. in Computer Science or equivalent.
Advantages:

Experience in Big Data frameworks such as Spark
Experience with modern Data lakes/warehouses such as Snowflake and Databricks.
Production experience working with SaaS environments.
Experience in data modeling.
This position is open to all candidates.
 
8389801
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Algo Data Engineer
Realize your potential by joining the leading performance-driven advertising company!
As a Senior Algo Data Engineer in the Infra group, you'll play a vital role in developing, enhancing, and maintaining highly scalable machine-learning infrastructures and tools.
About Algo platform:
The objective of the algo platform group is to own the existing algo platform (including health, stability, productivity and enablement), to facilitate and be involved in new platform experimentation within the algo craft and lead the platformization of the parts which should graduate into production scale. This includes support of ongoing ML projects while ensuring smooth operations and infrastructure reliability, owning a full set of capabilities, design and planning, implementation and production care.
The group has deep ties with both the algo craft as well as the infra group. The group reports to the infra department and has a dotted line reporting to the algo craft leadership.
The group serves as the professional authority when it comes to ML engineering and ML ops, serves as a focal point in a multidisciplinary team of algorithm researchers, product managers, and engineers and works with the most senior talent within the algo craft in order to achieve ML excellence.
How you'll make an impact:
As a Senior Algo Data Engineer, you'll bring value by:
Develop, enhance, and maintain highly scalable machine-learning infrastructures and tools, including CI/CD, monitoring, alerting, and more
Take end-to-end ownership: design, develop, deploy, measure, and maintain our machine learning platform, ensuring high availability, high scalability, and efficient resource utilization
Identify and evaluate new technologies to improve performance, maintainability, and reliability of our machine learning systems
Work in tandem with the engineering-focused and algorithm-focused teams in order to improve our platform and optimize performance
Optimize machine learning systems to scale and utilize modern compute environments (e.g. distributed clusters, CPU and GPU) and continuously seek potential optimization opportunities.
Build and maintain tools for automation, deployment, monitoring, and operations.
Troubleshoot issues in our development, production and test environments
Directly influence the way billions of people discover the internet
Our tech stack:
Java, Python, TensorFlow, Spark, Kafka, Cassandra, HDFS, vespa.ai, ElasticSearch, AirFlow, BigQuery, Google Cloud Platform, Kubernetes, Docker, git and Jenkins.
Requirements:
Experience developing large-scale systems. Experience with filesystems, server architectures, distributed systems, SQL and NoSQL. Experience with Spark and Airflow / other orchestration platforms is a big plus.
Highly skilled in software engineering methods; 5+ years of experience.
Passion for ML engineering and for creating and improving platforms
Experience with designing and supporting ML pipelines and models in production environment
Excellent coding skills in Java & Python
Experience with TensorFlow a big plus
Possess strong problem solving and critical thinking skills
BSc in Computer Science or related field.
Proven ability to work effectively and independently across multiple teams and beyond organizational boundaries
Deep understanding of strong Computer Science fundamentals: object-oriented design, data structures, systems and applications programming, and multithreaded programming
Strong communication skills to be able to present insights and ideas, and excellent English, required to communicate with our global teams.
Bonus points if you have:
Experience in leading Algorithms projects or teams.
Experience in developing models using deep learning techniques and tools
Experience in developing software within a distributed computation framework.
This position is open to all candidates.
 
8437886
Location: Tel Aviv-Yafo
Job Type: Full Time
As part of the Data Infrastructure team, you'll help build the data platform for our growing stack of products, customers, and microservices.

Our platform ingests and processes data from operational databases, telematics, and diverse product sources. You'll build robust backend services and data processing pipelines, leveraging state-of-the-art frameworks and cloud-native solutions, all while collaborating with Data Engineers, ML Engineers, Analysts, and Product Managers to turn real-world needs into resilient systems.

In this role you'll:
Design & Develop Backend Services: Lead the design and implementation of backend systems and APIs for our distributed data platform

Build Data Ingestion & Processing: Architect and develop scalable ingestion pipelines with streaming ETL, Change Data Capture, and large-scale batch and stream processing

Own Data Platform Infrastructure: Implement and optimize scheduling, workflow orchestration, and data governance tools to support high-quality, compliant data flows

Drive Engineering Standards: Establish and promote backend best practices, ensuring high reliability, code quality, and maintainability across the team

Cross-functional Collaboration: Work closely with data engineers, ML engineers, product managers, and analysts to translate business needs into scalable backend systems

Mentorship: Share your backend and infrastructure expertise by collaborating, reviewing, and mentoring fellow engineers.
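Change Data Capture, mentioned in the ingestion bullet above, boils down to replaying an ordered stream of row-level change events onto a target table. The sketch below is a hypothetical toy model; the event shape is invented for illustration, whereas a real pipeline would consume e.g. Debezium messages from Kafka and write to a lakehouse table.

```python
def apply_cdc(table, events):
    """Replay insert/update/delete events, in order, onto a dict keyed by primary key."""
    for ev in events:
        op, key = ev["op"], ev["key"]
        if op in ("insert", "update"):
            table[key] = ev["row"]      # upsert the new row image
        elif op == "delete":
            table.pop(key, None)        # tolerate deletes of keys never seen
    return table

# Hypothetical change stream for a two-row table.
events = [
    {"op": "insert", "key": 1, "row": {"name": "Ada"}},
    {"op": "insert", "key": 2, "row": {"name": "Bob"}},
    {"op": "update", "key": 1, "row": {"name": "Ada L."}},
    {"op": "delete", "key": 2},
]
```

Ordering per key is the critical invariant: if updates and deletes for the same key can be reordered, the replayed table diverges from the source, which is why CDC pipelines partition their event stream by primary key.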
Requirements:
5+ years of experience as a Backend, Data, or Infrastructure Engineer building large-scale backend systems and data-driven platforms.

B.S. in Computer Science or a similar field

Proven backend development skills with expertise in Python. Additional languages, a plus.

Proven experience with distributed systems, microservices, building and maintaining robust backend APIs

Proficiency with databases (SQL, NoSQL), data modeling, and streaming data architectures.

Ability to work in an office environment a minimum of 3 days a week

Enthusiastic about learning and adapting to the rapidly evolving world of AI and data-driven engineering

Past experience with modern data stacks (e.g., Snowflake, Kafka, Airflow, DBT, Spark), an advantage

Strong understanding of cloud infrastructure and orchestration (preferably AWS, K8s, Terraform/Pulumi), an advantage
This position is open to all candidates.
 
8421152
Posted 1 day ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Senior Software Engineer to join our growing R&D team. In this role, you will play a critical part in designing, building, and optimizing complex systems that power our AI-driven platform.
You'll work across the stack, primarily on backend services, with opportunities to influence architectural decisions and build highly scalable and performant systems.
You'll collaborate closely with AI, product, and frontend teams to bring advanced features to life and ensure a seamless, intelligent experience for our users.
This is a high-impact role for someone who is passionate about engineering excellence, eager to shape systems end-to-end, and ready to grow with a fast-moving, AI-first company.
Key Responsibilities:
Design, develop, and maintain robust backend systems and services.
Ensure the scalability, performance, and security of backend components.
Collaborate with front-end developers and data teams to integrate user-facing elements with server-side logic.
Optimize the platform's infrastructure to handle large-scale data processing and analysis.
Troubleshoot and debug complex issues, identifying and implementing the most effective solutions.
Contribute to the architecture and system design decisions for the backend infrastructure.
Stay up to date with industry trends and new technologies to continuously improve backend performance.
Requirements:
7+ years of software development experience in a fast-paced SaaS environment.
Strong experience with server-side technologies, particularly Node.js, Python and SQL.
In-depth knowledge of databases; experience in schema design and optimization.
Expertise in API development and microservices architecture.
Familiarity with cloud platforms such as Google Cloud/AWS.
Understanding of containerization and orchestration tools (Docker, Kubernetes).
Experience with message queues (e.g., RabbitMQ, Kafka or their cloud alternatives such as SQS/pubsub) and data processing.
Experience with client-side technologies (e.g. React) is a plus.
Applied AI or video editing knowledge is a big plus.
Excellent problem-solving skills with a focus on scalability and performance.
Ability to work independently while also thriving in a collaborative team environment.
This position is open to all candidates.
 
8439280
Posted 49 minutes ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a senior backend engineer who is truly cross-functional and possesses strong engineering foundations for developing highly performant services.
The ideal candidate will have a deep understanding of algorithms and Python fundamentals, as well as an understanding of the capabilities and limitations of LLMs.
Collaborating closely with their product counterparts, they will quickly ideate, develop, and deliver innovative, cutting-edge features and capabilities into the hands of our users.
If you have the skills and experience we are looking for, we encourage you to apply for this exciting opportunity to join our team and make a significant impact on the future of AI.
Role and Responsibilities:
Build the framework used to supercharge our algorithm teams to write scalable, performant algorithms
E2E ownership of production services, databases and infrastructure
Work in fast iteration loops with business/product managers to architect and deliver new products and capabilities
Drive the architectural design, including dependent services and service interactions (APIs & SDKs)
Apply judgment and experience to balance trade-offs between competing interests and optional solutions, considering profiling data to inform decisions on performance and resource utilization
Help with coaching and mentoring team members, and maintaining high standards of software quality within the team.
Requirements:
5+ years of programming experience. Proficiency with Python is a huge plus
Experience with building SaaS applications from conception to production
Strong hands-on experience with production systems, continuous integration and deployment and testing best practices
Performance engineering experience to ensure applications are built to scale, run, and perform for varying demands
Able to clearly articulate architecture patterns of complex systems, with business and technical implications, to executive and customer stakeholders
Collaborate with engineers across the organization to champion standard software patterns and the reuse of shared libraries and services
Advantage: Experience working with Large Language Models (LLMs) and cutting-edge AI technologies
Advantage: Experience with the following technologies is a plus: Celery; databases such as Postgres, Redis, and pgvector-capable stores (e.g. Aurora and AlloyDB); Docker containers; etc.
This position is open to all candidates.
 
8442994
Location: Tel Aviv-Yafo
Job Type: Full Time
As a Senior Machine Learning Engineer, you'll work with top-notch engineers and data scientists from the team on bringing our platform to the next level and enabling an optimal user experience. The work will focus on building, deploying, and serving GenAI capabilities (Agents, Tools, and the orchestration between them) using the most advanced technologies and models.
Key Job Responsibilities and Duties:
Deploying machine learning models: Design, develop and deploy in collaboration with scientists, scalable machine learning models and algorithms that provide content related insights and generative AI applications, ensuring scalability, efficiency, and accuracy.
Evaluating possible architecture solutions by taking into account cost, business requirements, emerging technologies, and technology requirements, like latency, throughput, and scale.
Generative AI Development: Contribute to the development of generative models such as GPT (Generative Pre-trained Transformer) variants or similar architectures for creative content generation, Q&A, translation or other innovative applications.
Deployment and integration: Work closely with software engineers to integrate machine learning models into production systems. Ensure seamless deployment and efficient model inference in real-time environments. Collaborate with DevOps to implement effective monitoring and maintenance strategies.
Owning a service end to end by actively monitoring application health and performance, setting and monitoring relevant metrics and acting accordingly when violated.
Maintain clean, scalable code, ensuring reproducibility and easy integration of models into production environments, including CI/CD.
Collaborate with multidisciplinary teams: Collaborate with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 6 years of experience as a Machine Learning Engineer or a similar role, with a consistent record of successfully delivering ML solutions.
Strong programming skills in languages such as Python and Java.
Experience with cloud frameworks like AWS SageMaker for training, evaluating, and serving models using TensorFlow, PyTorch, or scikit-learn.
Experience with LLMs, Agents and MCP in production environments.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.
Experience with data at scale using MySQL, PySpark, Snowflake, and similar frameworks.
Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems.
Deep understanding of machine learning algorithms, statistical models, and data structures.
Experience in deploying large-scale language models like GPT, BERT, or similar architectures - an advantage.
Proficiency in data manipulation, analysis, and visualization using tools like NumPy, pandas, and matplotlib - an advantage.
Experience with experimental design, A/B testing, and evaluation metrics for ML models - an advantage.
Experience of working on products that impact a large customer base - an advantage.
Excellent communication in English; written and spoken.
This position is open to all candidates.
 
8430189
Location: Tel Aviv-Yafo
Job Type: Full Time
As part of the Data Infrastructure group, you'll help build Lemonade's data platform for our growing stack of products, customers, and microservices.

We ingest our data from our operational DBs, telematics devices, and more, working with several data types (both structured and unstructured). Our challenge is to provide tools and infrastructure that empower other teams, leveraging data-mesh concepts.

In this role you'll:
Help build Lemonades data platform, designing and implementing data solutions for all application requirements in a distributed microservices environment

Build data-platform ingestion layers using streaming ETLs and Change Data Capture

Implement pipelines and scheduling infrastructures

Ensure compliance, data-quality monitoring, and data governance on data platform

Implement large-scale batch and streaming pipelines with data processing frameworks

Collaborate with other Data Engineers, Developers, BI Engineers, ML Engineers, Data Scientists, Analysts and Product managers

Share knowledge with other team members and promote engineering standards
Requirements:
5+ years of prior experience as a data engineer or data infra engineer

B.S. in Computer Science or equivalent field of study

Knowledge of databases (SQL, NoSQL)

Proven success in building large-scale data infrastructures such as Change Data Capture, and leveraging open source solutions such as Airflow & DBT, building large-scale streaming pipelines, and building customer data platforms

Experience with Python, Pulumi/Terraform, Apache Spark, Snowflake, AWS, K8s, Kafka

Ability to work in an office environment a minimum of 3 days a week

Enthusiasm about learning and adapting to the exciting world of AI; a commitment to exploring this field is a fundamental part of our culture
This position is open to all candidates.
 
8420906