Posted 20 hours ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Platform Engineer to join our community!
As a Senior Data Platform Engineer, you will play a key role in building and evolving Grip's modern data platform - the infrastructure that powers product features and analytics across the company.

You will focus on designing and operating scalable, reliable data systems and platform tooling that support our Data Lakehouse, enabling engineers, analysts and research teams to work with data efficiently and with minimal friction.

Responsibilities
Design, build and operate a cloud-native modern data platform.
Develop and optimize data processing frameworks and pipelines across batch and streaming workloads.
Improve developer experience and platform usability through tooling and automation.
Lead and support large-scale data migrations and architectural improvements.
Drive best practices around infrastructure, CI/CD, testing, and system design.
Collaborate with developers, analysts, data scientists and other stakeholders to develop new products and features.
Contribute to a strong engineering culture of ownership, learning, and knowledge sharing.
Requirements:
5+ years of hands-on experience building scalable data infrastructure, particularly around data lake or data warehouse architectures.
Proven experience designing, building and operating production-grade systems and services.
Strong understanding of cloud infrastructure (AWS, GCP, or Azure) and hands-on experience with modern data platforms and tools (e.g., Spark, Kafka, Airflow, dbt, open table formats, or similar).
Strong programming skills in Python and SQL.
Independent, proactive, and ownership-driven mindset.
Background in data platform engineering, backend engineering, DevOps, or DBA - strong advantage.
Experience with containerization technologies - advantage.
This position is open to all candidates.
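
For orientation only, here is a minimal sketch of the kind of batch-plus-streaming Spark pipeline a stack like the one above (Spark, Kafka, object storage) typically involves. It is not this company's code; the broker address, topic name, and S3 paths are hypothetical.

```python
# Hypothetical sketch: ingest a Kafka topic with Spark Structured Streaming,
# land it as Parquet, then aggregate the landed data in batch.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

# Streaming leg: read raw events from Kafka and land them continuously.
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "events")                      # hypothetical topic
    .load()
    .select(
        F.col("value").cast("string").alias("raw"),
        F.col("timestamp").alias("event_time"),         # provided by the Kafka source
    )
)
query = (
    stream.writeStream.format("parquet")
    .option("path", "s3a://example-lake/bronze/events/")
    .option("checkpointLocation", "s3a://example-lake/_chk/events/")
    .trigger(processingTime="1 minute")
    .start()
)

# Batch leg: daily counts over everything landed so far.
daily = (
    spark.read.parquet("s3a://example-lake/bronze/events/")
    .groupBy(F.to_date("event_time").alias("day"))
    .count()
)
daily.write.mode("overwrite").parquet("s3a://example-lake/silver/daily_counts/")
```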
 
Similar jobs that may interest you
Posted 10/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data & Machine Learning Engineer to operate at the intersection of data platform engineering and machine learning enablement. This role is responsible for building scalable, efficient, and reliable data systems while enabling Data Science and Analytics teams to develop and deploy ML-driven features.

You will take ownership of the data and ML infrastructure layer, ensuring that pipelines, storage models, and compute usage are optimized, while also shaping how data workflows and ML solutions are designed across the organization.


Responsibilities
Data Platform & Infrastructure

Design, build, and maintain scalable data pipelines and storage systems supporting analytics and ML use cases
Ensure compute and cost efficiency across pipelines, storage models, and processing workflows
Own and improve data orchestration, transformation, and serving layers (e.g., Spark, DBT, streaming/batch systems)
Build and maintain shared infrastructure components, including:
IO managers and data access abstractions
Integrations with DBT, Spark, and other data frameworks
Internal tooling to improve developer productivity and reliability
ML Enablement & Collaboration

Partner closely with Data Science to design and productionize ML solutions for new features and research initiatives
Translate experimental models into robust, scalable production systems
Support feature engineering, training pipelines, and inference workflows
Help define best practices for ML lifecycle management (training, validation, deployment, monitoring)
Data Quality, Governance & Best Practices

Enforce best practices for building and maintaining data processes across the Data Analytics and Data Science teams
Define standards for:
Data modeling and transformations
Pipeline reliability and observability
Testing, versioning, and documentation
Improve data quality, consistency, and discoverability across the organization
Performance & Reliability

Optimize systems for performance, scalability, and cost efficiency
Monitor and troubleshoot data pipelines and ML systems in production
Implement observability (logging, metrics, alerting) across data workflows
Requirements:
Strong programming skills in Python (or similar language)
Proven experience building and maintaining production-grade data pipelines
Hands-on experience with data processing frameworks (e.g., Spark or similar)
Familiarity with DBT or modern data transformation workflows
Experience working with cloud environments (AWS, GCP, or Azure)
Solid understanding of data modeling, distributed systems, and ETL/ELT patterns
This position is open to all candidates.
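
The "IO managers and data access abstractions" item above describes a common platform pattern: pipeline code asks for and hands back datasets, while a pluggable manager decides where and how they are stored. Below is a minimal, hypothetical Python sketch of the idea; the class, key, and column names are invented for illustration, not this company's API.

```python
# Hypothetical sketch of the IO-manager pattern: business logic stays
# storage-agnostic, and the manager owns serialization and paths.
from abc import ABC, abstractmethod

import pandas as pd


class IOManager(ABC):
    @abstractmethod
    def load(self, key: str) -> pd.DataFrame: ...

    @abstractmethod
    def save(self, key: str, df: pd.DataFrame) -> None: ...


class ParquetIOManager(IOManager):
    """Stores each dataset as a Parquet file under a base prefix (local or s3://)."""

    def __init__(self, base: str):
        self.base = base.rstrip("/")

    def load(self, key: str) -> pd.DataFrame:
        return pd.read_parquet(f"{self.base}/{key}.parquet")

    def save(self, key: str, df: pd.DataFrame) -> None:
        df.to_parquet(f"{self.base}/{key}.parquet", index=False)


def transform(io: IOManager) -> None:
    # Swapping ParquetIOManager for a warehouse-backed manager
    # requires no changes to this function.
    orders = io.load("bronze/orders")
    daily = orders.groupby("order_date", as_index=False)["amount"].sum()
    io.save("silver/daily_revenue", daily)
```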
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a strong, hands-on Data Engineer to join our team and play a key role in building our data infrastructure from the ground up. In this role, you will design and implement scalable data pipelines and platforms, supporting both batch and real-time use cases. You will work closely with analysts and stakeholders to deliver reliable, high-quality data solutions, and take full ownership of data flows - from ingestion to consumption. This is a great opportunity for an execution-minded engineer who enjoys building, moving fast, and making an impact.
What will your job look like?
Design, build, and maintain robust and scalable data pipelines (batch and real-time) end-to-end.
Design and implement scalable, flexible data architectures to support evolving business needs.
Build and manage data platforms, including data lakes and data warehouses.
Integrate multiple data sources (structured and unstructured) into a unified data platform using batch (ETL) and real-time streaming solutions.
Design and implement efficient data models, schemas, and database structures (SQL / NoSQL).
Develop and implement data quality processes to ensure accuracy, consistency, and reliability.
Monitor, optimize, and troubleshoot data infrastructure to meet performance and SLA requirements.
Requirements:
5+ years of hands-on experience as a Data Engineer, building data systems from scratch in dynamic environments.
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
Strong proficiency in Python and advanced SQL, with solid experience in data modeling.
Proven experience designing and building scalable data pipelines (batch and real-time), including streaming technologies such as Kafka.
Strong experience working with AWS, including services such as S3, Athena and DynamoDB.
Experience working with big data processing frameworks such as Spark, and columnar data formats (e.g., Parquet).
Hands-on experience with workflow orchestration tools such as Airflow.
Strong ownership and execution mindset, with excellent problem-solving skills and high attention to detail, and the ability to collaborate effectively and deliver in ambiguous, fast-paced environments.
Experience with data platform technologies such as Databricks, Snowflake - Advantage.
Experience building data platforms using modern lakehouse technologies (e.g., Iceberg) - Advantage.
Fluent in English.
This position is open to all candidates.
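
As an illustration of the workflow-orchestration requirement above, here is a minimal Airflow DAG using the Airflow 2 TaskFlow API. It is a generic sketch rather than this company's pipeline; the task logic, schedule, and staged path are placeholders.

```python
# Hypothetical sketch: a daily extract-then-load DAG with Airflow's TaskFlow API.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2026, 1, 1), catchup=False)
def daily_ingest():
    @task
    def extract() -> str:
        # Stand-in for pulling a day's data (e.g., from Kafka or an API)
        # and staging it; returns the staged location.
        return "s3://example-bucket/staging/2026-01-01/"   # hypothetical path

    @task
    def load(path: str) -> None:
        # Stand-in for loading the staged files into the lake or warehouse.
        print(f"loading {path}")

    load(extract())


daily_ingest()
```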
 
Posted 05/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Analytics Engineer to help design and build the engineering foundation that powers analytics across the organization.
Our goal is to create a modern data environment where analytics development is fast, reliable, scalable, and increasingly automated. This includes building strong data warehouse foundations, scalable modeling layers, and introducing AI-powered tools and automation that accelerate how data products are built and used.
In this role, you will be part of an analytics squad, working closely with analysts and business stakeholders while building the infrastructure, automation frameworks, and intelligent tooling that enable analytics to scale across the organization.
This is a unique opportunity to help build the next generation of the data organization.
Key Responsibilities
Lead AI adoption in the analytics platform, building tools and workflows that automate analytics development, dashboards, and data exploration
Design and build scalable data warehouse models and transformation layers
Build and optimize ETL pipelines and core analytics infrastructure (Bronze / Silver)
Improve performance, reliability, and scalability of the analytics platform
Develop automation and internal tools that accelerate analytics workflows
Enable self-serve data access across the company through semantic layers and reusable datasets
Collaborate with analysts and business teams within an analytics squad.
Requirements:
6+ years of experience in Data Engineering and Analytics Engineering roles, building modern data warehouses and analytics platforms using technologies such as BigQuery, dbt, and Python
Experience with workflow orchestration (Dagster, Airflow, or equivalent) and building reliable, observable data pipelines
Hands-on experience using AI coding platforms and tools to automate data engineering and analytics workflows
Strong engineering practices including version control (Git), testing, code reviews, and CI/CD
Experience building automation systems and internal tools for data teams
Experience working closely with analysts, product teams, and business stakeholders in analytics-driven environments
Strong problem-solving skills with a builder mindset.
This position is open to all candidates.
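
For the orchestration requirement above (Dagster, Airflow, or equivalent), here is a minimal Dagster sketch: two software-defined assets whose lineage is declared through the function signature, so the framework tracks dependencies and ordering. The asset and column names are hypothetical.

```python
# Hypothetical sketch: two Dagster assets materialized by the default IO manager.
import pandas as pd
from dagster import Definitions, asset


@asset
def raw_signups() -> pd.DataFrame:
    # Stand-in for an extraction step (e.g., a warehouse or API read).
    return pd.DataFrame({"day": ["2026-04-01", "2026-04-01"], "user_id": [1, 2]})


@asset
def daily_signups(raw_signups: pd.DataFrame) -> pd.DataFrame:
    # The dependency on raw_signups is declared by the parameter name,
    # giving lineage and observability for free.
    return raw_signups.groupby("day", as_index=False)["user_id"].count()


defs = Definitions(assets=[raw_signups, daily_signups])
```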
 
Posted 16/04/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
You will architect and build a scalable data platform from scratch, engineering high-throughput, low-latency pipelines that drive real-time security analytics and AI-powered systems. As a key member of the Data & AI Algorithms group, you will collaborate with AI/ML engineers, data scientists, and security researchers to design production-grade infrastructure at scale.
This role requires strong ownership, systems thinking, and the agility to operate in a fast-moving environment.
What You'll Do:
Build ML infrastructure to support scalable, low-latency production deployment of data & AI models.
Ensure availability, reliability, and performance of mission-critical data infrastructure
Define and promote best practices for data modeling, orchestration, CI/CD, and infrastructure-as-code
Collaborate cross-functionally to enable data-driven product capabilities
Requirements:
6+ years of hands-on experience building and operating data systems at scale
Production experience with big data frameworks such as Apache Flink, Kafka Streams, or similar distributed data processing systems.
Hands-on experience with modern data lakes and open table formats such as Apache Iceberg
Strong Python programming skills
Strong CI/CD and infrastructure-as-code capabilities
Experience with cloud-native data services such as AWS EMR, Athena, Azure Data Explorer
Familiarity with orchestration tools such as Airflow, Kubeflow, Dagster, or similar
Excellent communication skills with a strong ownership and problem-solving mindset
Experience in data modeling
Experience with stream processing systems (Kafka, Flink) and large-scale batch architectures
This position is open to all candidates.
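
As a small illustration of the open-table-format requirement, here is a sketch of creating and appending to an Apache Iceberg table from PySpark. It assumes a SparkSession already configured with an Iceberg catalog named "lake"; the catalog, namespace, and schema are hypothetical.

```python
# Hypothetical sketch: Iceberg table DDL plus an append via DataFrameWriterV2.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("iceberg-sketch").getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.security.events (
        event_time TIMESTAMP,
        source_ip  STRING,
        verdict    STRING
    )
    USING iceberg
    PARTITIONED BY (days(event_time))
""")

df = (
    spark.createDataFrame(
        [("2026-04-16 12:00:00", "10.0.0.7", "blocked")],
        "event_time STRING, source_ip STRING, verdict STRING",
    )
    .withColumn("event_time", F.to_timestamp("event_time"))
)
df.writeTo("lake.security.events").append()  # Iceberg applies the hidden day partitioning
```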
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We are looking for a Senior Data Engineer to join our Platform group in the Data Infrastructure team.
You'll work hands-on to design and deliver data pipelines, distributed storage, and streaming services that keep our data platform performant and reliable. As a senior individual contributor you will lead complex projects within the team, raise the bar on engineering best practices, and mentor mid-level engineers - while collaborating closely with product, DevOps and analytics stakeholders.
About the Platform group
The Platform Group accelerates our productivity by providing developers with tools, frameworks, and infrastructure services. We design, build, and maintain critical production systems, ensuring our platform can scale reliably. We also introduce new engineering capabilities to enhance our development process. As part of this group, you'll help shape the technical foundation that supports our entire engineering team.
Job responsibilities:
Code & ship production-grade services, pipelines and data models that meet performance, reliability and security goals
Lead design and delivery of team-level projects - from RFC through rollout and operational hand-off
Improve system observability, testing and incident response processes for the data stack
Partner with Staff Engineers and Tech Leads on architecture reviews and platform-wide standards
Mentor junior and mid-level engineers, fostering a culture of quality, ownership and continuous improvement
Stay current with evolving data-engineering tools and bring pragmatic innovations into the team.
Requirements:
5+ years of hands-on experience in backend or data engineering, including 2+ years at a senior level delivering production systems
Strong coding skills in Python, Kotlin, Java or Scala with emphasis on clean, testable, production-ready code
Proven track record designing, building and operating distributed data pipelines and storage (batch or streaming)
Deep experience with relational databases (PostgreSQL preferred) and working knowledge of at least one NoSQL or columnar/analytical store (e.g. SingleStore, ClickHouse, Redshift, BigQuery)
Solid hands-on experience with event-streaming platforms such as Apache Kafka
Familiarity with data-orchestration frameworks such as Airflow
Comfortable with modern CI/CD, observability and infrastructure-as-code practices in a cloud environment (AWS, GCP or Azure)
Ability to break down complex problems, communicate trade-offs clearly, and collaborate effectively with engineers and product partners
Bonus Skills:
Experience building data governance or security/compliance-aware data platforms
Familiarity with Kubernetes, Docker, and infrastructure-as-code tools
Experience with data quality frameworks, lineage, or metadata tooling
This position is open to all candidates.
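
To ground the event-streaming requirement above, here is a minimal consumer loop with the confluent-kafka Python client. This is a generic sketch rather than this team's code; the broker address, consumer group, and topic are hypothetical, and real pipeline code would validate and land each record rather than print it.

```python
# Hypothetical sketch: a basic Kafka consumer loop with confluent-kafka.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",        # hypothetical broker
    "group.id": "data-platform-demo",          # hypothetical consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])                 # hypothetical topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            raise RuntimeError(msg.error())
        # A real pipeline would deserialize, validate, and write downstream here.
        print(msg.key(), msg.value())
finally:
    consumer.close()
```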
 
Location: Tel Aviv-Yafo
Job Type: Full Time and English Speakers
We are looking for a Senior Data Engineer I.
As a Senior Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects - ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.
You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.
Key Job Responsibilities and Duties:
Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
Dealing with massive textual sources to train GenAI foundation models.
Solving issues with data and data pipelines, prioritizing based on customer impact.
End-to-end ownership of data quality in our core datasets and data pipelines.
Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
Providing tools that improve Data Quality company-wide, specifically for ML scientists.
Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
Acting as an intermediary on data problems for both technical and non-technical audiences.
Promoting and driving impactful and innovative engineering solutions.
Advancing technical, behavioral, and interpersonal competence via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation.
Collaborating with multidisciplinary teams: working with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions; providing technical guidance and mentorship to junior team members.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 6 years of experience as a Data Engineer or a similar role, with a consistent record of successfully delivering ML/Data solutions.
You have built production data pipelines in the cloud, setting up data-lake and serverless solutions; you have hands-on experience with schema design and data modeling, and have worked with ML scientists and ML engineers to deliver production-level ML solutions.
You have experience designing systems end-to-end and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.)
Strong programming skills in languages such as Python and Java.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar frameworks.
Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems
Experience with Data Warehousing and ETL/ELT pipelines
Experience in data processing for large-scale language models like GPT, BERT, or similar architectures - an advantage.
Proficiency in data manipulation, analysis, and visualization using tools like NumPy, pandas, and matplotlib - an advantage.
Experience with experimental design, A/B testing, and evaluation metrics for ML models - an advantage.
Experience of working on products that impact a large customer base - an advantage.
Excellent communication in English, written and spoken.
This position is open to all candidates.
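
To give a flavor of the "massive textual sources" work described above, here is a hypothetical PySpark sketch that normalizes raw documents and removes exact duplicates before they feed model training. The paths and field names are invented for illustration.

```python
# Hypothetical sketch: normalize and exact-deduplicate a raw text corpus.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("text-dedup").getOrCreate()

docs = spark.read.json("s3a://example-corpus/raw/")          # expects a "text" field
cleaned = (
    docs.withColumn("text", F.trim(F.regexp_replace("text", r"\s+", " ")))
        .filter(F.length("text") > 200)                      # drop near-empty documents
        .withColumn("fingerprint", F.sha2(F.lower("text"), 256))
        .dropDuplicates(["fingerprint"])                     # exact-duplicate removal
)
cleaned.write.mode("overwrite").parquet("s3a://example-corpus/deduped/")
```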
 
Posted 05/04/2026
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a Senior Data Engineer to join our R&D organization as part of a backend-oriented team, responsible for building and scaling the core data infrastructure.
In this role, you will design and develop data pipelines that stream and process data directly from production systems. You will play a key role in shaping our data platform, building robust, scalable infrastructure and pipelines using modern technologies, and working hands-on with both new components and existing systems.
Responsibilities
Collaborate as a strong team player within a dynamic, cross-functional environment
Design, develop, and maintain scalable data models, Lakehouse architectures, pipelines, and ETL processes
Enhance data workflows to support efficient real-time and batch processing
Work closely with cross-functional teams to understand data requirements and deliver impactful solutions
Stay up to date with the latest data engineering technologies and best practices, continuously improving our data platform.
Requirements:
6+ years of development experience, including at least 3 years as a Data Engineer
Experience with distributed computing frameworks (e.g., Spark, Flink, EMR) - Must
Experience with Iceberg / Delta Lake / Databricks or similar technologies
Experience designing scalable data storage solutions over object storage (structured and semi-structured data)
Hands-on experience building data pipelines and ingestion systems (batch and/or streaming)
Strong communication skills and ability to work with multiple stakeholders across teams
Proficiency in Python and PySpark - Advantage
Experience in streaming systems and real-time data processing - Advantage
Background in backend engineering or experience working closely with backend teams - Advantage
Experience optimizing data processing performance for cost and efficiency - Advantage.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Data Engineer.
As a Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects - ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.
You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.
Key Job Responsibilities and Duties:
Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
Dealing with massive textual sources to train GenAI foundation models.
Solving issues with data and data pipelines, prioritizing based on customer impact.
End-to-end ownership of data quality in our core datasets and data pipelines.
Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
Providing tools that improve Data Quality company-wide, specifically for ML scientists.
Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
Acting as an intermediary on data problems for both technical and non-technical audiences.
Promoting and driving impactful and innovative engineering solutions.
Advancing technical, behavioral, and interpersonal competence via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation.
Collaborating with multidisciplinary teams: working with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions; providing technical guidance and mentorship to junior team members.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 3 years of experience as a Data Engineer or a similar role, with a consistent record of successfully delivering ML/Data solutions
You have built production data pipelines in the cloud, setting up data-lake and serverless solutions; you have hands-on experience with schema design and data modeling, and have worked with ML scientists and ML engineers to deliver production-level ML solutions.
You have experience designing systems end-to-end and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.)
Strong programming skills in languages such as Python and Java.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar frameworks.
Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems.
Experience with Data Warehousing and ETL/ELT pipelines
Experience in data processing for large-scale language models like GPT, BERT, or similar architectures - an advantage.
Proficiency in data manipulation, analysis, and visualization using tools like NumPy, pandas, and matplotlib - an advantage.
Experience with experimental design, A/B testing, and evaluation metrics for ML models - an advantage.
Experience of working on products that impact a large customer base - an advantage.
Excellent communication in English, written and spoken.
This position is open to all candidates.
 
Posted 15/04/2026
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Infrastructure Engineer to lead the design, build, and optimization of a modern data platform. The role involves hands-on work with cloud-based data technologies, building data lakes from scratch, and managing large-scale data pipelines while ensuring high performance, cost efficiency, and reliability. You will collaborate closely with data engineers, data science, analytics, and product teams to support business needs.



Key Responsibilities:

Design and build scalable data lakes / platforms using technologies such as Snowflake, Databricks, BigQuery, or Redshift
Develop and optimize large-scale data pipelines for batch and streaming use cases
Ensure high performance, scalability, and cost efficiency across data systems
Work with complex data workflows, AI models, transformations, and orchestration
Apply best practices in data modeling, monitoring, security, and governance
Requirements:
5+ years in data engineering or data infrastructure roles
Proven experience building modern data platforms or data lakes from scratch
Strong Python programming skills and experience with Spark / PySpark
Knowledge of distributed systems and cloud-based architectures
Experience with ETL/ELT processes and handling data at scale
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking an experienced Solutions Data Engineer who possesses both technical depth and strong interpersonal skills to partner with internal and external teams to develop scalable, flexible, and cutting-edge solutions. Solutions Engineers collaborate with operations and business development to help craft solutions to customer business problems.
A Solutions Engineer works to balance various aspects of the project, from safety to design. Additionally, a Solutions Engineer researches advanced technology regarding best practices in the field and seeks out cost-effective solutions.
Job Description:
We're looking for a Solutions Engineer with deep experience in Big Data technologies, real-time data pipelines, and scalable infrastructure - someone who's been delivering critical systems under pressure and knows what it takes to bring complex data architectures to life. This isn't just about checking boxes on tech stacks - it's about solving real-world data problems, collaborating with smart people, and building robust, future-proof solutions.
In this role, you'll partner closely with engineering, product, and customers to design and deliver high-impact systems that move, transform, and serve data at scale. You'll help customers architect pipelines that are not only performant and cost-efficient but also easy to operate and evolve.
We want someone who's comfortable switching hats between low-level debugging, high-level architecture, and communicating clearly with stakeholders of all technical levels.
Key Responsibilities:
Build distributed data pipelines using technologies like Kafka, Spark (batch & streaming), Python, Trino, Airflow, and S3-compatible data lakes - designed for scale, modularity, and seamless integration across real-time and batch workloads.
Design, deploy, and troubleshoot hybrid cloud/on-prem environments using Terraform, Docker, Kubernetes, and CI/CD automation tools.
Implement event-driven and serverless workflows with precise control over latency, throughput, and fault tolerance trade-offs.
Create technical guides, architecture docs, and demo pipelines to support onboarding, evangelize best practices, and accelerate adoption across engineering, product, and customer-facing teams.
Integrate data validation, observability tools, and governance directly into the pipeline lifecycle.
Own end-to-end platform lifecycle: ingestion → transformation → storage (Parquet/ORC on S3) → compute layer (Trino/Spark).
Benchmark and tune storage backends (S3/NFS/SMB) and compute layers for throughput, latency, and scalability using production datasets.
Work cross-functionally with R&D to push performance limits across interactive, streaming, and ML-ready analytics workloads.
Operate and debug object store-backed data lake infrastructure, enabling schema-on-read access, high-throughput ingestion, advanced searching strategies, and performance tuning for large-scale workloads.
Requirements:
2-4 years in software, solutions, or infrastructure engineering, with 2-4 years focused on building and maintaining large-scale data pipelines and storage/database solutions.
Proficiency in Trino, Spark (Structured Streaming & batch) and solid working knowledge of Apache Kafka.
Coding background in Python (must-have); familiarity with Bash and scripting tools is a plus.
Deep understanding of data storage architectures including SQL, NoSQL, and HDFS.
Solid grasp of DevOps practices, including containerization (Docker), orchestration (Kubernetes), and infrastructure provisioning (Terraform).
Experience with distributed systems, stream processing, and event-driven architecture.
Hands-on familiarity with benchmarking and performance profiling for storage systems, databases, and analytics engines.
Excellent communication skills - you'll be expected to explain your thinking clearly, guide customer conversations, and collaborate across engineering and product teams.
This position is open to all candidates.
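
To make the Trino-over-object-storage lifecycle above concrete, here is a short sketch using the "trino" Python client to query a Parquet-backed lake table. The host, catalog, schema, and table names are hypothetical placeholders.

```python
# Hypothetical sketch: query a lake table through Trino's DB-API client.
import trino

conn = trino.dbapi.connect(
    host="trino.example.internal",   # hypothetical coordinator
    port=8080,
    user="solutions-demo",
    catalog="hive",                  # lake tables registered over Parquet on S3
    schema="analytics",
)
cur = conn.cursor()
cur.execute("""
    SELECT event_date, count(*) AS events
    FROM web_events
    WHERE event_date >= DATE '2026-01-01'
    GROUP BY event_date
    ORDER BY event_date
""")
for event_date, events in cur.fetchall():
    print(event_date, events)
```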
 