Senior Solutions Engineer, Big Data & Data Infrastructure
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking an experienced Solutions Data Engineer who possesses both technical depth and strong interpersonal skills to partner with internal and external teams to develop scalable, flexible, and cutting-edge solutions. Solutions Engineers collaborate with operations and business development to help craft solutions to customer business problems.
A Solutions Engineer balances various aspects of a project, from safety to design. Additionally, a Solutions Engineer researches advanced technology and best practices in the field and seeks out cost-effective solutions.
Job Description:
We're looking for a Solutions Engineer with deep experience in Big Data technologies, real-time data pipelines, and scalable infrastructure: someone who's been delivering critical systems under pressure and knows what it takes to bring complex data architectures to life. This isn't just about checking boxes on tech stacks; it's about solving real-world data problems, collaborating with smart people, and building robust, future-proof solutions.
In this role, you'll partner closely with engineering, product, and customers to design and deliver high-impact systems that move, transform, and serve data at scale. You'll help customers architect pipelines that are not only performant and cost-efficient but also easy to operate and evolve.
We want someone who's comfortable switching hats between low-level debugging, high-level architecture, and communicating clearly with stakeholders of all technical levels.
Key Responsibilities:
Build distributed data pipelines using technologies like Kafka, Spark (batch & streaming), Python, Trino, Airflow, and S3-compatible data lakes, designed for scale, modularity, and seamless integration across real-time and batch workloads.
Design, deploy, and troubleshoot hybrid cloud/on-prem environments using Terraform, Docker, Kubernetes, and CI/CD automation tools.
Implement event-driven and serverless workflows with precise control over latency, throughput, and fault tolerance trade-offs.
Create technical guides, architecture docs, and demo pipelines to support onboarding, evangelize best practices, and accelerate adoption across engineering, product, and customer-facing teams.
Integrate data validation, observability tools, and governance directly into the pipeline lifecycle.
Own end-to-end platform lifecycle: ingestion → transformation → storage (Parquet/ORC on S3) → compute layer (Trino/Spark).
Benchmark and tune storage backends (S3/NFS/SMB) and compute layers for throughput, latency, and scalability using production datasets.
Work cross-functionally with R&D to push performance limits across interactive, streaming, and ML-ready analytics workloads.
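The latency/throughput/fault-tolerance trade-off called out above usually surfaces first as a batching policy. As a rough, framework-free illustration (function and parameter names are hypothetical, not from this posting), here is a micro-batcher that flushes on either a size cap or a time deadline:

```python
import time

def micro_batch(events, max_batch_size, max_wait_s, now=time.monotonic):
    """Group an event stream into batches, flushing when either the size
    cap or the wait deadline is hit: the classic latency/throughput knob."""
    batch, batches = [], []
    deadline = now() + max_wait_s
    for event in events:
        batch.append(event)
        if len(batch) >= max_batch_size or now() >= deadline:
            batches.append(batch)
            batch = []
            deadline = now() + max_wait_s
    if batch:
        batches.append(batch)  # flush the tail
    return batches
```

Raising `max_batch_size` favors throughput; shrinking `max_wait_s` favors latency. Kafka producers and Spark Structured Streaming expose the same knob under different names (e.g. linger/batch-size settings, trigger intervals).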
Requirements:
2-4 years in software, solutions, or infrastructure engineering, with 2-4 years focused on building and maintaining large-scale data pipelines and storage/database solutions.
Proficiency in Trino, Spark (Structured Streaming & batch) and solid working knowledge of Apache Kafka.
Coding background in Python (must-have); familiarity with Bash and scripting tools is a plus.
Deep understanding of data storage architectures including SQL, NoSQL, and HDFS.
Solid grasp of DevOps practices, including containerization (Docker), orchestration (Kubernetes), and infrastructure provisioning (Terraform).
Experience with distributed systems, stream processing, and event-driven architecture.
Hands-on familiarity with benchmarking and performance profiling for storage systems, databases, and analytics engines.
Excellent communication skills: you'll be expected to explain your thinking clearly, guide customer conversations, and collaborate across engineering and product teams.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Engineer.
As a Senior Data Engineer, you'll be more than just a coder: you'll be the architect of our data ecosystem. We're looking for someone who can design scalable, future-proof data pipelines and connect the dots between DevOps, backend engineers, data scientists, and analysts.
You'll lead the design, build, and optimization of our data infrastructure, from real-time ingestion to supporting machine learning operations. Every choice you make will be data-driven and cost-conscious, ensuring efficiency and impact across the company.
Beyond engineering, you'll be a strategic partner and problem-solver, sometimes diving into advanced analysis or data science tasks. Your work will directly shape how we deliver innovative solutions and support our growth at scale.
Responsibilities:
Design and Build Data Pipelines: Architect, build, and maintain our end-to-end data pipeline infrastructure to ensure it is scalable, reliable, and efficient.
Optimize Data Infrastructure: Manage and improve the performance and cost-effectiveness of our data systems, with a specific focus on optimizing pipelines and usage within our Snowflake data warehouse. This includes implementing FinOps best practices to monitor, analyze, and control our data-related cloud costs.
Enable Machine Learning Operations (MLOps): Develop the foundational infrastructure to streamline the deployment, management, and monitoring of our machine learning models.
Support Data Quality: Optimize ETL processes to handle large volumes of data while ensuring data quality and integrity across all our data sources.
Collaborate and Support: Work closely with data analysts and data scientists to support complex analysis, build robust data models, and contribute to the development of data governance policies.
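As a sketch of the FinOps angle mentioned above (all names here are hypothetical; in practice the input rows would come from the warehouse's metering/usage views), here is a minimal report turning per-warehouse credit usage into dollar spend:

```python
from collections import defaultdict

def warehouse_cost_report(usage_rows, credit_price_usd):
    """Aggregate per-warehouse credit usage into dollar cost, biggest
    spender first. usage_rows: iterable of (warehouse_name, credits_used)."""
    totals = defaultdict(float)
    for warehouse, credits in usage_rows:
        totals[warehouse] += credits
    return sorted(
        ((w, round(c * credit_price_usd, 2)) for w, c in totals.items()),
        key=lambda row: row[1],
        reverse=True,  # rank by spend descending
    )
```

A report like this is the starting point for the monitor/analyze/control loop the posting describes: the ranked output tells you which warehouse to tune first.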
Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field.
Experience: 5+ years of hands-on experience as a Data Engineer or in a similar role.
Data Expertise: Strong understanding of data warehousing concepts, including a deep familiarity with Snowflake.
Technical Skills:
Proficiency in Python and SQL.
Hands-on experience with workflow orchestration tools like Airflow.
Experience with real-time data streaming technologies like Kafka.
Familiarity with container orchestration using Kubernetes (K8s) and dependency management with Poetry.
Cloud Infrastructure: Proven experience with AWS cloud services (e.g., EC2, S3, RDS).
This position is open to all candidates.
 
21 hours ago
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Senior Backend Engineer, Data Infrastructure, to design and optimize high-performance infrastructure capable of handling massive data volumes. In this role, you'll lead backend development, architect scalable data pipelines, and ensure seamless data processing. Your expertise in distributed systems and performance optimization will drive innovation, transforming intricate security challenges into efficient, resilient solutions.

Responsibilities
Be a significant part of the development of backend infrastructure to efficiently handle, process, and store vast volumes of data.
Architect and build a scalable, high-performance backend system that supports various services within the platform.
Translate intricate requirements into meticulous backend design plans, maintaining a focus on software design, code quality, and performance.
Collaborate with cross-functional teams to implement backend and data-handling techniques.
Apply your expertise to create robust backend solutions.
Leverage your proficiency in cloud platforms such as AWS, GCP, or Azure to drive strong backend engineering practices.
Demonstrate strong debugging skills, identifying issues such as race conditions and memory leaks within the backend system. Solve complex backend problems with an analytical mindset and contribute to a positive team dynamic.
Bring your excellent interpersonal skills to foster collaboration and maintain a positive attitude within the team.
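The race conditions mentioned above typically come from unsynchronized read-modify-write on shared state. A minimal Python sketch of the standard fix (class and function names are hypothetical):

```python
import threading

class SafeCounter:
    """Increment shared state from many threads without losing updates.
    Without the lock, `value += 1` (a read-modify-write sequence) can
    interleave across threads and silently drop increments."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self, times):
        for _ in range(times):
            with self._lock:  # serialize the read-modify-write
                self.value += 1

def run(threads=8, per_thread=10_000):
    counter = SafeCounter()
    workers = [threading.Thread(target=counter.increment, args=(per_thread,))
               for _ in range(threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return counter.value  # deterministic: threads * per_thread
```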
Requirements:
5+ years of experience in server-side development using Java, Python, Go, or .NET.
Strong background in microservices architecture and related tools (Docker, Kubernetes, etc.).
Hands-on experience with large-scale applications, handling high data volumes and intensive traffic.
Proficiency with various database technologies such as MySQL, Cassandra, Neo4J, Google BigQuery, Amazon Redshift, Elasticsearch, and PostgreSQL.
Solid understanding of message queuing, stream processing, and scalable big data storage solutions.
Experience in building and optimizing data pipelines and analytics workflows.
Familiarity with streaming technologies such as Amazon Kinesis and Apache Kafka.
Proven ability to bootstrap projects and develop systems from the ground up.
Strong ownership and leadership skills, with a track record of driving initiatives forward.
This position is open to all candidates.
 
24/07/2025
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Scientist (Applied AI).
As a Senior Data Scientist on our Applied AI team, you will join our Tel Aviv office and play a hands-on, end-to-end role in delivering innovative capabilities that help mayors and other city leaders understand their communities and improve the lives of millions worldwide. Reporting to the Applied AI Team Lead, you will collaborate with product, engineering, and fellow data-science teammates to turn cutting-edge research into production-ready solutions: quickly, reliably, and with maximum real-world impact. You'll work with a rich mix of data sources (including social media, news stories, survey results, resident feedback, and more) to create models and AI-powered features that scale.
Day to Day:
Design, build, and deploy AI and machine-learning solutions, from data exploration through modeling, evaluation, and integration into customer-facing products and internal tools.
Optimize models for quality and scalability through feature engineering, hyper-parameter tuning, runtime profiling, and thoughtful architectural choices.
Build and maintain data pipelines using tools such as Airflow, Spark, and Databricks to ensure clean, reliable inputs for downstream models.
Collaborate closely with product managers, engineers, and designers to refine problem statements, iterate rapidly, and ship impactful features on schedule.
Champion technical excellence by conducting code reviews, sharing best practices, and mentoring teammates across data science and engineering.
Stay current with the latest developments in AI, including LLMs, RAG systems, and AI agents, and proactively propose ways to incorporate new techniques into our workflows.
Work an in-person or hybrid schedule, spending at least three days per week in our Tel Aviv office.
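The modeling-and-evaluation loop above usually starts with simple binary classification metrics. A dependency-free sketch of the idea (in practice a library such as scikit-learn would supply these):

```python
def precision_recall(y_true, y_pred):
    """Binary precision and recall from parallel 0/1 label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of predicted positives, how many were right
    recall = tp / (tp + fn) if tp + fn else 0.0     # of actual positives, how many were found
    return precision, recall
```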
Requirements:
5+ years of hands-on experience developing and deploying machine-learning or data-science solutions with Python and SQL.
Proven, end-to-end experience building AI- and machine learning-based solutions from prototype to production deployment.
Demonstrated success shipping data-intensive services to production on cloud infrastructure (AWS preferred) using data tools such as PostgreSQL, Databricks, Spark, or Airflow.
Deep understanding of machine-learning fundamentals and practical expertise with frameworks such as TensorFlow, PyTorch, or scikit-learn.
Expertise in machine learning metrics and quality control.
Solid understanding of software-engineering best practices (including design patterns, data structures, and version control).
Excellent interpersonal and communication skills, with the ability to explain complex technical concepts to non-technical stakeholders and collaborate across teams.
It's even better if you have:
Experience with Agile development in fast-paced, delivery-driven environments.
Familiarity with CI/CD practices, containers, Kubernetes, and serverless or microservice architectures.
Experience with geospatial analysis, government data, survey research, or civic-tech applications.
A track record of contributing to open-source projects.
A college or graduate degree in a relevant field.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Data Engineering Tech Lead.
What will you be responsible for?
Lead the design and development of scalable, high-performance data workflows, including both batch pipelines and real-time data products.
Define, implement, and enforce engineering best practices related to code quality, testing, CI/CD pipelines, observability, and documentation.
Mentor, support, and grow a team of data engineers, fostering a collaborative and high-performance engineering culture.
Identify opportunities to create new data assets and features that expand product capabilities and value proposition.
Drive architectural decision-making in areas of data modeling, storage solutions, and compute resources within cloud environments such as Databricks and Snowflake.
Collaborate closely with cross-functional stakeholders, including Product, DevOps, and R&D, to ensure effective delivery and platform stability.
Promote and champion a data-driven mindset across the organization, balancing technical rigor with business context and strategic goals.
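The batch/real-time split in the responsibilities above often comes down to whether an aggregate is recomputed over all data or maintained incrementally per event. A minimal sketch of the streaming side (class name hypothetical):

```python
class RunningMean:
    """Incrementally maintained mean: the streaming counterpart of a
    batch AVG(). Each event updates the aggregate in O(1) time and O(1)
    memory, so no history needs to be retained."""
    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def update(self, x):
        self.count += 1
        self.mean += (x - self.mean) / self.count  # incremental-mean update
        return self.mean
```

The same pattern (a small state object updated per event, checkpointed for fault tolerance) underlies stateful operators in stream processors.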
Requirements:
Minimum 5 years of hands-on experience designing, building, and maintaining large-scale data pipelines for both batch processing and streaming use cases.
Deep expertise in Python and SQL, with a focus on writing clean, performant, and maintainable code.
Strong analytical and problem-solving skills, with the ability to break down complex technical challenges and align solutions to business objectives.
Solid background in data modeling, analytics, and designing architectures for scalability, performance, and cost efficiency.
Practical experience working with modern OLAP systems and cloud data platforms, including Databricks, Snowflake, or BigQuery.
Familiarity with AI agent protocols (such as A2A, MCP) and LLM-related technologies (e.g., vector databases, embeddings) is a plus.
AI-savvy, with comfort adopting AI tools and staying current with emerging AI trends and technologies.
This position is open to all candidates.
 
04/08/2025
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We are looking for a Senior Backend Engineer to join our AI/ML team. In this role, you'll work closely with data scientists to transform cutting-edge machine learning models into scalable, production-ready services. You will take ownership of designing, building, and maintaining the backend systems that power our AI-driven features.

This is a key position that bridges the gap between data science and production engineering, ensuring high performance, reliability, and maintainability of our ML-powered products.

Responsibilities:
Collaborate with data scientists to understand modeling outputs and convert them into deployable services.
Design and develop robust, scalable backend systems and microservices to support AI use cases.
Own the deployment and monitoring of ML models in production (with CI/CD, logging, observability).
Implement data processing pipelines in support of model training and inference.
Ensure software adheres to best practices in architecture, testing, and documentation.
Optimize model inference for latency, throughput, and resource efficiency.
Contribute to design decisions and technical strategy alongside AI and infrastructure leads.
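One of the simpler latency levers for the inference-optimization work above is caching repeated inputs so they skip the model entirely. A sketch, assuming a pure model function with hashable inputs (all names hypothetical):

```python
from functools import lru_cache

def make_cached_predictor(model_fn, maxsize=1024):
    """Wrap a pure, hashable-input model function with an LRU cache.
    Repeated feature vectors return the memoized result instead of
    re-running inference; only valid when model_fn is deterministic."""
    @lru_cache(maxsize=maxsize)
    def predict(features):
        return model_fn(features)
    return predict
```

This only helps for workloads with repeated inputs; the other common levers the posting implies (request batching, quantization, hardware choice) trade accuracy or throughput instead.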
Requirements:
5+ years of experience as a backend/software engineer, preferably in Python, Go, or Java.
Strong experience with designing APIs, building microservices, and integrating third-party services.
Familiarity with ML workflows: model serving, feature extraction, and batch vs real-time inference.
Strong architectural and design skills, including working with message queues like Kafka, relational and NoSQL databases, and distributed systems.
Experience deploying services in containerized environments (e.g., Docker, Kubernetes).
Proficient with cloud-native tools or on-prem equivalents (e.g., logging, tracing, metrics).
Knowledge of data processing frameworks (e.g., Pandas, Spark, Airflow) is a plus.
Comfortable reading and working with Python-based ML code (scikit-learn, TensorFlow, PyTorch, etc.).
Strong ownership mindset and a collaborative attitude.

Nice to Have:
Experience with model versioning and ML serving frameworks (e.g., MLflow, Seldon, Triton).
Understanding of data privacy/security implications in model and data pipelines.
Experience working in cross-functional teams with data scientists and product owners.
This position is open to all candidates.
 
10/08/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
A fast-growing, well-funded startup in the data domain is looking for a Software Development Team Lead to lead a team of engineers, reporting to the Director of Engineering. As a Software Development Team Lead, you'll lead a team of back-end, full-stack, and data engineers, working closely with Product Managers and other stakeholders to plan, kick off, and execute new product and feature development projects. Our ideal candidate is someone who values excellence, is hungry for success, and thrives when working with the best.
Job Summary: As a Software Development Team Lead, you'll be responsible for overseeing the design, development, and implementation of robust software solutions, including strategic full-stack applications and our big data architecture. You'll lead a team of engineers and collaborate with cross-functional teams to drive initiatives that support business objectives. Your expertise in big data technologies, including Elastic, Kafka, and Java, will be critical in ensuring the scalability and performance of our systems.
What you'll do:
Lead and mentor a team of engineers in the design and development of software solutions, including big data solutions and full-stack applications.
Architect and implement data processing pipelines and real-time data streaming.
Optimize and manage search capabilities.
Develop strategic full-stack applications to meet business needs.
Collaborate with product managers, data analysts, and other stakeholders to gather requirements and translate them into technical specifications.
Oversee code reviews, ensure best practices in coding and data handling, and maintain high-quality standards in software development.
Stay up-to-date with emerging trends and technologies in software engineering and big data, and recommend improvements to our architecture and processes.
Troubleshoot and resolve issues in a timely manner to minimize downtime and ensure system reliability.
Contribute to strategic planning and decision-making regarding architecture and tools.
Collect and analyze your team's KPIs.
Participate in customer calls, understand their use cases, and solve their problems.
Collaborate with software teams across different locations.
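The search-capability work above ultimately rests on an inverted index, the core structure engines like Elasticsearch build internally. A toy Python sketch of the idea (not Elastic's actual implementation; names hypothetical):

```python
from collections import defaultdict

def build_index(docs):
    """Map each token to the set of document ids containing it:
    the basic inverted index behind full-text search."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def search(index, query):
    """AND-search: ids of documents containing every query token."""
    token_sets = [index.get(tok, set()) for tok in query.lower().split()]
    return set.intersection(*token_sets) if token_sets else set()
```

Real engines layer tokenization/analysis, relevance scoring, and on-disk segment formats on top, but the lookup-then-intersect shape is the same.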
Requirements:
What you'll bring:
Bachelor's degree in Computer Science, Engineering, or a related field; Master's degree preferred.
3+ years of experience leading and managing a team of 5-8 engineers.
8+ years of experience in the software engineering field.
5+ years of experience in big data technologies, with a focus on Elastic and Kafka.
Proficiency in Java programming and experience with related frameworks.
Strong understanding of data modeling, ETL processes, and data warehousing.
Proven leadership skills with the ability to motivate and guide a team.
Excellent problem-solving abilities and strong analytical skills.
Strong communication skills, with the ability to convey technical concepts to non-technical stakeholders.
A solid understanding of CI/CD principles.
Experience working with both external and in-house APIs and SDKs.
Advantages:
Experience working directly with customers.
Experience with Docker and Kubernetes.
Experience with cloud platforms (e.g., AWS or Azure).
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Data Engineer to design and implement high-scale, data-intensive platforms, research and develop algorithmic solutions, and collaborate on key company initiatives. You will play a critical role within core data teams, which are responsible for managing and optimizing fundamental data assets.
What will you be responsible for?
Solve Complex Business Problems with Scalable Data Solutions
Develop and implement robust, high-scale data pipelines to power core assets.
Leverage cutting-edge technologies to tackle complex data challenges and enhance business operations.
Collaborate with Business Stakeholders to Drive Impact
Work closely with Product, Data Science, and Analytics teams to define priorities and develop solutions that directly enhance core products and user experience.
Build and Maintain a Scalable Data Infrastructure
Design and implement scalable, high-performance data infrastructure to support machine learning, analytics, and real-time data processing.
Continuously monitor and optimize data pipelines to ensure reliability, accuracy, and efficiency.
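Monitoring pipelines for reliability and accuracy, as described above, usually starts with contract checks on incoming rows that split good records from quarantined ones. A minimal sketch (field names hypothetical):

```python
def validate_rows(rows, required, non_negative=()):
    """Split dict-shaped rows into (good, bad) using simple contract
    checks: required fields present and non-null, and listed numeric
    fields >= 0. Bad rows are quarantined rather than dropped silently."""
    good, bad = [], []
    for row in rows:
        ok = (all(row.get(f) is not None for f in required)
              and all(isinstance(row.get(f), (int, float)) and row[f] >= 0
                      for f in non_negative))
        (good if ok else bad).append(row)
    return good, bad
```

Frameworks like Great Expectations generalize this pattern; the key operational choice is routing failures to a quarantine path with alerting instead of failing or dropping silently.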
Requirements:
3+ years of hands-on experience designing and implementing large-scale, server-side data solutions
4+ years of programming experience, preferably in Python and SQL, with a strong understanding of data structures and algorithms
Proven experience in building algorithmic solutions, data mining, and applying analytical methodologies to optimize data processing and insights
Proficiency with orchestration tools such as Airflow, Kubernetes, and Docker Swarm, ensuring seamless workflow automation
Experience working with Data Lakes and Apache Spark for processing large-scale datasets (strong advantage)
Familiarity with AWS services such as S3, Glue, EMR, and Redshift (nice to have)
Knowledge of tools such as Kafka, Databricks, and Jenkins (a plus)
Strong command of a variety of storage engines, including Relational (PostgreSQL, MySQL), Document-based (MongoDB), Time-series / Search (ClickHouse, Elasticsearch), Key-value (Redis)
Comfortable working with AI tools and staying ahead of emerging technologies and trends
This position is open to all candidates.
 
5 days ago
Location: Tel Aviv-Yafo and Netanya
Job Type: Full Time
Required Big Data & GenAI Engineering Lead
As a Big Data & GenAI Engineering Lead within our Data & AI Department, you will play a pivotal role in building the data and AI backbone that empowers product innovation and intelligent business decisions. You will lead the design and implementation of our next-generation lakehouse architecture, real-time data infrastructure, and GenAI-enriched solutions, helping drive automation, insights, and personalization at scale. In this role, you will architect and optimize our modern data platform while also integrating and operationalizing Generative AI models to support go-to-market use cases. This includes embedding LLMs and vector search into core data workflows, establishing secure and scalable RAG pipelines, and partnering cross-functionally to deliver impactful AI applications.
As a Big Data & GenAI Engineering Lead you will...
Design, lead, and evolve our petabyte-scale Lakehouse and modern data platform to meet performance, scalability, privacy, and extensibility goals.
Architect and implement GenAI-powered data solutions, including retrieval-augmented generation (RAG), semantic search, and LLM orchestration frameworks tailored to business and developer use cases.
Partner with product, engineering, and business stakeholders to identify and develop AI-first use cases, such as intelligent assistants, code insights, anomaly detection, and generative reporting.
Integrate open-source and commercial LLMs securely into data products using frameworks such as LangChain, or similar, to augment AI capabilities into data products.
Collaborate closely with engineering teams to drive instrumentation, telemetry capture, and high-quality data pipelines that feed both analytics and GenAI applications.
Provide technical leadership and mentorship to a cross-functional team of data and ML engineers, ensuring adherence to best practices in data and AI engineering.
Lead tool evaluation, architectural PoCs, and decisions on foundational AI/ML tooling (e.g., vector databases, feature stores, orchestration platforms).
Foster platform adoption through enablement resources, shared assets, and developer-facing APIs and SDKs for accessing GenAI capabilities.
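The retrieval half of the RAG pipelines described above boils down to nearest-neighbor search over embeddings. A toy sketch with plain Python lists (a production system would use a vector database and an embedding model; all names hypothetical):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, corpus, top_k=2):
    """Rank (doc_id, embedding) pairs by similarity to the query vector
    and return the top-k ids: the 'R' in RAG. Embeddings are assumed
    precomputed elsewhere by an embedding model."""
    scored = sorted(corpus, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]
```

The retrieved documents would then be stuffed into the LLM prompt as grounding context; the brute-force scan here is what ANN indexes in vector databases replace at scale.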
Requirements:
8+ years of experience in data engineering, software engineering, or MLOps, with hands-on leadership in designing modern data platforms and distributed systems.
Proven experience implementing GenAI applications or infrastructure (e.g., building RAG pipelines, vector search, or custom LLM integrations).
Deep understanding of big data technologies (Kafka, Spark, Iceberg, Presto, Airflow) and cloud-native data stacks (e.g., AWS, GCP, or Azure).
Proficiency in Python and experience with GenAI frameworks like LangChain, LlamaIndex, or similar.
Familiarity with modern ML toolchains and model lifecycle management (e.g., MLflow, SageMaker, Vertex AI).
Experience deploying scalable and secure AI solutions with proper attention to privacy, hallucination risk, cost management, and model drift.
Ability to operate in ambiguity, lead complex projects across functions, and translate abstract goals into deliverable solutions.
Excellent communication and collaboration skills, with a passion for pushing boundaries in both the data and AI domains.
This position is open to all candidates.
 
19 hours ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a hands-on Data Specialist to join our growing data group, working on the practical backbone of high-scale, financial-grade systems. You'll work closely with engineers, BI, product, and business stakeholders to design, build, and optimize data pipelines and integrations in a cloud-native environment.
If you thrive on solving complex data challenges, enjoy getting deep into the code, and want to make an impact on fintech infrastructure, we'd love to meet you.
Your Day-to-Day:
Develop, maintain, and optimize robust data pipelines and integrations across multiple systems
Build and refine data models to support analytics and operational needs
Work hands-on with data orchestration, transformation, and cloud infrastructure (AWS/Azure)
Collaborate with engineering, BI, and business teams to translate requirements into scalable data solutions
Contribute to data governance, data quality, and monitoring initiatives
Support implementation of best practices in data management and observability
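Keeping multi-system integrations safe to retry, per the pipeline work above, often means making loads idempotent via keyed upserts. A minimal in-memory sketch (a warehouse MERGE statement plays this role in practice; names hypothetical):

```python
def upsert(target, incoming, key="id"):
    """Merge incoming dict-records into target by key: matching keys are
    overwritten, new keys are inserted. Re-running the same batch leaves
    the result unchanged, so pipeline retries are safe (idempotent)."""
    by_key = {row[key]: dict(row) for row in target}
    for row in incoming:
        by_key[row[key]] = dict(row)  # last write wins per key
    return sorted(by_key.values(), key=lambda r: r[key])
```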
Requirements:
8+ years in data engineering, data architecture, or similar roles
Deep hands-on experience with PostgreSQL, Snowflake, Oracle, etc.
Strong experience with ETL/ELT, data integration (Kafka, Airflow)
Proven SQL and Python skills (must)
Experience with AWS or Azure cloud environments
Familiarity with BI tools (Looker, Power BI)
Knowledge of Kubernetes and distributed data systems
Experience in financial systems or fintech (advantage)
Strong ownership, problem-solving ability, and communication skills
Comfort working in a fast-paced, multi-system environment
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time, English speakers
We are growing and are looking for a Senior Data Platform Engineer who values personal and career growth, teamwork, and winning!
What your day will look like:
Design, plan, and build all aspects of the platform's data, machine learning (ML) pipelines, and infrastructure.
Build and optimize an AWS-based Data Lake using best practices in cloud architecture, data partitioning, metadata management, and security to support enterprise-scale data operations.
Collaborate with engineers, data analysts, data scientists, and other stakeholders to understand data needs.
Solve challenging data integration problems, utilizing optimal ETL/ELT patterns, frameworks, query techniques, and sourcing from structured and unstructured data sources.
Lead end-to-end data projects from infrastructure design to production monitoring.
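The data-partitioning best practice mentioned above commonly means Hive-style date-partitioned object keys in the S3 data lake, so query engines can prune partitions instead of scanning everything. A small sketch (the exact layout convention varies by team; names hypothetical):

```python
from datetime import date

def partition_key(dataset, event_date, filename):
    """Build a Hive-style partitioned object key (year=/month=/day=).
    Zero-padded parts keep keys lexicographically sorted by date, and
    engines like Athena/Spark/Trino can prune on these path segments."""
    return (f"{dataset}/year={event_date.year}"
            f"/month={event_date.month:02d}"
            f"/day={event_date.day:02d}/{filename}")
```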
Requirements:
5+ years of hands-on experience designing and maintaining big data pipelines across on-premises or hybrid cloud environments, with proficiency in both SQL and NoSQL databases within a SaaS framework.
Proficient in one or more programming languages: Python, Scala, Java, or Go.
Experienced with software engineering best practices and automation, including testing, code reviews, design documentation, and CI/CD.
Experienced in building and designing ML/AI-driven production infrastructures and pipelines.
Experienced in developing data pipelines and maintaining data lakes on AWS - big advantage.
Familiar with technologies such as Kafka, Snowflake, MongoDB, Airflow, Docker, Kubernetes (K8S), and Terraform - advantage.
Bachelor's degree in Computer Science or equivalent experience.
Strong communication skills, fluent in English, both written and verbal.
A great team player with a can-do approach.
This position is open to all candidates.
 