Big Data Infrastructure Engineer-2572

Posted 20 minutes ago
Location: Merkaz
Join our team and lead the design, implementation, and maintenance of cutting-edge Data Platforms and Big Data solutions. In this position, you will have a unique opportunity to significantly impact intelligence processes and contribute to counter-terrorism efforts while leveraging advanced Big Data technologies.
Design, install, maintain and upgrade Big Data technologies
Monitor and optimize system performance, identifying areas for improvement and implementing solutions
Provide technical support to internal teams and end users, ensuring seamless operations
Design and implement advanced infrastructure solutions using the Hadoop Ecosystem
Analyze user requirements, evaluate and recommend new technologies in the Big Data domain.
Requirements:
Experience: At least 5 years managing Big Data infrastructure solutions
Past experience as a DBA with focus on Oracle, Redis and MongoDB is a plus
Proven experience with Hadoop, Cloudera, Trino, Presto, Vertica or similar technologies
Strong ability to evaluate and recommend technology options
Ability to diagnose and resolve complex database issues
Proficiency in SQL, Spark, and Linux environments
Excellent communication and interpersonal skills with the ability to collaborate effectively in a team
Thrive in a fast-paced, dynamic environment with a creative problem-solving mindset
Advantages: Familiarity with Python or Ansible.
This position is open to all candidates.
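The database-diagnosis skill listed above ("diagnose and resolve complex database issues") can be sketched with a minimal, self-contained example: read the query plan, spot a full table scan, and fix it with an index. This is an illustrative sketch only; SQLite stands in for the posting's Oracle/Redis/MongoDB stack, and the `events` table and column names are invented.

```python
# Hypothetical sketch of diagnosing a slow query via the query plan.
# SQLite is used here purely for illustration; table/columns are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, ts TEXT)")
conn.executemany(
    "INSERT INTO events (user_id, ts) VALUES (?, ?)",
    [(i % 100, "2024-01-01") for i in range(10_000)],
)

query = "SELECT count(*) FROM events WHERE user_id = 7"

# Without an index, the planner falls back to a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

conn.execute("CREATE INDEX idx_events_user ON events(user_id)")

# With the index in place, the same query becomes an index search.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

print(plan_before)  # e.g. a SCAN over events
print(plan_after)   # e.g. a SEARCH using idx_events_user
```

The same inspect-plan, add-index, re-check workflow carries over to Oracle's `EXPLAIN PLAN` or MongoDB's `explain()`.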
 
Job ID: 8443662
Similar jobs that may interest you
03/11/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Shape the Future of Data - Join our mission to build the foundational pipelines and tools that power measurement, insights, and decision-making across our product, analytics, and leadership teams.
Develop the Platform Infrastructure - Build the core infrastructure that powers our data ecosystem, including the Kafka event system, DDL management with Terraform, internal data APIs on top of Databricks, and custom admin tools (e.g. Django-based interfaces).
Build Real-time Analytical Applications - Develop internal web applications to provide real-time visibility into platform behavior, operational metrics, and business KPIs integrating data engineering with user-facing insights.
Solve Meaningful Problems with the Right Tools - Tackle complex data challenges using modern technologies such as Spark, Kafka, Databricks, AWS, Airflow, and Python. Think creatively to make the hard things simple.
Own It End-to-End - Design, build, and scale our high-quality data platform by developing reliable and efficient data pipelines. Take ownership from concept to production and long-term maintenance.
Collaborate Cross-Functionally - Partner closely with backend engineers, data analysts, and data scientists to drive initiatives from both a platform and business perspective. Help translate ideas into robust data solutions.
Optimize for Analytics and Action - Design and deliver datasets in the right shape, location, and format to maximize usability and impact - whether that's through lakehouse tables, real-time streams, or analytics-optimized storage.
You will report to the Data Engineering Team Lead and help shape a culture of technical excellence, ownership, and impact.
Requirements:
5+ years of hands-on experience as a Data Engineer, building and operating production-grade data systems.
3+ years of experience with Spark, SQL, Python, and orchestration tools like Airflow (or similar).
Degree in Computer Science, Engineering, or a related quantitative field.
Proven track record in designing and implementing high-scale ETL pipelines and real-time or batch data workflows.
Deep understanding of data lakehouse and warehouse architectures, dimensional modeling, and performance optimization.
Strong analytical thinking, debugging, and problem-solving skills in complex environments.
Familiarity with infrastructure as code, CI/CD pipelines, and building data-oriented microservices or APIs.
Enthusiasm for AI-driven developer tools such as Cursor.AI or GitHub Copilot.
This position is open to all candidates.
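As a rough illustration of the real-time side of this role, here is a framework-free tumbling-window aggregation in plain Python. In practice this would be expressed in Spark Structured Streaming over Kafka topics; the event timestamps and names below are invented for the sketch.

```python
# Minimal tumbling-window count, a stand-in for a streaming aggregation.
# (ts, name) event tuples and the 60s window are illustrative assumptions.
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Count events per (window_start, event_name) over fixed windows."""
    counts = defaultdict(int)
    for ts, name in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, name)] += 1
    return dict(counts)

events = [(0, "click"), (5, "view"), (59, "click"), (60, "click"), (125, "view")]
result = tumbling_window_counts(events)
print(result)
# {(0, 'click'): 2, (0, 'view'): 1, (60, 'click'): 1, (120, 'view'): 1}
```

The same grouping logic is what a lakehouse table partitioned by window start would materialize for downstream analytics.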
 
Job ID: 8397812
Posted 2 hours ago
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking an experienced Solutions Data Engineer who possesses both technical depth and strong interpersonal skills to partner with internal and external teams to develop scalable, flexible, cutting-edge solutions. Solutions Engineers collaborate with operations and business development to craft solutions to customer business problems.
A Solutions Engineer balances various aspects of a project, from safety to design, researches advanced technology and best practices in the field, and seeks cost-effective solutions.
Job Description:
We're looking for a Solutions Engineer with deep experience in Big Data technologies, real-time data pipelines, and scalable infrastructure; someone who's been delivering critical systems under pressure and knows what it takes to bring complex data architectures to life. This isn't just about checking boxes on tech stacks; it's about solving real-world data problems, collaborating with smart people, and building robust, future-proof solutions.
In this role, you'll partner closely with engineering, product, and customers to design and deliver high-impact systems that move, transform, and serve data at scale. You'll help customers architect pipelines that are not only performant and cost-efficient but also easy to operate and evolve.
We want someone who's comfortable switching hats between low-level debugging, high-level architecture, and communicating clearly with stakeholders of all technical levels.
Key Responsibilities:
Build distributed data pipelines using technologies like Kafka, Spark (batch & streaming), Python, Trino, Airflow, and S3-compatible data lakes, designed for scale, modularity, and seamless integration across real-time and batch workloads.
Design, deploy, and troubleshoot hybrid cloud/on-prem environments using Terraform, Docker, Kubernetes, and CI/CD automation tools.
Implement event-driven and serverless workflows with precise control over latency, throughput, and fault tolerance trade-offs.
Create technical guides, architecture docs, and demo pipelines to support onboarding, evangelize best practices, and accelerate adoption across engineering, product, and customer-facing teams.
Integrate data validation, observability tools, and governance directly into the pipeline lifecycle.
Own end-to-end platform lifecycle: ingestion → transformation → storage (Parquet/ORC on S3) → compute layer (Trino/Spark).
Benchmark and tune storage backends (S3/NFS/SMB) and compute layers for throughput, latency, and scalability using production datasets.
Work cross-functionally with R&D to push performance limits across interactive, streaming, and ML-ready analytics workloads.
Operate and debug object store-backed data lake infrastructure, enabling schema-on-read access, high-throughput ingestion, advanced searching strategies, and performance tuning for large-scale workloads.
Requirements:
2-4 years in software/solution or infrastructure engineering, with 2-4 years focused on building/maintaining large-scale data pipelines, storage & database solutions.
Proficiency in Trino, Spark (Structured Streaming & batch) and solid working knowledge of Apache Kafka.
Coding background in Python (must-have); familiarity with Bash and scripting tools is a plus.
Deep understanding of data storage architectures including SQL, NoSQL, and HDFS.
Solid grasp of DevOps practices, including containerization (Docker), orchestration (Kubernetes), and infrastructure provisioning (Terraform).
Experience with distributed systems, stream processing, and event-driven architecture.
Hands-on familiarity with benchmarking and performance profiling for storage systems, databases, and analytics engines.
Excellent communication skills; you'll be expected to explain your thinking clearly, guide customer conversations, and collaborate across engineering and product teams.
This position is open to all candidates.
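One of the trade-offs named above, controlling latency, throughput, and fault tolerance in event-driven workflows, can be sketched without any real infrastructure: retrying a flaky sink with exponential backoff bounds tail latency at the cost of throughput. The failing "sink" below is simulated; no Kafka or S3 client is involved, and all names are illustrative.

```python
# Hedged sketch of retry-with-exponential-backoff for a flaky delivery sink.
# The sink and its failure pattern are simulated for the example.
def deliver_with_retries(send, payload, max_attempts=4, base_delay=0.1):
    """Try send(payload); on failure, back off exponentially.

    Returns (succeeded, list_of_backoff_delays)."""
    delays = []
    for attempt in range(max_attempts):
        try:
            send(payload)
            return True, delays
        except ConnectionError:
            if attempt == max_attempts - 1:
                return False, delays
            delay = base_delay * (2 ** attempt)
            delays.append(delay)  # a real system would time.sleep(delay) here

attempts = {"n": 0}
def flaky_send(payload):
    attempts["n"] += 1
    if attempts["n"] < 3:  # the first two calls fail, the third succeeds
        raise ConnectionError("sink unavailable")

ok, delays = deliver_with_retries(flaky_send, {"id": 1})
print(ok, delays)  # True [0.1, 0.2]
```

Raising `max_attempts` improves fault tolerance but lengthens worst-case latency; capping it favors throughput and pushes failures to a dead-letter path instead.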
 
Job ID: 8442983
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
About the Role: We are seeking an experienced Senior Data Engineer to join our dynamic data team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure, ensuring the availability, reliability, and quality of our data. This role requires strong technical expertise, problem-solving skills, and the ability to collaborate across teams to deliver data-driven solutions.
Key Responsibilities:

Design, implement, and maintain robust, scalable, and high-performance data pipelines and ETL processes.
Develop and optimize data models, schemas, and storage solutions to support analytics and machine learning initiatives.
Collaborate with software engineers and product managers to understand data requirements and deliver high-quality solutions.
Ensure data quality, integrity, and governance across multiple sources and systems.
Monitor and troubleshoot data workflows, resolving performance and reliability issues.
Evaluate and implement new data technologies and frameworks to improve the data platform.
Document processes, best practices, and data architecture.
Mentor junior data engineers and contribute to team knowledge sharing.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
5+ years of experience in data engineering, ETL development, or a similar role.
Strong proficiency in SQL and experience with relational and NoSQL databases.
Experience with data pipeline frameworks and tools such as Apache Spark, Airflow, and Kafka - MUST
Familiarity with cloud platforms (AWS, GCP, or Azure) and their data services.
Solid programming skills in Python, Java, or Scala.
Strong problem-solving, analytical, and communication skills.
Knowledge of data governance, security, and compliance standards.
Experience with data warehousing, big data technologies, and data modeling best practices (e.g., ClickHouse, SingleStore, StarRocks).
This position is open to all candidates.
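The "ensure data quality, integrity, and governance" responsibility above usually reduces to a set of mechanical checks run before rows are loaded downstream. Here is a minimal stdlib sketch; the column names, rules, and sample rows are invented for illustration, not taken from the posting.

```python
# Illustrative data-quality gate: required-column and uniqueness checks
# applied to a batch of rows before loading. All names are hypothetical.
def validate(rows, required, unique_key):
    """Return a list of human-readable violations found in `rows`."""
    errors = []
    seen = set()
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) is None:
                errors.append(f"row {i}: missing {col}")
        key = row.get(unique_key)
        if key in seen:
            errors.append(f"row {i}: duplicate {unique_key}={key}")
        seen.add(key)
    return errors

rows = [
    {"id": 1, "amount": 10.0},
    {"id": 1, "amount": 5.0},   # duplicate primary key
    {"id": 2, "amount": None},  # null in a required column
]
errors = validate(rows, required=["id", "amount"], unique_key="id")
print(errors)
```

In a production pipeline the same checks would run as an Airflow task or a Spark job, failing the run (or quarantining rows) instead of returning a list.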
 
Job ID: 8437853
Confidential company
Location: Herzliya
Job Type: Full Time
We are seeking an experienced Data Platform Engineer to join our Storage Analytics team. You will design and build data solutions that provide critical insights into storage performance and usage across Apple's entire device ecosystem.
Description:
Working with large-scale telemetry data from millions of Apple devices worldwide, you'll support multiple CoreOS Storage teams, including Software Update, Backup/Restore/Migration, Storage Attribution, and other storage domains.
Responsibilities
Design, build, and maintain scalable data processing infrastructure to handle large-scale telemetry from Apple's global device fleet
Develop highly scalable data pipelines to ingest and process storage performance metrics, usage patterns, and system telemetry with actionable alerting and anomaly detection
Build and maintain robust data platforms and ETL frameworks that enable CoreOS Storage teams to access, process, and derive insights from storage telemetry data
Engineer automated data delivery systems and APIs that serve processed storage metrics to various engineering teams across different storage domains
Requirements:
Bachelor's degree in Computer Science or related technical field
4+ years of professional experience in data modeling, pipeline development, and software engineering
Programming Languages: excellent programming skills in Python with strong computer science foundations (data structures, low-level parallelization).
Database Management: Strong SQL skills and hands-on experience with relational databases and query engines (PostgreSQL, Impala, Trino)
Experience with data analysis tools and libraries (Pandas/Polars, NumPy, dbt)
Experience with big data technologies (Kafka, Spark, Databricks, S3)
Experience with Apache Airflow, Dagster, or similar data orchestration frameworks for workflow orchestration, scheduling, and monitoring
Experience with containerization and orchestration (Docker, Kubernetes)
Visualization & Reporting: strong proficiency in creating and maintaining Tableau/Grafana dashboards and workflows
Preferred Qualifications:
Master's or PhD in Computer Science or related field
Deep expertise in data principles, data architecture, and data modeling
Strong problem-solving skills and meticulous attention to detail, with the ability to tackle loosely defined problems
This position is open to all candidates.
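The "actionable alerting and anomaly detection" responsibility above can be sketched with a rolling z-score over a telemetry series: flag any point far outside the distribution of the history seen so far. The metric, values, and 3-sigma threshold below are invented for the example; a production system would run this over streaming storage telemetry.

```python
# Illustrative anomaly detector: flag points whose z-score against the
# preceding history exceeds a threshold. Data and threshold are invented.
from statistics import mean, stdev

def zscore_anomalies(series, threshold=3.0, warmup=5):
    """Indices of points more than `threshold` stdevs from prior history."""
    anomalies = []
    for i in range(warmup, len(series)):
        history = series[:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

latency_ms = [10, 11, 9, 10, 12, 10, 11, 95, 10, 9]
print(zscore_anomalies(latency_ms))  # [7] - the 95ms spike
```

Note the `warmup` guard: with too little history the standard deviation is meaningless, so early points are never flagged.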
 
Job ID: 8407471
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an experienced and passionate Staff Data Engineer to join our Data Platform group in TLV as a Tech Lead. As the Group's Tech Lead, you'll shape and implement the technical vision and architecture while staying hands-on across three specialized teams: Data Engineering Infra, Machine Learning Platform, and Data Warehouse Engineering, forming the backbone of Lemonade's data ecosystem.

The group's mission is to build a state-of-the-art Data Platform that drives Lemonade toward becoming the most precise and efficient insurance company on the planet. By embracing Data Mesh principles, we create tools that empower teams to own their data while leveraging a robust, self-serve data infrastructure. This approach enables Data Scientists, Analysts, Backend Engineers, and other stakeholders to seamlessly access, analyze, and innovate with reliable, well-modeled, and queryable data, at scale.

In this role you'll:
Technically lead the group by shaping the architecture, guiding design decisions, and ensuring the technical excellence of the Data Platforms three teams

Design and implement data solutions that address both applicative needs and data analysis requirements, creating scalable and efficient access to actionable insights

Drive initiatives in Data Engineering Infra, including building robust ingestion layers, managing streaming ETLs, and guaranteeing data quality, compliance, and platform performance

Develop and maintain the Data Warehouse, integrating data from various sources for optimized querying, analysis, and persistence, supporting informed decision-making

Leverage data modeling and transformations to structure, cleanse, and integrate data, enabling efficient retrieval and strategic insights

Build and enhance the Machine Learning Platform, delivering infrastructure and tools that streamline the work of Data Scientists, enabling them to focus on developing models while benefiting from automation for production deployment, maintenance, and improvements. Support cutting-edge use cases like feature stores, real-time models, point-in-time (PIT) data retrieval, and telematics-based solutions

Collaborate closely with other Staff Engineers across Lemonade to align on cross-organizational initiatives and technical strategies

Work seamlessly with Data Engineers, Data Scientists, Analysts, Backend Engineers, and Product Managers to deliver impactful solutions

Share knowledge, mentor team members, and champion engineering standards and technical excellence across the organization
Requirements:
8+ years of experience in data-related roles such as Data Engineer, Data Infrastructure Engineer, BI Engineer, or Machine Learning Platform Engineer, with significant experience in at least two of these areas

A B.Sc. in Computer Science or a related technical field (or equivalent experience)

Extensive expertise in designing and implementing Data Lakes and Data Warehouses, including strong skills in data modeling and building scalable storage solutions

Proven experience in building large-scale data infrastructures, including both batch processing and streaming pipelines

A deep understanding of Machine Learning infrastructure, including tools and frameworks that enable Data Scientists to efficiently develop, deploy, and maintain models in production, an advantage

Proficiency in Python, Pulumi/Terraform, Apache Spark, AWS, Kubernetes (K8s), and Kafka for building scalable, reliable, and high-performing data solutions

Strong knowledge of databases, including SQL (schema design, query optimization) and NoSQL, with a solid understanding of their use cases

Ability to work in an office environment a minimum of 3 days a week

Enthusiasm about learning and adapting to the exciting world of AI; a commitment to exploring this field is a fundamental part of our culture
This position is open to all candidates.
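The "point-in-time (PIT) data retrieval" use case mentioned above amounts to an as-of lookup: given a timestamped history of feature updates, return the latest value visible at a requested time, so training data never leaks future information. A minimal stdlib sketch, with invented timestamps and values:

```python
# As-of (point-in-time) lookup over a sorted update history.
# Feature timestamps and values are invented for illustration.
from bisect import bisect_right

def as_of(timestamps, values, t):
    """Latest value whose timestamp <= t, or None if none exists yet."""
    i = bisect_right(timestamps, t)
    return values[i - 1] if i else None

ts = [100, 200, 300]        # sorted feature-update times
vals = ["v1", "v2", "v3"]

print(as_of(ts, vals, 250))  # "v2": the value in effect at t=250
print(as_of(ts, vals, 99))   # None: no value existed yet
print(as_of(ts, vals, 300))  # "v3": an update at t is visible at t
```

A feature store generalizes this to many entities and features, but the correctness property, never returning a value written after `t`, is exactly this lookup.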
 
Job ID: 8420751
Posted 1 day ago
Confidential company
Location: Ra'anana
Job Type: Full Time
We are seeking a Senior Automation Engineer to join our Big Data team, where we build cutting-edge, AI-driven applications that empower the world's largest financial institutions in the fight against financial crime.
You'll be part of a dynamic group responsible for the analytics and developer experience foundations, enabling software developers, data engineers, and data scientists to deliver scalable, high-quality analytics solutions quickly. In this role, you'll ensure the quality, reliability, and performance of our intelligent cloud-based products through robust automated testing.
How will you make an impact?
Design, develop, and maintain automated tests for backend systems and data pipelines.
Implement and manage a scalable testing framework to support continuous integration and delivery.
Create test plans and maintain detailed documentation of testing processes, results, and quality metrics.
Collaborate with developers and product managers to understand system architecture and business requirements.
Participate in requirements analysis, technical reviews, and test planning.
Identify, document, and track defects, and drive resolution with development teams.
Monitor production environments, investigate anomalies, and push for timely fixes.
Ensure data health and correctness across systems.
Continuously improve QA processes, tools, and best practices.
Advocate for quality across the development lifecycle.
Requirements:
B.Sc. in Computer Science or related field (or equivalent experience).
5+ years of hands-on programming experience in Java, Python, JavaScript, C# or TypeScript.
Strong understanding of OOP/OOD principles and software design patterns.
Experience with AWS and cloud-native architectures.
Familiarity with unit testing, performance testing, and debugging techniques.
Hands-on experience with automation frameworks like Playwright, Selenium, or equivalent.
Proficiency with Git and collaborative development workflows.
Fast learner with excellent analytical and problem-solving skills.
Strong communication skills and a proactive, ownership-driven mindset.
Passion for building scalable, resilient systems in cloud environments.
A can-doer who can take a task from plan to completion
A team player with excellent communication and presentation skills
What Would Make You Stand Out?
Experience with Big Data technologies such as Kafka, Flink, Spark
Exposure to DevOps practices and tools like Jenkins, Terraform
Background in data quality assurance and monitoring
Experience with test management tools such as Jira, X-Ray, or similar.
This position is open to all candidates.
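The automated-testing duties above, for data pipelines rather than UIs, typically mean asserting on row counts, schema stability, and value invariants after each transformation. A small stdlib sketch; the `normalize` transformation and sample data are stand-ins, not the company's code.

```python
# Illustrative automated checks for a data-pipeline stage.
# The transformation under test is a hypothetical stand-in.
def normalize(records):
    """Deduplicate by id and drop rows with non-positive amounts."""
    seen, out = set(), []
    for r in records:
        if r["id"] not in seen and r["amount"] > 0:
            seen.add(r["id"])
            out.append({"id": r["id"], "amount": float(r["amount"])})
    return out

raw = [{"id": 1, "amount": 5}, {"id": 1, "amount": 5}, {"id": 2, "amount": -3}]
clean = normalize(raw)

# The kinds of assertions an automated suite would run on every build:
assert len(clean) == 1                      # dedup + filter applied
assert set(clean[0]) == {"id", "amount"}    # schema is stable
assert all(r["amount"] > 0 for r in clean)  # invariant holds
print("pipeline checks passed")
```

In a real suite these assertions would live in pytest cases wired into CI, with larger fixtures covering edge cases (empty batches, all-duplicate batches, boundary amounts).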
 
Job ID: 8441100
Location: Tel Aviv-Yafo
Job Type: Full Time
Our company's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to our company's needs, with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities, and be enthusiastic to take on new problems across the full stack as we continue to push technology forward.
The Waze Engineering Productivity (EngProd) team is the multiplier for all of Waze engineering.
In this role, you will design, build, and own the foundational systems that empower developers to ship with speed, quality, and confidence. You will be maintaining systems and leading the charge on Waze's technical initiatives.
Waze is where people and technology meet to solve transportation challenges. It's a platform that empowers users to contribute road data and edit Waze maps to improve the way we move about the world. As the social navigation pioneer, Waze leverages mobile technology and a passionate global community to redefine expectations of today's maps.
Responsibilities
Develop and maintain back-end services and libraries, in the Java ecosystem, that underpin the development environment.
Take ownership of components within technical initiatives, such as the company-wide migration to our company3 or the rollout of new Artificial Intelligence (AI)-powered developer tools.
Deploy and manage mission-critical services on our company Cloud Platform (GCP) using technologies like Kubernetes and Docker, ensuring high availability and performance.
Collaborate with engineering teams across Waze to understand their issues, gather requirements, and deliver solutions that make their workflows efficient.
Work with Java, Python and our company's internal tooling to select the right technology for the job.
Requirements:
Minimum qualifications:
Bachelor's degree or equivalent practical experience.
2 years of experience with software development or 1 year of experience with an advanced degree in an industry setting.
2 years of experience with developing large-scale infrastructure, distributed systems or networks, or with compute technologies, storage or hardware architecture.
2 years of experience in software development with software design, architecture, and shipping production-grade systems.
Preferred qualifications:
Experience in Java and building production-grade back-end services and distributed systems.
Experience with public cloud platforms and container orchestration technologies like Kubernetes.
Experience with developer productivity, tooling, and building infrastructure.
Experience with build systems (e.g., Gradle, Bazel), Continuous Integration/Continuous Deployment (CI/CD) pipelines, and other developer-focused tooling.
Experience with leading technical projects from design to completion, with excellent architectural and system design skills.
Excellent problem-solving, communication, and collaboration skills, with the ability to work across multiple engineering teams.
This position is open to all candidates.
 
Job ID: 8412759
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
This role provides the opportunity to lead end-to-end design & implementation of advanced server-side features using a wide range of technologies: Apache Spark, Apache Airflow, Scala, k8s, Node.js (JS/TypeScript) with MongoDB, Redis, Kafka, Docker, and Flink (Big Data stream processing).

If you are up for the challenge and you have the XM factor, come and join us!
Requirements:
5+ years of experience in software development with proven ability to take full responsibility and lead advanced software projects that require team collaboration.
Capable of facing a wide range of cutting edge technologies and challenging development tasks, designing new features from scratch and diving into existing infrastructure.
2+ years of experience with Spark using Scala/Python - Must.
Experience in server-side development with APIs, Microservices Architecture (Docker), databases, caches, queues.
Experience in delivering fully tested production-level code using CI/CD pipeline and maintaining large-scale production systems.
Highly motivated leader with a can-do approach and strong interpersonal skills that thrives in a fast-paced startup environment.
Relevant Cyber Security experience - Advantage
Experience in cloud development (AWS / Azure / GCP) - Advantage
Experience with k8s operator, spark and airflow - Big Advantage
Experience with Node.js (JS/Typescript) - Advantage
This position is open to all candidates.
 
Job ID: 8437855
13/11/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a talented and dedicated ML & Big Data Analyst who will be a part of the Core Data group.
As part of this role, you will help us process and streamline rich and complicated datasets, explore and validate new data sources, and be part of the data science and data engineering force creating new features for our solution.
What does the day-to-day of an ML & Big Data Analyst at our company look like?
Apply your expertise in quantitative analysis and data mining to turn data into insights.
Conduct research and develop tools that will ensure the quality of our company's data and algorithms.
Utilize your deep understanding of our company's data in order to evaluate and improve novel algorithms produced by the data scientists in your team.
Partner with Big Data Engineers to establish an infrastructure to scale solutions.
Requirements:
At least 2 years of work experience in a Tech Company - Must
At least 2 years experience in Python scripting - Must
Experience with SQL for data analysis - Must
Knowledge and understanding of statistical principles, concepts, methods, and standards - Must
Hadoop/Spark/AWS Infrastructure experience - Big Advantage
Ability to visualize data and experience with visualization tools and modules (mlplot, matplotlib, seaborn) - Big Advantage
Excellent verbal and communication skills (in English) - Must
Team player, autodidact, fast learner with excellent analytical, research and business skills.
This position is open to all candidates.
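The statistical grounding this posting asks for often shows up as batch-validation work: compare a new data batch against a reference with summary statistics to catch drift before it pollutes downstream models. A stdlib sketch; the 10% relative tolerance and the sample values are invented illustrations, not a company standard.

```python
# Illustrative drift check: flag a batch whose mean or stdev deviates
# more than rel_tol from a reference sample. Thresholds are invented.
from statistics import mean, stdev

def drifted(reference, batch, rel_tol=0.10):
    """True if the batch mean or stdev deviates >rel_tol from reference."""
    m_ref, s_ref = mean(reference), stdev(reference)
    m_new, s_new = mean(batch), stdev(batch)
    return (abs(m_new - m_ref) > rel_tol * abs(m_ref)
            or abs(s_new - s_ref) > rel_tol * s_ref)

reference = [10, 12, 11, 10, 12, 11, 10, 12]
same_ish  = [11, 10, 12, 11, 10, 12]
shifted   = [15, 16, 17, 15, 16, 17]

print(drifted(reference, same_ish))  # False: within tolerance
print(drifted(reference, shifted))   # True: mean moved well past 10%
```

Real validation suites add distribution-level tests (e.g. a Kolmogorov-Smirnov test via SciPy) on top of these cheap first-pass summaries.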
 
Job ID: 8413718
Posted 7 days ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a passionate and highly skilled Senior Golang Developer to join our dynamic Software Engineering team. The ideal candidate is an expert Go engineer with a strong background in building scalable, high-performance distributed backend systems on the cloud. You should be enthusiastic about working with cutting-edge technologies in an Agile environment and capable of owning complex components end-to-end, taking an idea through design, testing, and production monitoring.
What will you do:
Responsible for the full lifecycle of our distributed services, focusing on performance, reliability, and innovation:
Design & Development
* Architect, design, and implement robust backend services and distributed system components using Go, ensuring high scalability, reliability, and performance.
* Develop microservices and server-side infrastructure with clean, maintainable, and testable code.
* System Ownership: Take full ownership of complex components, moving them from initial concept design through development, testing, CI/CD, and production deployment and monitoring.
System Optimization & Data
* Optimize service performance, parallel processing, concurrency handling, and resource utilization in high-scale environments.
* Work with a variety of data stores and event-streaming technologies.
* Integrate services with technologies like OpenSearch, Redis, and column store databases as part of the system architecture.
* Implement secure development practices and production-ready standards.
Collaboration & DevOps
* Work closely with Data Engineering, DevOps, and Product teams to deliver high-quality, production-grade services.
* Develop and deploy applications in Kubernetes environments, leveraging strong DevOps best practices and CI/CD pipelines for smooth, reliable releases.
* Participate actively in code reviews, design discussions, and cross-team technical initiatives.
Innovation
* Explore new technologies, evaluate tools, and propose improvements to the existing stack and development processes, championing a culture of continuous improvement and technical excellence.
Requirements:
* 5+ years of backend software development experience.
* 3+ years of hands-on experience developing production services in Golang at a senior level.
* Deep understanding of distributed system concepts, microservices architecture, concurrency, and performance optimization in Go
* Strong experience building large, fault tolerant systems on Cloud platforms (AWS ecosystem experience is a significant advantage)
* Experience with SQL and NoSQL databases
* Hands-on experience with event streaming technologies (Kafka)
* Expertise with Docker and proven experience deploying and managing applications in Kubernetes in production environments.
* Solid understanding of CI/CD pipelines, unit/integration testing, and Agile methodologies
* Bachelor's or Master's degree in Computer Science or experience in a related technical field
* Strong ownership mentality, collaboration skills, and the ability to lead by technical example
* Ability to operate effectively in a fast paced, innovative environment, driving technical solutions autonomously
* Excellent verbal and written communication skills in English.
This position is open to all candidates.
 
Job ID: 8432929