Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Data Infra Engineer
Main responsibilities:
Data Architecture Direction: Provide strategic direction for our data architecture, selecting the appropriate components for various tasks. Collaborate on requirements and make final decisions on system design and implementation.
Project Management: Manage end-to-end execution of high-performance, large-scale data-driven projects, including design, implementation, and ongoing maintenance.
Cost Optimization: Monitor and optimize cloud costs associated with data infrastructure and processes.
Efficiency and Reliability: Design and build monitoring tools to ensure the efficiency, reliability, and performance of data processes and systems.
DevOps Integration: Implement and manage DevOps practices to streamline development and operations, focusing on infrastructure automation, continuous integration/continuous deployment (CI/CD) pipelines, containerization, orchestration, and infrastructure as code. Ensure scalable, reliable, and efficient deployment processes.
Our stack: Azure, GCP, Kubernetes, ArgoCD, Jenkins, Databricks, Snowflake, Airflow, RDBMS, Spark, Kafka, Micro-Services, bash, Python, SQL.
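To make the stack above a little more concrete, here is a purely illustrative sketch of the kind of orchestration it implies: an Airflow DAG (Airflow 2.x syntax assumed) that submits a nightly Spark batch job. All names, paths, and the schedule are hypothetical and not taken from the posting.

```python
# Hypothetical sketch only: a nightly Spark batch job orchestrated by Airflow 2.x.
# dag_id, owner, and the spark-submit command are illustrative placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-infra",                 # placeholder team name
    "retries": 2,                          # retry transient failures
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="nightly_events_batch",         # placeholder DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                     # one run per day
    catchup=False,
    default_args=default_args,
) as dag:
    # Submit a PySpark job; the script path and flags are illustrative.
    run_spark_job = BashOperator(
        task_id="run_spark_job",
        bash_command="spark-submit --deploy-mode cluster jobs/events_batch.py",
    )
```

In practice a DAG like this would typically be deployed through the CI/CD and GitOps tooling named in the stack (Jenkins, ArgoCD) rather than by hand.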
Requirements:
Mandatory Qualifications:
5+ Years of Experience: Demonstrated experience as a DevOps professional with a strong focus on big data environments, or as a Data Engineer with strong DevOps skills.
Data Components Management: Experience managing and designing data infrastructure such as Snowflake, PostgreSQL, Kafka, Aerospike, and object stores.
DevOps Expertise: Proven experience in creating, establishing, and managing big data tools, including automation tasks. Extensive knowledge of DevOps concepts and tools, including Docker, Kubernetes, Terraform, ArgoCD, Linux OS, networking, load balancing, Nginx, etc.
Programming Skills: Proficiency in programming languages such as Python and Object-Oriented Programming (OOP) languages, with an emphasis on big data processing (e.g., PySpark). Experience with scripting languages like Bash and Shell for automation tasks.
Cloud Platforms: Hands-on experience with major cloud providers such as Azure, Google Cloud, or AWS.
Preferred Qualifications:
Performance Optimization: Experience in optimizing performance for big data tools and pipelines - Big Advantage.
Security Expertise: Experience in identifying and addressing security vulnerabilities within the data platform - Big Advantage.
CI/CD Pipelines: Experience in designing, implementing, and maintaining Continuous Integration/Continuous Deployment (CI/CD) pipelines - Advantage.
Data Pipelines: Experience in building big data pipelines - Advantage.
This position is open to all candidates.
 
Job ID: 8114395

Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required DevOps (Data Platform Group)
Main responsibilities:
Data Architecture Direction: Provide strategic direction for our data architecture, selecting the appropriate components for various tasks. Collaborate on requirements and make final decisions on system design and implementation.
Project Management: Manage end-to-end execution of high-performance, large-scale data-driven projects, including design, implementation, and ongoing maintenance.
Cost Optimization: Monitor and optimize cloud costs associated with data infrastructure and processes.
Efficiency and Reliability: Design and build monitoring tools to ensure the efficiency, reliability, and performance of data processes and systems.
DevOps Integration: Implement and manage DevOps practices to streamline development and operations, focusing on infrastructure automation, continuous integration/continuous deployment (CI/CD) pipelines, containerization, orchestration, and infrastructure as code. Ensure scalable, reliable, and efficient deployment processes.
Our stack: Azure, GCP, Kubernetes, ArgoCD, Jenkins, Databricks, Snowflake, Airflow, RDBMS, Spark, Kafka, Micro-Services, bash, Python, SQL.
Requirements:
Mandatory Qualifications:
3+ Years of Experience: Demonstrated experience as a DevOps professional with a strong focus on big data environments, or as a Data Engineer with strong DevOps skills.
Data Components Management: Experience managing and designing data infrastructure such as Snowflake, PostgreSQL, Kafka, Aerospike, and object stores.
DevOps Expertise: Proven experience in creating, establishing, and managing big data tools, including automation tasks. Extensive knowledge of DevOps concepts and tools, including Docker, Kubernetes, Terraform, ArgoCD, Linux OS, networking, load balancing, Nginx, etc.
Programming Skills: Proficiency in programming languages such as Python and Object-Oriented Programming (OOP) languages, with an emphasis on big data processing (e.g., PySpark). Experience with scripting languages like Bash and Shell for automation tasks.
Cloud Platforms: Hands-on experience with major cloud providers such as Azure, Google Cloud, or AWS.
Preferred Qualifications:
Performance Optimization: Experience in optimizing performance for big data tools and pipelines - Big Advantage.
Security Expertise: Experience in identifying and addressing security vulnerabilities within the data platform - Big Advantage.
CI/CD Pipelines: Experience in designing, implementing, and maintaining Continuous Integration/Continuous Deployment (CI/CD) pipelines - Advantage.
Data Pipelines: Experience in building big data pipelines - Advantage.
This position is open to all candidates.
 
Job ID: 8114396

Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Data Engineer
Main responsibilities:
Provide the direction of our data architecture. Determine the right tools for the right jobs. We collaborate on the requirements and then you call the shots on what gets built.
Manage end-to-end execution of high-performance, large-scale data-driven projects, including design, implementation, and ongoing maintenance.
Optimize and monitor the team-related cloud costs.
Design and construct monitoring tools to ensure the efficiency and reliability of data processes.
Implement CI/CD for Data Workflows
Requirements:
5+ years of experience in data engineering and big data at large scale - Must
Extensive experience with modern data stack - Must:
Snowflake, Delta Lake, Iceberg, BigQuery, Redshift
Kafka, RabbitMQ, or similar for real-time data processing.
Pyspark, Databricks
Strong software development background with Python/OOP and hands-on experience in building large-scale data pipelines. - Must
Hands-on experience with Docker and Kubernetes. - Must
Expertise in ETL development, data modeling, and data warehousing best practices.
Knowledge of monitoring & observability tools (Datadog, Prometheus, ELK, etc.)
Experience with infrastructure as code, deployment automation, and CI/CD practices using tools such as Helm, ArgoCD, Terraform, GitHub Actions, and Jenkins.
Our stack: Azure, GCP, Databricks, Snowflake, Airflow, Spark, Kafka, Kubernetes, Neo4J, AeroSpike, ELK, DataDog, Micro-Services, Python, SQL
Your stack: Proven strong back-end software engineering skills, the ability to think for yourself and challenge common assumptions, a commitment to high-quality execution, and a collaborative mindset.
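As a loose illustration of the streaming side of this stack (Kafka into a data lake via PySpark), the sketch below shows a minimal Structured Streaming job. The broker, topic, schema, and output paths are hypothetical, and the Kafka source assumes the spark-sql-kafka package is available on the cluster.

```python
# Illustrative sketch: consume JSON events from Kafka and land them as partitioned Parquet.
# Broker, topic, schema, and paths are placeholders; requires the spark-sql-kafka package.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("occurred_at", TimestampType()),
])

# Read the raw Kafka stream and parse the JSON payload into typed columns.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")   # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Write the parsed stream to the lake, partitioned by event type.
query = (
    events.writeStream
    .format("parquet")
    .option("checkpointLocation", "/mnt/checkpoints/events")  # placeholder path
    .partitionBy("event_type")
    .start("/mnt/lake/events")                                # placeholder path
)
query.awaitTermination()
```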
This position is open to all candidates.
 
Job ID: 8114405

07/04/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking to hire a talented, self-driven and passionate Senior Infrastructure Engineer to build and maintain the cloud infrastructure for our highly available SaaS application as well as our machine learning and data engineering stack.

As a Senior Infrastructure Engineer, you will be responsible for designing, implementing, and maintaining the cloud infrastructure and DevOps processes that power our products and internal tooling. You will work closely with all data and development teams and lead the company's security and compliance efforts. You will ensure a highly reliable, scalable, and secure infrastructure that supports our rapid growth and product innovation, while maintaining observability and cost-effectiveness of our cloud resources and data.

Responsibilities:
Cloud Infrastructure Management: Architect, deploy, and manage our cloud infrastructure (AWS), ensuring high availability, scalability, and security.
Software Engineering: Be a top-notch software engineer, bringing your coding, architecture, and research skills to our infra stack.
Infrastructure as Code (IaC): Define and maintain infrastructure using tools like Terraform, CloudFormation, or Pulumi to manage resources efficiently and reproducibly.
Monitoring & Incident Management: Build and manage monitoring and alerting systems to ensure uptime, and respond to incidents with root cause analysis and remediation.
DevOps & Automation: Implement and maintain CI/CD pipelines to streamline development workflows and automate deployment processes across development, staging, and production environments, and across different parts of our solution. While our development teams are expected to write and maintain their own CI, you will act as a supervisor and professional authority, and maintain cross-team and complex automation.
Collaboration and technical leadership: Partner with software engineers, data engineers, and machine learning teams to support their infrastructure needs and guide the evolution of our infrastructure team.
Cost Optimization: Monitor cloud spend and optimize resources to ensure cost-effective infrastructure without sacrificing performance or security.
Security & Compliance: Implement security best practices, including access control, network security, monitoring and ensuring the infrastructure is compliant with relevant industry standards (e.g., SOC2, GDPR).
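As a small, hypothetical taste of the Infrastructure as Code responsibility above, here is what a Pulumi program in Python might look like (Pulumi is one of the tools the posting names; the resource choice, names, and tags are illustrative only).

```python
# Hypothetical Pulumi (Python) sketch: declare an S3 bucket as code.
# Resource name, tags, and the use of Pulumi rather than Terraform are illustrative.
import pulumi
import pulumi_aws as aws

# A bucket that could serve as a data lake landing zone (placeholder name/tags).
raw_bucket = aws.s3.Bucket(
    "data-lake-raw",
    tags={"team": "infra", "env": "dev"},
)

# Export the generated bucket name so other stacks or pipelines can reference it.
pulumi.export("raw_bucket_name", raw_bucket.id)
```

An equivalent definition could just as well be written in Terraform or CloudFormation; the point is that infrastructure changes go through code review and CI rather than manual console edits.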
Requirements:
Experience: 5+ years of hands-on experience in cloud infrastructure, DevOps and platform engineering in production environments.
Cloud Platforms and IaC: Expertise in managing cloud infrastructure on at least one of the major providers: AWS, GCP, Azure. Proficient in Infrastructure as Code tools such as Terraform, CloudFormation, or Pulumi.
Containerization & Orchestration: Solid experience with Docker and Kubernetes.
Monitoring & Logging: Hands-on experience with monitoring tools (Prometheus, Grafana) and logging systems (ELK, Splunk, or equivalent).
Software Engineering: Proficient Software engineering, architecture, as well as scripting languages such as Python, Bash, or Go. Full control of version control systems such as Git.
DevOps Tools: Strong experience with CI/CD pipelines and automation using Jenkins, CircleCI, GitHub Actions, GitLab CI, or similar.
Networking: Strong understanding of cloud networking, VPNs, VPCs, DNS, and firewalls.
Security Best Practices: Experience implementing cloud security best practices, including IAM, encryption, and key management.
Startup Experience: Previous experience in a fast-paced startup environment, where adaptability and hands-on execution are key.
Team Player: Strong communication skills and ability to work cross-functionally with different teams.

Advantages:
ML Infrastructure: Experience supporting machine learning pipelines and deploying ML models to production environments.
Data Engineering: Familiarity with data engineering tools like Apache Spark, Airflow, or similar.
This position is intended for women and men alike.
 
Job ID: 8131896

Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a highly skilled DevOps Engineer to join the TLV Foundation Services Team within the BDC organization. In this role, you will be responsible for designing, implementing, and maintaining scalable, highly available production systems. You will collaborate with developers and global infrastructure teams to ensure seamless deployment, automation, and system reliability.
What You'll Do:
System Reliability & Scalability: Build and maintain highly available, scalable, and resilient production systems.
Automation & Infrastructure as Code: Develop and manage infrastructure using Terraform and other IaC tools.
Cloud Infrastructure Management: Configure and manage AWS, Azure, or similar cloud platforms to optimize performance and cost efficiency.
Containerization & Orchestration: Work extensively with K8s and Istio to manage containerized environments.
CI/CD Pipelines: Design, build, and maintain CI/CD automation using Jenkins, GitHub, or similar tools.
Monitoring & Logging: Implement and manage observability tools for monitoring, logging, and metrics collection in large-scale production environments.
Incident Management: Rapidly identify and resolve production issues, ensuring minimal downtime.
Security & Compliance: Implement best practices for security, access control, and compliance within cloud and on-prem environments.
Collaboration: Work closely with developers and infrastructure teams to streamline deployment, automation, and operations.
Support & Maintenance: Provide on-call support as needed to ensure system reliability.
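For flavor only: the Kubernetes and incident-management items above often involve small automation scripts like the following sketch, which uses the official kubernetes Python client to list pods that are not healthy. The helper name and the choice of phases to flag are assumptions, not requirements from the posting.

```python
# Hypothetical helper: report pods that are not Running/Succeeded, as a building block
# for incident triage or alerting. Uses the official `kubernetes` Python client.
from kubernetes import client, config

def unhealthy_pods() -> list[str]:
    # Load credentials from the local kubeconfig (use load_incluster_config() in-cluster).
    config.load_kube_config()
    v1 = client.CoreV1Api()

    report = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            report.append(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
    return report

if __name__ == "__main__":
    for line in unhealthy_pods():
        print(line)
```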
Requirements:
3+ years of DevOps experience in a cloud-based production environment.
Strong expertise in Docker, K8s, and containerized application management.
Experience with AWS, Azure, or similar cloud platforms.
Hands-on experience with Infrastructure as Code (IaC) tools like Terraform.
Proficiency in CI/CD automation, including Jenkins, GitHub Actions, or similar
Knowledge of monitoring and logging tools
Strong scripting skills in Bash and experience with programming languages like Python or Java.
Experience with GitOps methodologies.
Familiarity with Istio and service mesh architectures.
Bonus Points For:
Hands-on experience managing data lakes or data warehouses.
Prior experience in Enterprise environments.
Strong problem-solving abilities and a passion for learning new technologies.
Ability to thrive in a fast-paced, dynamic environment and tackle challenges head-on.
This position is open to all candidates.
 
Job ID: 8120388

06/04/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
A growing tech company in the automotive space with hubs across the US and Israel. Our mission is to constantly disrupt the industry by creating groundbreaking technologies to help dealers build stronger, more resilient businesses. Our work happens in the fast lane as we bring AI-powered, data-driven solutions to a quickly evolving industry.

Our team consists of curious and creative individuals who are always looking to achieve the impossible. We are bold, collaborative, and goal-driven, and at our core, we believe every voice has value and can impact our bottom line.
If you are a creative, solutions-oriented individual who is ready to put your career in drive, this is the place for you!

We are looking for an experienced Data Engineering Tech Lead to join our team and make a real impact! In this hands-on role, you will drive the architecture, development, and optimization of our Data infrastructure, ensuring scalable and high-performance data solutions that support analytics, AI, and business intelligence needs. You will collaborate closely with analysts, Product, DevOps, and software engineers to cultivate a robust data ecosystem.

This position will report to the CISO and can be based out of Jerusalem or Tel-Aviv.

What you will be responsible for
Lead the design, implementation, and maintenance of our DWH & Data Lake architecture to support both analytical and operational use cases.
Develop scalable ETL/ELT pipelines for ingestion, transformation, and optimization of structured and unstructured data.
Ensure data quality, governance, and security throughout the entire data lifecycle.
Optimize performance and cost-efficiency of data storage, processing, and retrieval.
Work closely with BI and analytics teams to guarantee seamless data integration with visualization tools.
Collaborate with stakeholders (BI teams, Product, and Engineering) to align data infrastructure with business needs.
Mentor and guide analysts, fostering a culture of best practices and professionalism.
Stay updated with industry trends and evaluate new technologies for continuous improvement.
Requirements:
5+ years of experience in data engineering, with at least 2-3 years of experience in a Tech Lead role.
At least 3 years of hands-on experience with AWS, including services like S3, Redshift, Glue, Athena, Lambda, and RDS.
Expertise in DWH & Data Lake architectures, including columnar databases, data partitioning, and lakehouse concepts.
Strong experience with cloud data solutions like Redshift, Snowflake, BigQuery, or Databricks.
Proficiency in ETL/ELT tools (e.g., dbt, Apache Airflow, Glue, Dataflow).
Deep knowledge of SQL & Python for data processing and transformation.
Experience working with BI and visualization tools such as Power BI, Tableau, Looker, or similar.
Experience with real-time data streaming (Kafka, Kinesis, Pub/Sub) and batch processing.
Understanding of data modeling (Star/Snowflake), data governance, and security best practices.
Experience with CI/CD, infrastructure-as-code (Terraform, CloudFormation), and DevOps for data.
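As an illustrative (not prescriptive) fragment of the AWS-centric work listed above, here is how a pipeline step might run an Athena query over the data lake with boto3. The region, database, table, query, and S3 result location are placeholders.

```python
# Hypothetical sketch: run an Athena query and wait for it to finish.
# Region, database, table, and result bucket are placeholders.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

def run_query(sql: str) -> str:
    """Start an Athena query, block until it completes, and return the execution id."""
    execution = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "analytics"},                        # placeholder DB
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # placeholder bucket
    )
    query_id = execution["QueryExecutionId"]

    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query {query_id} ended in state {state}")
    return query_id

if __name__ == "__main__":
    run_query("SELECT dealer_id, COUNT(*) AS events FROM raw_events GROUP BY dealer_id")
```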
The personal competencies you need to have:
Excellent communication skills and the ability to work as a team.
Strong sense of ownership, urgency, and drive.
Ability to take the initiative, come up with ideas and solutions, and execute them with a "getting things done" attitude.
Ability to work independently and manage tight deadlines.
This position is open to all candidates.
 
Job ID: 8129541

27/03/2025
Confidential company
Location: Tel Aviv-Yafo and Netanya
Job Type: Full Time
We are looking for an experienced Data Engineer with expertise in cloud technologies, big data, and distributed systems to join our data engineering team.
This role requires the experience and skills to design and build key components and infrastructure for our global data teams (data engineering, BI, data science), including designing, building, and maintaining streaming data pipelines and data lake architectures, with hands-on expertise in technologies like Apache Spark, Kafka, and cloud-based data lake implementations.
As a Data Engineer you will:
Build infrastructure that empowers our engineers, data scientists, and BI teams to follow data processing best practices
Work in a high-volume production environment
Develop and manage ETL/ELT processes for structured and unstructured data
Collaborate with colleagues both locally and in remote locations
Influence the software architecture and working procedures for building data and analytics
Ensure data quality, integrity, and security within the data pipeline and data lake
Monitor, troubleshoot, and optimize data workflows to improve performance and reliability.
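Purely as an illustration of the streaming-pipeline work described above, the sketch below publishes JSON events to Kafka using the kafka-python client. The client choice, broker address, and topic name are assumptions for the example, not requirements from the posting.

```python
# Hypothetical sketch: publish JSON events to a Kafka topic with kafka-python.
# Broker address and topic name are placeholders.
import json
import time

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka:9092",                              # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),    # serialize dicts as JSON
)

def emit(event_type: str, payload: dict) -> None:
    # Attach a timestamp and send; send() is asynchronous, flush() forces delivery.
    producer.send("events", {"type": event_type, "ts": time.time(), **payload})

if __name__ == "__main__":
    emit("page_view", {"user_id": "u-123"})
    producer.flush()
```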
Requirements:
4+ years in Data/Backend engineering with experience in designing, developing and optimizing streaming data pipelines using Apache Spark, Kafka, or similar technologies.
Dealing with data on high volume, high availability production systems
Practical experience with Python in the domain of data pipelines.
Experience with cloud-based data lake architectures (AWS S3, Google Cloud Storage).
Exposure to DevOps practices, CI/CD pipelines, and infrastructure as code.
Excellent problem-solving skills and the ability to work in a collaborative environment.
This position is open to all candidates.
 
Job ID: 8118203

28/03/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
About the Role
Appdome is building a new Data department, and we're looking for a skilled Data Engineer to help shape our data infrastructure. If you thrive in fast-paced environments, take ownership, and enjoy working on scalable data solutions, this role is for you. You'll have the opportunity to grow, influence key decisions, and collaborate with security experts and product teams.
What You'll Do:
* Design, build, and maintain scalable data pipelines, ETL processes, and data infrastructure.
* Optimize data storage and retrieval for structured and unstructured data.
* Integrate data solutions into Appdome's products in collaboration with software engineers, security experts, and data scientists.
* Apply DevOps best practices (CI/CD, infrastructure as code, observability) for efficient data processing.
* Work with AWS (EC2, Athena, RDS) and ElasticSearch for data indexing and retrieval.
* Optimize and maintain SQL and NoSQL databases.
* Utilize Docker and Kubernetes for containerization and orchestration.
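To ground the Elasticsearch and data-retrieval items above, here is a minimal sketch with the official Python client (the 8.x API is assumed); the host, index name, and document shape are placeholders invented for the example.

```python
# Hypothetical sketch: index a document and run a simple search using the
# elasticsearch-py 8.x client. Host, index, and fields are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder host

# Index a single illustrative event document.
es.index(
    index="threat-events",
    document={"app_id": "demo-app", "event": "jailbreak_detected", "severity": "high"},
)

# Search for high-severity events on that app.
resp = es.search(
    index="threat-events",
    query={"bool": {"must": [
        {"match": {"app_id": "demo-app"}},
        {"match": {"severity": "high"}},
    ]}},
)
for hit in resp["hits"]["hits"]:
    print(hit["_source"])
```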
Requirements:
* B.Sc. in Computer Science, Data Engineering, or a related field.
* 3+ years of hands-on experience in large-scale data infrastructures.
* Strong Python programming, with expertise in PySpark and Pandas.
* Deep knowledge of SQL and NoSQL databases, including performance optimization.
* Experience with ElasticSearch and AWS cloud services.
* Solid understanding of DevOps practices, Big Data tools, Git, and Jenkins.
* Familiarity with microservices and event-driven design.
* Strong problem-solving skills and a proactive, independent mindset.
Advantages:
* Experience with LangChain, ClickHouse, DynamoDB, Redis, and Apache Kafka.
* Knowledge of Metabase for data visualization.
* Experience with RESTful APIs and Node.js.
Talent We Are Looking For:
* Independent & Self-Driven - Comfortable building from the ground up.
* Growth-Oriented - Eager to develop professionally and take on leadership roles.
* Innovative - Passionate about solving complex data challenges.
* Collaborative - Strong communicator who works well with cross-functional teams.
* Adaptable - Thrives in a fast-paced, dynamic environment with a can-do attitude.
About the Company:
Appdome's mission is to protect every mobile app worldwide and its users. We provide mobile brands with the only patented, centralized, data-driven Mobile Cyber Defense Automation platform. Our platform delivers rapid no-code, no-SDK mobile app security, anti-fraud, anti-malware, anti-cheat, and anti-bot implementations, configuration-as-code ease, Threat-Events threat-aware UI/UX control, ThreatScope Mobile XDR, and Certified Secure DevSecOps Certification in one integrated system. With Appdome, mobile developers, cyber, and fraud teams can accelerate delivery, guarantee compliance, and leverage automation to build, test, release, and monitor the full range of cyber, anti-fraud, and other defenses needed in mobile apps from within mobile DevOps and CI/CD pipelines. Leading financial, healthcare, m-commerce, consumer, and B2B brands use Appdome to upgrade mobile DevSecOps and protect Android & iOS apps, mobile customers, and businesses globally. Today, Appdome's customers use our platform to secure over 50,000 mobile apps, with protection projected for over 1 billion mobile end users.
Appdome is an Equal Opportunity Employer. We are committed to diversity, equity, and inclusion in our workplace. We do not discriminate based on race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, veteran status, or any other characteristic protected by law. All qualified applicants will receive consideration for employment without regard to any of these characteristics.
This position is open to all candidates.
 
Job ID: 8118270

07/04/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a highly experienced DevOps Engineering Manager to lead our team. The DevOps Engineering Manager will be responsible for leading and shaping the implementation of DevOps practices across our organization. This role focuses on strategic automation, continuous integration and delivery, infrastructure as code, and DevOps culture adoption. The ideal candidate will possess a strong background in software development, cloud architecture, and system administration, with exceptional leadership and collaboration skills to drive cross-functional initiatives.
Lead and manage a team of DevOps engineers, providing mentorship, guidance, and support.
Develop and implement advanced DevOps strategies and practices to enhance efficiency and reliability.
Collaborate with software engineering teams to integrate DevOps practices into the development process.
Architect, build and maintain deployment pipelines and automation tools for software releases
Oversee the implementation and support of system and application security measures.
Develop and maintain infrastructure as code using tools like Terraform.
Monitor and troubleshoot production systems and implement automated remediation techniques
Develop and maintain documentation for infrastructure, processes, and procedures
Stay abreast of emerging technologies and trends in DevOps, cloud computing, and software development, and drive their adoption as appropriate.
Requirements:
7+ years of experience in DevOps or related fields, with experience in a leadership role.
Proven experience with container orchestration technologies like Docker and Kubernetes or Swarm.
Extensive experience with cloud computing platforms like AWS, Azure, or Google Cloud (Google Cloud is an advantage).
Experience with configuration management tools like Terraform, Ansible, Puppet, or Chef.
Experience with package managers for Kubernetes tools like Helm, Kustomize, and Kompose (Helm an advantage).
Advanced knowledge of CI/CD tools like GitLab CI/CD, Circle CI, Jenkins, or Argo CD.
Experience with database administration and management.
Advanced Bash scripting skills.
Proficiency in at least one programming language, such as Python or Go (a must).
Experience with monitoring and logging tools like Prometheus, Grafana, or Datadog.
Experience with microservices architecture, API gateways, or Reverse Proxy such as NGINX (an advantage).
Excellent communication and interpersonal skills, with the ability to influence and drive cross-functional initiatives.
Demonstrated leadership experience, including mentoring and guiding team members.
This position is open to all candidates.
 
Job ID: 8130969

Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a Senior DevOps Engineer to collaboratively design, maintain, and expand cloud infrastructure with an automation-first mindset. As an active participant throughout the software development lifecycle, individuals in this role consider security, efficiency, and sustainability with every decision and are prepared to navigate a dynamic technological and business landscape.

Your Responsibilities:

Design, implement, and maintain cloud infrastructure using best practices on platforms such as AWS.
Develop and maintain CI/CD pipelines for automated build, test, and deployment processes
Collaborate with development teams to optimize application performance, scalability, and reliability
Automate infrastructure provisioning, configuration management, and monitoring using tools like Terraform
Implement and maintain monitoring, logging, and alerting solutions to ensure system health and performance
Troubleshoot issues across the entire stack, from network and infrastructure to application code
Continuously evaluate and implement improvements to streamline processes and increase efficiency
Ensure compliance with security standards and best practices in all aspects of infrastructure and deployment processes.
Mentor junior members of the team and provide technical guidance and support.
Innovate beyond standard operational tasks by actively designing how they are executed.
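As a tiny, hypothetical example of the monitoring and alerting responsibility above, a service can expose its own metrics for Prometheus to scrape using the prometheus_client library; the metric names and port below are illustrative only.

```python
# Hypothetical sketch: expose application metrics on an HTTP endpoint that
# Prometheus can scrape. Metric names and port are placeholders.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS_TOTAL = Counter("app_requests_total", "Total requests handled")
QUEUE_DEPTH = Gauge("app_queue_depth", "Items currently waiting in the queue")

if __name__ == "__main__":
    start_http_server(8000)                        # metrics served at :8000/metrics
    while True:
        REQUESTS_TOTAL.inc()                       # simulate handling a request
        QUEUE_DEPTH.set(random.randint(0, 50))     # simulate a queue depth reading
        time.sleep(1)
```

Alerting rules and dashboards would then be built on top of the scraped series.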
Requirements:
Minimum of 5 years of experience in DevOps, with a proven track record of designing and implementing scalable and reliable infrastructure.
In-depth knowledge of cloud computing platforms such as AWS, Azure, or GCP, including services like EC2, S3, RDS, VPC, etc.
Strong experience with containerization and orchestration technologies such as Docker, Kubernetes, or ECS.
Proficiency in scripting and programming languages such as Python, Shell, or Go.
Experience with CI/CD tools such as Jenkins, GitLab CI/CD, or CircleCI.
Solid understanding of networking concepts, security best practices, and infrastructure-as-code principles.
Excellent problem-solving skills and the ability to troubleshoot complex issues across multiple systems.
Strong communication and collaboration skills, with the ability to work effectively in a fast-paced, dynamic environment.
Outstanding communication skills (both oral and written) in Hebrew and English.
This position is open to all candidates.
 
Job ID: 8133348

Posted 4 days ago
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Data Engineer (Analytics)
Tel Aviv
As a Senior Data Engineer, you will play a key role in shaping and driving our analytics data pipelines and solutions to empower business insights and decisions. Collaborating with a variety of stakeholders, you will design, develop, and optimize scalable, high-performance data analytics infrastructures using modern tools and technologies. Your work will ensure data is accurate, timely, and actionable for critical decision-making.
Key Responsibilities:
Lead the design, development, and maintenance of robust data pipelines and ETL processes, handling diverse structured and unstructured data sources.
Collaborate with data analysts, data scientists, product engineers and product managers to deliver impactful data solutions.
Architect and maintain the infrastructure for ingesting, processing, and managing data in the analytics data warehouse.
Develop and optimize analytics-oriented data models to support business decision-making.
Champion data quality, consistency, and governance across the analytics layer.
What your day might look like:
Leading the design and implementation of scalable data pipelines to support analytical workloads.
Collaborating with stakeholders to gather requirements, propose solutions, and align on data strategies.
Writing and optimizing ETL processes to ensure seamless integration of new data sources.
Designing analytics-focused data modeling solutions tailored for strategic decision-making.
Troubleshooting data issues and implementing measures to improve system reliability and accuracy.
Sharing knowledge and mentoring team members to promote a culture of learning and excellence.
This is an opportunity to make a significant impact by enabling data-driven decision-making at scale while growing your career in a dynamic, collaborative environment.
Requirements:
5+ years of experience as a Data Engineer or in a similar role.
Expertise in SQL and proficiency in Python for data engineering tasks.
Proven experience designing and implementing analytics-focused data models and warehouses.
Hands-on experience with data pipelines and ETL/ELT frameworks (e.g., Airflow, Luigi, AWS Glue, dbt).
Strong experience with cloud data services (e.g., AWS, GCP, Azure).
A deep passion for data and a strong analytical mindset with attention to detail.
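For illustration of the ETL/ELT and analytics-modeling requirements above, here is a minimal, hypothetical ELT step that recomputes and upserts a daily aggregate into a warehouse table with SQLAlchemy. The DSN, schema, table names, and the assumption of a unique constraint on the day column are all placeholders.

```python
# Hypothetical sketch: an idempotent ELT step that upserts a daily revenue aggregate.
# DSN and table names are placeholders; the upsert assumes a unique constraint on
# analytics.daily_revenue(day).
from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://user:pass@warehouse:5432/analytics")  # placeholder DSN

DAILY_REVENUE_SQL = """
    INSERT INTO analytics.daily_revenue (day, revenue)
    SELECT date_trunc('day', ordered_at) AS day, SUM(amount) AS revenue
    FROM raw.orders
    WHERE ordered_at >= :since
    GROUP BY 1
    ON CONFLICT (day) DO UPDATE SET revenue = EXCLUDED.revenue
"""

def load_daily_revenue(since: str) -> None:
    # engine.begin() opens a transaction and commits on success.
    with engine.begin() as conn:
        conn.execute(text(DAILY_REVENUE_SQL), {"since": since})

if __name__ == "__main__":
    load_daily_revenue("2025-01-01")
```

In a production setting the same logic would usually live inside an orchestrator such as Airflow or a dbt incremental model, per the frameworks listed above.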
Bonus points:
Strong understanding of business metrics and how to translate data into actionable insights
Experience with data visualization tools (e.g., Tableau, Power BI, Looker)
Familiarity with data governance and data quality best practices
Excellent communication skills to work with cross-functional teams including data analysts, data scientists, and product managers.
This position is open to all candidates.
 
Job ID: 8153675