Jobs » Data » Senior Data Engineer

3 hours ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Join a team of skilled data engineers building sophisticated data pipelines connecting a variety of systems through streaming technologies, cloud services, and microservices.

As a Senior Data Engineer, you'll play a key role in shaping the infrastructure powering our data ecosystem. You'll design, build, and maintain scalable data pipelines and automation processes, enabling reliable, efficient, and observable systems.

This is a hands-on role that combines infrastructure, data, and DevOps expertise - perfect for someone who thrives on learning new technologies, leading initiatives, and driving excellence in system design and delivery.

Responsibilities:
Design and maintain robust infrastructure for large-scale data processing and streaming systems.
Develop automation and deployment processes using CI/CD pipelines.
Build and operate Kubernetes-based environments and containerized workloads.
Collaborate with data engineers to optimize performance, cost, and reliability of data platforms.
Design and develop REST-API microservices.
Troubleshoot and resolve complex issues in production and staging environments.
Drive initiatives that enhance observability, scalability, and developer productivity.
Lead by example - share knowledge, mentor teammates, and promote technical best practices.
Requirements:
5 years of experience as a Data Engineer, Backend Developer, or DevOps Engineer.
5+ years of experience with Python/Java microservices (Flask, Spring, Dropwizard) and component testing.
Deep understanding of Kubernetes, Docker, and container orchestration.
Hands-on experience with CI/CD pipelines (e.g., Jenkins, GitHub Actions).
Proven experience with Snowflake, MySQL, RDS, or similar databases.
Familiarity with streaming systems (e.g., Kafka, Firehose), databases, or data pipelines.
Self-learner, proactive, and passionate about improving systems and automation.
Strong communication skills and a collaborative, team-oriented mindset.

Advantages:
Experience with Kafka, Airflow or other data processing tools.
Knowledge of Terraform, Pulumi, or other IaC frameworks.
Familiarity with Datadog, Prometheus or other observability tools.
Experience with AWS (Lambda, EKS, EC2, Step Functions, SQS).
Working with or building AI-driven tools.
This position is open to all candidates.
 
1 day ago
Elad Systems is hiring
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Data Infrastructure Engineer, Elad Systems | Data Division
Elad Systems' Data Division is seeking a skilled Data Engineer to join our growing team, supporting end-to-end AI-driven projects for top-tier clients across industries.
What You'll Do:
Design and build robust, scalable data pipelines for AI/ML applications
Work with modern cloud environments (AWS/Azure/GCP), leveraging tools like Airflow, Kafka, EC2 and Kubernetes
Develop Infrastructure as Code (IaC) with Terraform or equivalent
Support data lakes and DWHs (e.g., Snowflake, BigQuery, Redshift)
Ensure data quality, observability, and system reliability
Requirements:
What You Bring:
3+ years of experience in data engineering or backend development
Proficiency in Python
Hands-on experience with streaming, orchestration, and cloud-native tools
Strong problem-solving skills and an independent, delivery-focused mindset
Experience with Docker and CI/CD platforms
A solid background in data warehousing technologies such as Snowflake, Databricks, Redshift, or BigQuery
Advantage: Experience supporting ML pipelines or AI product development


Why Elad:
Work on diverse, high-impact AI projects in a dynamic and collaborative environment
Access to learning programs, career growth paths, and cutting-edge tech
Hybrid work model and flexible conditions
Join us and help shape the data foundations of tomorrow's AI solutions.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for an experienced and passionate Staff Data Engineer to join our Data Platform group in TLV as a Tech Lead. As the group's Tech Lead, you'll shape and implement the technical vision and architecture while staying hands-on across three specialized teams: Data Engineering Infra, Machine Learning Platform, and Data Warehouse Engineering, forming the backbone of our data ecosystem.

The group's mission is to build a state-of-the-art Data Platform that drives us toward becoming the most precise and efficient insurance company on the planet. By embracing Data Mesh principles, we create tools that empower teams to own their data while leveraging a robust, self-serve data infrastructure. This approach enables Data Scientists, Analysts, Backend Engineers, and other stakeholders to seamlessly access, analyze, and innovate with reliable, well-modeled, and queryable data, at scale.

In this role you'll:

Technically lead the group by shaping the architecture, guiding design decisions, and ensuring the technical excellence of the Data Platforms three teams.

Design and implement data solutions that address both applicative needs and data analysis requirements, creating scalable and efficient access to actionable insights.

Drive initiatives in Data Engineering Infra, including building robust ingestion layers, managing streaming ETLs, and guaranteeing data quality, compliance, and platform performance.

Develop and maintain the Data Warehouse, integrating data from various sources for optimized querying, analysis, and persistence, supporting informed decision-making.

Leverage data modeling and transformations to structure, cleanse, and integrate data, enabling efficient retrieval and strategic insights.

Build and enhance the Machine Learning Platform, delivering infrastructure and tools that streamline the work of Data Scientists, enabling them to focus on developing models while benefiting from automation for production deployment, maintenance, and improvements. Support cutting-edge use cases like feature stores, real-time models, point-in-time (PIT) data retrieval, and telematics-based solutions.

Collaborate closely with other Staff Engineers across the company to align on cross-organizational initiatives and technical strategies.

Work seamlessly with Data Engineers, Data Scientists, Analysts, Backend Engineers, and Product Managers to deliver impactful solutions.

Share knowledge, mentor team members, and champion engineering standards and technical excellence across the organization.
Requirements:
8+ years of experience in data-related roles such as Data Engineer, Data Infrastructure Engineer, BI Engineer, or Machine Learning Platform Engineer, with significant experience in at least two of these areas.

A B.Sc. in Computer Science or a related technical field (or equivalent experience).

Extensive expertise in designing and implementing Data Lakes and Data Warehouses, including strong skills in data modeling and building scalable storage solutions.

Proven experience in building large-scale data infrastructures, including both batch processing and streaming pipelines.

A deep understanding of Machine Learning infrastructure, including tools and frameworks that enable Data Scientists to efficiently develop, deploy, and maintain models in production, an advantage.

Proficiency in Python, Pulumi/Terraform, Apache Spark, AWS, Kubernetes (K8s), and Kafka for building scalable, reliable, and high-performing data solutions.

Strong knowledge of databases, including SQL (schema design, query optimization) and NoSQL, with a solid understanding of their use cases.

Ability to work in an office environment a minimum of 3 days a week.

Enthusiasm about learning and adapting to the exciting world of AI; a commitment to exploring this field is a fundamental part of our culture.
This position is open to all candidates.
 
11/09/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Engineer.
What You'll Do:

Shape the Future of Data - Join our mission to build the foundational pipelines and tools that power measurement, insights, and decision-making across our product, analytics, and leadership teams.
Develop the Platform Infrastructure - Build the core infrastructure that powers our data ecosystem including the Kafka events-system, DDL management with Terraform, internal data APIs on top of Databricks, and custom admin tools (e.g. Django-based interfaces).
Build Real-time Analytical Applications - Develop internal web applications to provide real-time visibility into platform behavior, operational metrics, and business KPIs, integrating data engineering with user-facing insights.
Solve Meaningful Problems with the Right Tools - Tackle complex data challenges using modern technologies such as Spark, Kafka, Databricks, AWS, Airflow, and Python. Think creatively to make the hard things simple.
Own It End-to-End - Design, build, and scale our high-quality data platform by developing reliable and efficient data pipelines. Take ownership from concept to production and long-term maintenance.
Collaborate Cross-Functionally - Partner closely with backend engineers, data analysts, and data scientists to drive initiatives from both a platform and business perspective. Help translate ideas into robust data solutions.
Optimize for Analytics and Action - Design and deliver datasets in the right shape, location, and format to maximize usability and impact, whether that's through lakehouse tables, real-time streams, or analytics-optimized storage.
You will report to the Data Engineering Team Lead and help shape a culture of technical excellence, ownership, and impact.
Requirements:
5+ years of hands-on experience as a Data Engineer, building and operating production-grade data systems.
3+ years of experience with Spark, SQL, Python, and orchestration tools like Airflow (or similar).
Degree in Computer Science, Engineering, or a related quantitative field.
Proven track record in designing and implementing high-scale ETL pipelines and real-time or batch data workflows.
Deep understanding of data lakehouse and warehouse architectures, dimensional modeling, and performance optimization.
Strong analytical thinking, debugging, and problem-solving skills in complex environments.
Familiarity with infrastructure as code, CI/CD pipelines, and building data-oriented microservices or APIs.
Enthusiasm for AI-driven developer tools such as Cursor.AI or GitHub Copilot.
This position is open to all candidates.
 
10/09/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking a skilled Data/Backend Engineer to design and implement complex, high-scale systems that retrieve, process, and analyze data from the digital world. This role involves developing and maintaining backend infrastructure, creating robust data pipelines, and ensuring the seamless operation of data-driven products and services.
Key Responsibilities:
- Design and build high-scale systems and services to support data infrastructure and production systems.
- Develop and maintain data processing pipelines using technologies such as PySpark, Hadoop, and Databricks.
- Implement dockerized high-performance microservices and manage their deployment.
- Monitor and debug backend systems and data pipelines to identify and resolve bottlenecks and failures.
- Work collaboratively with data scientists, analysts, and other engineers to develop and maintain data-driven solutions.
- Ensure data is ingested correctly from various sources and is processed efficiently.
Requirements:
- BSc degree in Computer Science or equivalent practical experience.
- 4+ years of server-side software development experience in languages such as Python, Java, Scala, or Go.
- Experience with Big Data technologies like Hadoop, Spark, Databricks, and Airflow.
- Familiarity with cloud environments such as AWS or GCP and containerization technologies like Docker and Kubernetes.
- Strong problem-solving skills and ability to learn new technologies quickly.
- Excellent communication skills and ability to work in a team-oriented environment.
Nice to Have:
- Experience with web scraping technologies.
- Familiarity with Microservices architecture and API development.
- Knowledge of databases like Redis, PostgreSQL, and Firebolt.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
As part of the Data Infrastructure group, you'll help build Lemonade's data platform for our growing stack of products, customers, and microservices.

We ingest our data from our operational DBs, telematics devices, and more, working with several data types (both structured and unstructured). Our challenge is to build tools and infrastructure that empower other teams, leveraging data-mesh concepts.

In this role you'll:
Help build Lemonade's data platform, designing and implementing data solutions for all application requirements in a distributed microservices environment.

Build data-platform ingestion layers using streaming ETLs and Change Data Capture.

Implement pipelines and scheduling infrastructures.

Ensure compliance, data-quality monitoring, and data governance on Lemonade's data platform.

Implement large-scale batch and streaming pipelines with data processing frameworks.

Collaborate with other Data Engineers, Developers, BI Engineers, ML Engineers, Data Scientists, Analysts and Product managers.

Share knowledge with other team members and promote engineering standards.
Requirements:
5+ years of prior experience as a data engineer or data infra engineer.

B.S. in Computer Science or equivalent field of study.

Knowledge of databases (SQL, NoSQL).

Proven success in building large-scale data infrastructures such as Change Data Capture, and leveraging open source solutions such as Airflow & DBT, building large-scale streaming pipelines, and building customer data platforms.

Experience with Python, Pulumi/Terraform, Apache Spark, Snowflake, AWS, K8s, Kafka.

Ability to work in an office environment a minimum of 3 days a week.

Enthusiasm about learning and adapting to the exciting world of AI; a commitment to exploring this field is a fundamental part of our culture.
This position is open to all candidates.
 
3 hours ago
Location: Tel Aviv-Yafo
Job Type: Full Time
The role consists of managing varied data solutions: databases, data processing tools, microservices, CI/CD pipelines, and cloud services on AWS. You will be responsible for designing and maintaining these platforms, gaining first-hand experience with challenges around data, scale, architecture, creative solutions, and high-quality delivery in a broad, welcoming team.

Key Responsibilities:
Carry out DB administrative tasks (performance tuning, cost efficiency, observability, etc.).
Design, build and enhance micro-services (Python / Java).
Promote a data-driven culture with CI/CD pipelines, serverless components, and more.
Create and manage resources with IaC (Infrastructure as Code) methodology.
Support production systems (24/7).
Work with data processing pipelines and external tools (e.g., Airflow).
Requirements:
5 years of experience with DB intricacies (roles, transactions, DB objects, distributed filesystems).
3 years of programming experience (extra points for Python/Java).
Experience with containers (Kubernetes, Images, Docker).
Seasoned team player, great communication skills, technical leading experience (Projects, Initiatives, Mentorship).


Advantages:
DBA, MySQL, PostgreSQL, Oracle, Kafka, SingleStore, Snowflake, Aurora, Redis.
REST API, Flask, Dropwizard, Spring, Invoke, Gradle, remote environments, testing frameworks.
Jenkins, GitHub Actions, Docker, CI/CD tools, Hubot, Artifacts, Ansible, Pulumi, Terraform.
Airflow, Step Functions, Lambda, Data Catalog, Datadog.
Cloud services (AWS extra points).
Working with AI tools.
This position is open to all candidates.
 
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
Required Analytics Engineer
Tel Aviv
Want to shape how data drives product decisions?
As an Analytics Engineer, you'll design the foundations of our data infrastructure, build robust and scalable data models, and empower teams across the organization with actionable insights that fuel our product direction.
As an Analytics Engineer, you will:
Design and implement robust data models to transform raw data into analytics-ready tables, enabling confident decision-making across product and business teams.
Own and maintain our dbt pipelines with a focus on scalability, modularity, and clear documentation.
Continuously evolve our data models to reflect changing business logic and product needs.
Build and maintain comprehensive testing infrastructure to ensure data accuracy and trust.
Monitor the health of our data pipelines, ensuring integrity in event streams and leading resolution of data issues.
Collaborate closely with analysts, data engineers, and product managers to align data architecture with business goals.
Guide the analytics code development process using Git and engineering best practices.
Create dashboards and reports in Tableau that turn insights into action.
Drive performance and cost optimization across our data stack, proactively improving scalability and reliability.
Requirements:
You should apply if you are:
A data professional with 4+ years of experience in analytics engineering, BI development, or similar data roles.
Highly skilled in SQL, with hands-on experience using Snowflake or similar cloud data warehouses.
Proficient in dbt for data transformation, modeling, and documentation.
Experienced with Tableau or similar BI tools for data visualization.
Familiar with CI/CD for data workflows, version control systems (e.g., Git), and testing frameworks.
A strong communicator who can collaborate effectively with both technical and non-technical stakeholders.
Holding a B.Sc. in Industrial Engineering, Computer Science, or a related technical field.
Passionate about translating complex data into clear, scalable insights that drive product innovation.
Bonus points if you have:
Experience with event instrumentation and user behavioral data.
Scripting ability in Python for automation and data processing.
Familiarity with modern data stack tools such as Airflow, Fivetran, Looker, or Segment.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
As a Senior Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects: ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.

You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.



Key Job Responsibilities and Duties:

Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.

Dealing with massive textual sources to train GenAI foundation models.

Solving issues with data and data pipelines, prioritizing based on customer impact.

End-to-end ownership of data quality in our core datasets and data pipelines.

Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.

Providing tools that improve Data Quality company-wide, specifically for ML scientists.

Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.

Acting as an intermediary for problems, with both technical and non-technical audiences.

Promote and drive impactful and innovative engineering solutions

Advance technical, behavioral, and interpersonal competence via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation.

Collaborate with multidisciplinary teams: work with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions. Provide technical guidance and mentorship to junior team members.

Req ID: 20718
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.

Minimum of 6 years of experience as a Data Engineer or a similar role, with a consistent record of successfully delivering ML/Data solutions.

You have built production data pipelines in the cloud, setting up data-lake and serverless solutions; you have hands-on experience with schema design and data modeling, and with working alongside ML scientists and ML engineers to deliver production-level ML solutions.

You have experience designing systems end-to-end and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.).

Strong programming skills in languages such as Python and Java.

Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.

Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems.

Experience with Data Warehousing and ETL/ELT pipelines

Experience in data processing for large-scale language models like GPT, BERT, or similar architectures - an advantage.

Proficiency in data manipulation, analysis, and visualization using tools like NumPy, pandas, and matplotlib - an advantage.

Experience with experimental design, A/B testing, and evaluation metrics for ML models - an advantage.

Experience of working on products that impact a large customer base - an advantage.

Excellent communication in English; written and spoken.
This position is open to all candidates.
 
10/09/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Data Engineer
The opportunity
Join our dynamic Data & ML Engineering team in iAds and play a pivotal role in driving data solutions that empower data science, finance, analytics, and R&D teams. As an Experienced Data Engineer, you'll work with cutting-edge technologies to design scalable pipelines, ensure data quality, and process billions of data points into actionable insights.
Success Indicators:
In the short term, success means delivering reliable, high-performance data pipelines and ensuring data quality across the product. Long-term, you'll be instrumental in optimizing workflows, enabling self-serve analytics platforms, and supporting strategic decisions through impactful data solutions.
Impact:
Your work will directly fuel business decisions, improve data accessibility and reliability, and contribute to the team's ability to handle massive-scale data challenges. You'll help shape the future of data engineering within a global, fast-paced environment.
Benefits and Opportunities
You'll collaborate with talented, passionate teammates, work on exciting projects with cutting-edge technologies, and have opportunities for professional growth. Competitive compensation, comprehensive benefits, and an inclusive culture make this role a chance to thrive and make a global impact.
What you'll be doing
Designing and developing scalable data pipelines and ETL processes to process massive amounts of structured and unstructured data.
Collaborating with cross-functional teams (data science, finance, analytics, and R&D) to deliver actionable data solutions tailored to their needs.
Building and maintaining tools and frameworks to monitor and improve data quality across the product.
Providing tools and insights that empower product teams with real-time analytics and data-driven decision-making capabilities.
Optimizing data workflows and architectures for performance, scalability, and cost efficiency using cutting-edge technologies like Apache Spark and Flink.
Requirements:
4+ years of experience as a Data Engineer.
Expertise in designing and developing scalable data pipelines, ETL processes, and data architectures.
Proficiency in Python and SQL, with hands-on experience in big data technologies like Apache Spark and Hadoop.
Advanced knowledge of cloud platforms (AWS, Azure, or GCP) and their associated data services.
Experience working with Imply and Apache Druid for real-time analytics and query optimization.
Strong analytical skills and ability to quickly learn and adapt to new technologies and tools.
This position is open to all candidates.
 
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Data Engineer, Product Analytics
As a Data Engineer, you will shape the future of people-facing and business-facing products we build across our entire family of applications (Facebook, Instagram, Messenger, WhatsApp, Reality Labs, Threads). Your technical skills and analytical mindset will be utilized designing and building some of the world's most extensive data sets, helping to craft experiences for billions of people and hundreds of millions of businesses worldwide.
In this role, you will collaborate with software engineering, data science, and product management teams to design and build scalable data solutions that optimize growth, strategy, and user experience for our 3 billion plus users, as well as our internal employee community.
You will be at the forefront of identifying and solving some of the most interesting data challenges at a scale few companies can match. By joining us, you will become part of a world-class data engineering community dedicated to skill development and career growth in data engineering and beyond.
Data Engineering: You will guide teams by building optimal data artifacts (including datasets and visualizations) to address key questions. You will refine our systems, design logging solutions, and create scalable data models. Ensuring data security and quality, and with a focus on efficiency, you will suggest architecture and development approaches and data management standards to address complex analytical problems.
Product leadership: You will use data to shape product development, identify new opportunities, and tackle upcoming challenges. You'll ensure our products add value for users and businesses, by prioritizing projects, and driving innovative solutions to respond to challenges or opportunities.
Communication and influence: You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations. You will build credibility through structure and clarity, and be a trusted strategic partner.
Data Engineer, Product Analytics Responsibilities
Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems
Create and contribute to frameworks that improve the efficacy of logging data, while working with data infrastructure to triage and resolve issues
Collaborate with engineers, product managers, and data scientists to understand data needs, representing key data insights visually in a meaningful way
Define and manage Service Level Agreements for all data sets in allocated areas of ownership
Determine and implement the security model based on privacy requirements, confirm safeguards are followed, address data quality issues, and evolve governance processes within allocated areas of ownership
Design, build, and launch collections of sophisticated data models and visualizations that support multiple use cases across different products or domains
Solve our most challenging data integration problems, utilizing optimal Extract, Transform, Load (ETL) patterns, frameworks, query techniques, sourcing from structured and unstructured data sources
Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts
Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts
Influence product and cross-functional teams to identify data opportunities to drive impact.
Requirements:
Minimum Qualifications
Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent
7+ years of experience where the primary responsibility involves working with data. This could include roles such as data analyst, data scientist, data engineer, or similar positions
7+ years of experience with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala, or others).
This position is open to all candidates.
 