Data Engineer
Posted 8 hours ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're seeking our first Data Engineer to join the Revenue Operations team. This is a high-impact role where you'll build the foundations of our data infrastructure - connecting the dots between systems, designing and maintaining our data warehouse, and creating reliable pipelines that bring together all revenue-related data. You'll work directly with the Director of Revenue Operations and partner closely with Sales, Finance, and Customer Success.
This is a chance to shape the role from the ground up and create a scalable data backbone that powers smarter decisions across the company.
Role Overview:
As the Data Engineer, you will own the design, implementation, and evolution of our data infrastructure. You'll connect core business systems (CRM, finance platforms, billing systems) into a central warehouse, ensure data quality, and make insights accessible to leadership and revenue teams. Your success will be measured by the accuracy, reliability, and usability of the data foundation you build.
Key Responsibilities:
Data Infrastructure & Warehousing:
Design, build, and maintain a scalable data warehouse for revenue-related data.
Build ETL/ELT pipelines that integrate data from HubSpot, Netsuite, billing platforms, ACP, and other business tools.
Develop a clear data schema and documentation that can scale as we grow.
Cross-Functional Collaboration:
Work closely with Sales, Finance, and Customer Success to understand their reporting and forecasting needs.
Translate business requirements into data models that support dashboards, forecasting, and customer health metrics.
Act as the go-to partner for data-related questions across revenue teams.
Scalability & Optimization:
Continuously monitor and optimize pipeline performance and warehouse scalability.
Ensure the infrastructure can handle increased data volume and complexity as the company grows.
Establish and enforce best practices for data quality, accuracy, and security.
Evaluate and implement new tools, frameworks, or architectures that improve automation, speed, and reliability.
Build reusable data models and modular pipelines to shorten development time and reduce maintenance.
Requirements:
4-6 years of experience as a Data Engineer or in a similar role (preferably in SaaS, Fintech, or fast-growing B2B companies).
Strong expertise in SQL and data modeling; comfort working with large datasets.
Hands-on experience building and maintaining ETL/ELT pipelines (using tools such as Fivetran, dbt, Airflow, or similar).
Experience designing and managing cloud-based data warehouses (Snowflake, BigQuery, Redshift, or similar).
Familiarity with CRM (HubSpot), ERP/finance systems (Netsuite), and billing platforms.
Strong understanding of revenue operations metrics (ARR, MRR, churn, LTV, CAC, etc.).
Ability to translate messy business requirements into clean, reliable data structures.
Solid communication skills - able to explain technical concepts to non-technical stakeholders.
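The revenue operations metrics named in the requirements above (ARR, MRR, churn, LTV, CAC) follow standard SaaS definitions. As a rough, illustrative sketch only - the subscription records, field names, and figures below are invented:

```python
# Hypothetical illustration of MRR, ARR, and logo churn.
# Records and field names are invented; real data would come from the warehouse.

subscriptions = [
    {"customer": "acme",    "monthly_fee": 500, "active": True},
    {"customer": "globex",  "monthly_fee": 300, "active": True},
    {"customer": "initech", "monthly_fee": 200, "active": False},  # churned this period
]

# MRR: sum of recurring monthly fees across active subscriptions.
mrr = sum(s["monthly_fee"] for s in subscriptions if s["active"])

# ARR: annualized run rate derived from MRR.
arr = mrr * 12

# Logo churn rate: churned customers / customers at the start of the period.
churned = sum(1 for s in subscriptions if not s["active"])
churn_rate = churned / len(subscriptions)

print(mrr, arr, round(churn_rate, 2))  # 800 9600 0.33
```

In practice these would be modeled as warehouse tables (e.g. subscription events) rather than in-memory records, but the definitions carry over directly.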
What Sets You Apart:
You've been the first data hire before and know how to build from scratch (not a must).
Strong business acumen with a focus on revenue operations.
A builder mindset: you like solving messy data problems and making systems talk.
Comfortable working across teams and translating business needs into data solutions.
This position is open to all candidates.
Job ID: 8481826
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for an experienced BI Data Engineer to join our Data team within the Information Systems group.
In this role, you will be responsible for building and maintaining scalable, high-quality data pipelines, models, and infrastructure that support business operations across the entire company, with a primary focus on GTM domains.
You will take ownership of core data architecture components, ensuring data consistency, reliability, and accessibility across all analytical and operational use cases.
Your work will include designing data models, orchestrating transformations, developing internal data applications, and ensuring that business processes are accurately represented in the data.
This role requires a combination of deep technical expertise and strong understanding of business operations.
You will collaborate closely with analysts, domain experts, and engineering teams to translate complex business processes into robust, scalable data solutions. If you are passionate about data architecture, building end-to-end data systems, and solving complex engineering challenges that directly impact the business, we'd love to meet you!
Key Responsibilities:
Design, develop, and maintain end-to-end data pipelines, ensuring scalability, reliability, and performance.
Build, optimize, and evolve core data models and semantic layers that serve as the organization's single source of truth.
Implement robust ETL/ELT workflows using Snowflake, dbt, Rivery, and Python.
Develop internal data applications and automation tools to support advanced analytics and operational needs.
Ensure high data quality through monitoring, validation frameworks, and governance best practices.
Improve and standardize data modeling practices, naming conventions, and architectural guidelines.
Continuously evaluate and adopt new technologies, features, and tooling across the data engineering stack.
Collaborate with cross-functional stakeholders to deeply understand business processes and translate them into scalable technical solutions.
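The data-quality responsibilities above (monitoring and validation frameworks) typically start with simple column-level checks. A minimal stdlib Python sketch - the table rows and column names are hypothetical:

```python
# Minimal not-null and uniqueness checks of the kind a validation framework runs.
# Rows and column names are invented for illustration.

def check_not_null(rows, column):
    """Return indices of rows where `column` is missing or None."""
    return [i for i, r in enumerate(rows) if r.get(column) is None]

def check_unique(rows, column):
    """Return values of `column` that appear more than once."""
    seen, dupes = set(), set()
    for r in rows:
        v = r.get(column)
        if v in seen:
            dupes.add(v)
        seen.add(v)
    return sorted(dupes)

orders = [
    {"order_id": 1, "amount": 120},
    {"order_id": 2, "amount": None},  # fails the not-null check
    {"order_id": 2, "amount": 75},    # duplicate key
]

null_rows = check_not_null(orders, "amount")   # [1]
dupe_keys = check_unique(orders, "order_id")   # [2]
```

Tools like dbt express the same idea declaratively (`not_null` and `unique` tests on a model's columns).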
Requirements:
5+ years of experience in BI data engineering, data engineering, or a similar data development role.
Bachelor's degree in Industrial Engineering, Statistics, Mathematics, Economics, Computer Science, or a related field - required.
Strong SQL expertise and extensive hands-on experience with ETL/ELT development - required.
Proficiency with Snowflake, dbt, Python, and modern data engineering workflows - essential.
Experience building and maintaining production-grade data pipelines using orchestration tools (e.g., Rivery, Airflow, Prefect) an advantage.
Experience with cloud platforms, CI/CD, or DevOps practices for data an advantage.
Skills and Attributes:
Strong understanding of business processes and the ability to design data solutions that accurately represent real-world workflows.
Strong analytical and problem-solving skills, with attention to engineering quality and performance.
Ability to manage and prioritize tasks in a fast-paced environment.
Excellent communication skills in Hebrew and English.
Ownership mindset, curiosity, and a passion for building high-quality data systems.
This position is open to all candidates.
Job ID: 8441718
Location: Tel Aviv-Yafo
Job Type: Full Time
As a Senior Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects: ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.
You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.
Key Job Responsibilities and Duties:
Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
Dealing with massive textual sources to train GenAI foundation models.
Solving issues with data and data pipelines, prioritizing based on customer impact.
End-to-end ownership of data quality in our core datasets and data pipelines.
Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
Providing tools that improve Data Quality company-wide, specifically for ML scientists.
Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
Acting as an intermediary for problems, with both technical and non-technical audiences.
Promoting and driving impactful and innovative engineering solutions.
Advancing technical, behavioral, and interpersonal competence via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation.
Collaborate with multidisciplinary teams: Collaborate with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions. Provide technical guidance and mentorship to junior team members.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 6 years of experience as a Data Engineer or a similar role, with a consistent record of successfully delivering ML/Data solutions.
You have built production data pipelines in the cloud, setting up data-lake and serverless solutions; you have hands-on experience with schema design and data modeling, and with working alongside ML scientists and ML engineers to provide production-level ML solutions.
You have experience designing systems end-to-end and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.).
Strong programming skills in languages such as Python and Java.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.
Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems.
Experience with Data Warehousing and ETL/ELT pipelines.
Experience in data processing for large-scale language models like GPT, BERT, or similar architectures - an advantage.
Proficiency in data manipulation, analysis, and visualization using tools like NumPy, pandas, and matplotlib - an advantage.
Experience with experimental design, A/B testing, and evaluation metrics for ML models - an advantage.
Experience of working on products that impact a large customer base - an advantage.
Excellent communication in English; written and spoken.
This position is open to all candidates.
Job ID: 8430196
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Senior Data Engineer to join our Data team - someone who's passionate about building reliable, scalable data infrastructure and thrives on solving complex technical challenges.
In this role, you'll own the design and development of end-to-end data pipelines that power analytics and data-driven decision-making.
You'll collaborate closely with both business and technical stakeholders to ensure data flows smoothly, accurately, and efficiently across the company.
What You Will Do:
Design, implement, and maintain large-scale ETL and ELT pipelines using modern data frameworks and cloud technologies.
Work with Redshift data warehouses to design efficient schemas and optimize performance.
Build and manage data ingestion processes from multiple sources - APIs, SaaS platforms, internal systems, and databases.
Collaborate with stakeholders to deliver clean, well-modeled, and high-quality data.
Build and evolve a modern, efficient, and scalable data warehouse architecture.
Ensure observability, monitoring, and testing across all data processes.
Apply best practices in CI/CD, version control (Git), and data quality validation.
Requirements:
5+ years of experience as a Data Engineer or ETL Developer, building large-scale data pipelines in a cloud environment (AWS, GCP, or Azure).
Strong SQL expertise, including query optimization and data modeling.
Hands-on experience with ETL/ELT tools such as Matillion, Rivery, SSIS, Talend, or similar.
Solid understanding of data warehouse concepts and dimensional modeling.
Excellent analytical and problem-solving skills.
A collaborative mindset and the ability to work cross-functionally with internal teams.
A self-starter and agile learner who thrives in a fast-paced, dynamic environment.
AI/data-related development capabilities: experience building or integrating AI-driven data solutions is a plus.
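The dimensional modeling called for in the requirements above centers on fact tables joined to dimension tables on surrogate keys. A toy Python illustration - the tables and their contents are invented:

```python
# Star-schema sketch: a fact table joined to a dimension on a surrogate key,
# then aggregated by a dimension attribute. All data is invented.

dim_customer = {
    1: {"name": "Acme",   "segment": "Enterprise"},
    2: {"name": "Globex", "segment": "SMB"},
}

fact_sales = [
    {"customer_key": 1, "amount": 1000},
    {"customer_key": 2, "amount": 250},
    {"customer_key": 1, "amount": 500},
]

# Revenue per segment: the classic fact-to-dimension rollup.
revenue_by_segment = {}
for row in fact_sales:
    segment = dim_customer[row["customer_key"]]["segment"]
    revenue_by_segment[segment] = revenue_by_segment.get(segment, 0) + row["amount"]

print(revenue_by_segment)  # {'Enterprise': 1500, 'SMB': 250}
```

In a warehouse such as Redshift this is simply a `JOIN` plus `GROUP BY`; keeping facts narrow and dimensions descriptive is what makes such queries cheap at scale.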
Nice to Have:
Experience with Redshift and Matillion - big advantage.
Experience with BI tools such as Qlik or Power BI - big advantage.
Familiarity with CI/CD pipelines.
This position is open to all candidates.
Job ID: 8435478
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Data & Analytics Engineer to join our Customer Performance team and help shape the foundation of our data platform.
You will own the data layer, evolve our BI semantic layer on a modern BI platform, and enable teams across the company to access and trust their data. Working closely with analysts and stakeholders in Operations, Product, R&D, and Customer Success, you'll turn complex data into insights that drive performance and customer value.
A day in the life
Build and expand our analytics data stack
Design and maintain curated data models that power self-service analytics and dashboards
Develop and own the BI semantic layer on a modern BI platform
Collaborate with analysts to define core metrics, KPIs, and shared business logic
Partner with Operations, R&D, Product, and Customer Success teams to translate business questions into scalable data solutions
Ensure data quality, observability, and documentation across datasets and pipelines
Support complex investigations and ad-hoc analyses that drive customer and operational excellence.
Requirements:
4+ years of experience as a Data Engineer, Analytics Engineer, or similar hands-on data role
Strong command of SQL and proficiency in Python for data modeling and transformation
Experience with modern data tools such as dbt, BigQuery, and Airflow (or similar)
Proven ability to design clean, scalable, and analytics-ready data models
Familiarity with BI modeling and metric standardization concepts
Experience partnering with analysts and stakeholders to deliver practical data solutions
A pragmatic, problem-solving mindset and ownership of data quality and reliability
Excellent communication skills and ability to connect technical work with business impact
Nice to have
Experience implementing or managing BI tools such as Tableau, Looker, or Hex
Understanding of retail, computer vision, or hardware data environments
Exposure to time-series data, anomaly detection, or performance monitoring
Interest in shaping a growing data organization and influencing its direction.
This position is open to all candidates.
Job ID: 8444133
Location: Tel Aviv-Yafo
Job Type: Full Time
As a Data Engineer, you'll collaborate with top-notch engineers and data scientists to elevate our platform to the next level and deliver exceptional user experiences. Your primary focus will be on the data engineering aspects: ensuring the seamless flow of high-quality, relevant data to train and optimize content models, including GenAI foundation models, supervised fine-tuning, and more.
You'll work closely with teams across the company to ensure the availability of high-quality data from ML platforms, powering decisions across all departments. With access to petabytes of data through MySQL, Snowflake, Cassandra, S3, and other platforms, your challenge will be to ensure that this data is applied even more effectively to support business decisions, train and monitor ML models, and improve our products.
Key Job Responsibilities and Duties:
Rapidly developing next-generation scalable, flexible, and high-performance data pipelines.
Dealing with massive textual sources to train GenAI foundation models.
Solving issues with data and data pipelines, prioritizing based on customer impact.
End-to-end ownership of data quality in our core datasets and data pipelines.
Experimenting with new tools and technologies to meet business requirements regarding performance, scaling, and data quality.
Providing tools that improve Data Quality company-wide, specifically for ML scientists.
Providing self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
Acting as an intermediary for problems, with both technical and non-technical audiences.
Promoting and driving impactful and innovative engineering solutions.
Advancing technical, behavioral, and interpersonal competence via on-the-job opportunities, experimental projects, hackathons, conferences, and active community participation.
Collaborate with multidisciplinary teams: Collaborate with product managers, data scientists, and analysts to understand business requirements and translate them into machine learning solutions. Provide technical guidance and mentorship to junior team members.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
Minimum of 3 years of experience as a Data Engineer or a similar role, with a consistent record of successfully delivering ML/Data solutions.
You have built production data pipelines in the cloud, setting up data-lake and serverless solutions; you have hands-on experience with schema design and data modeling, and with working alongside ML scientists and ML engineers to provide production-level ML solutions.
You have experience designing systems end-to-end and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.).
Strong programming skills in languages such as Python and Java.
Experience with big data processing frameworks such as PySpark, Apache Flink, Snowflake, or similar.
Demonstrable experience with MySQL, Cassandra, DynamoDB or similar relational/NoSQL database systems.
Experience with Data Warehousing and ETL/ELT pipelines.
Experience in data processing for large-scale language models like GPT, BERT, or similar architectures - an advantage.
Proficiency in data manipulation, analysis, and visualization using tools like NumPy, pandas, and matplotlib - an advantage.
Experience with experimental design, A/B testing, and evaluation metrics for ML models - an advantage.
Experience of working on products that impact a large customer base - an advantage.
Excellent communication in English; written and spoken.
This position is open to all candidates.
Job ID: 8430193
Posted 7 hours ago
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
As a Senior BI Data Engineer, you'll join a company where culture isn't a slogan - it's our DNA.

You'll be part of a data-driven organization where every voice matters, working at the heart of our BI team to raise the bar for analytical excellence.

In this role, you'll take end-to-end ownership of complex, high-impact BI initiatives, shaping how the company measures success, makes decisions, and scales. Your work will directly impact over 1M users worldwide, empowering B2B sales teams to unlock new revenue opportunities and drive sustainable growth.

What You'll Actually Do:

Lead the design and evolution of BI data foundations, owning key product and GTM metric definitions and data models
Build, scale, and maintain high-quality ELT pipelines and curated datasets (raw → modeled → semantic) that power dashboards and self-serve analytics
Collaborate closely with Data Scientists to operationalize machine learning use cases, including feature pipelines and analytical datasets
Take senior ownership within the BI team by mentoring peers, reviewing critical work, and promoting best practices in analytics engineering and semantic modeling
Own the performance, cost, and reliability of the data warehouse and BI layer by optimizing models, queries, and incremental processing patterns
Translate ambiguous business questions into clear, governed metrics and scalable data models, partnering with Product, RevOps/GTM, Finance, and Analytics
Establish and maintain strong data quality, observability, and monitoring practices (tests, SLAs, anomaly detection)
Drive standardization and automation across the BI development lifecycle, including version control, CI/CD, documentation, and release processes
Stay ahead of modern BI and analytics engineering trends, including AI-assisted development, and apply them pragmatically to increase trust and speed
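The incremental processing patterns mentioned above usually rely on a watermark: each run picks up only rows newer than the last processed timestamp, then advances the watermark. A minimal sketch with hypothetical source rows:

```python
# Watermark-based incremental load: process only rows newer than the last
# processed timestamp. Source rows and dates are invented for illustration.

source = [
    {"id": 1, "updated_at": "2025-01-01"},
    {"id": 2, "updated_at": "2025-01-03"},
    {"id": 3, "updated_at": "2025-01-05"},
]

def incremental_load(rows, watermark):
    """Return rows strictly newer than `watermark`, plus the advanced watermark."""
    new_rows = [r for r in rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

batch, wm = incremental_load(source, "2025-01-02")
# batch contains ids 2 and 3; the watermark advances to "2025-01-05"
```

This is the same pattern dbt incremental models or warehouse merge jobs implement: the watermark keeps reprocessing (and cost) proportional to new data, not total data.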
Requirements:
5+ years of experience in Data Engineering / BI roles, with proven ownership of scalable, end-to-end data ecosystems
Expert-level experience with modern data stacks, including Snowflake or Databricks and dbt as a core transformation layer
Advanced SQL and Python skills, with hands-on responsibility for CI/CD pipelines, data quality frameworks, and observability
Deep understanding of dimensional modeling, data warehousing patterns, and semantic layer design
Hands-on experience with orchestration tools such as Airflow, managing complex and high-availability ELT workflows
Experience working with large-scale data processing, including exposure to Kafka, Spark, or streaming architectures
Strong ability to independently lead cross-functional initiatives and translate business requirements into executable technical solutions
Strong business and analytical mindset, with an AI-forward approach and curiosity for emerging tools and methodologies
This position is open to all candidates.
Job ID: 8481843
Posted: 07/12/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a versatile, talented, and highly motivated Data Engineer to join our growing team.

If you're passionate about solving complex problems, thrive in dynamic environments, and love working at the intersection of data engineering, machine learning infrastructure, and AI innovation, this role is for you.

As a Data Engineer, you'll play a key role in shaping how data flows through the company, from building scalable pipelines and robust infrastructure to powering data science models and enabling internal teams with intelligent GenAI-powered tools. This is a hands-on, high-impact role with plenty of room for ownership, creativity, and growth.

This is a high-impact role where your work will shape how the company leverages data and AI. If you want to build, innovate, and push boundaries in a collaborative and fast-moving environment, we'd love to meet you.

Responsibilities
Own the entire data lifecycle from understanding business needs and building reliable pipelines to ensuring data quality, observability, and performance.
Design, build, and scale modern data infrastructure including data lakes, warehouses, and complex ETL/ELT pipelines.
Integrate and consolidate diverse data sources (CRMs, APIs, databases, SaaS platforms) into a single, trusted source of truth.
Implement and manage CI/CD, observability, and infrastructure-as-code in a cloud-native environment.
Work with the data science team on their ML pipelines, giving data scientists the infrastructure and automation they need to deploy models to production with speed and confidence.
Collaborate with cross-functional teams to embed GenAI agents into business processes, creating smart workflows that boost efficiency and reduce manual work.
Develop frameworks and internal tooling that empower other teams to safely adopt AI and accelerate innovation.
Optimize data infrastructure for performance and cost-efficiency, with a focus on BigQuery optimization.
Ensure high data quality and integrity across large-scale ETL processes.
Work closely with analysts, data scientists, and product managers to support data modeling, governance, and analytical initiatives.
Requirements:
5+ years of experience as a Data Engineer.
Strong programming skills in Python and SQL, with a focus on clean, maintainable, production-grade code.
Proven experience building data pipelines with Airflow.
Hands-on experience with modern analytical databases
Experience working with cloud platforms.
Solid knowledge of data modeling, database design, and performance optimization.
Strong problem-solving abilities, analytical mindset, and attention to detail.
Experience working in production-grade environments.
Excellent communication and collaboration skills.
Familiarity with modern CI/CD, observability, and infrastructure-as-code practices.
Experience with Kubernetes, Docker, and Terraform.
This position is open to all candidates.
Job ID: 8446375
Location: Tel Aviv-Yafo
Job Type: Full Time
As a Data Engineer at Meta, you will shape the future of people-facing and business-facing products we build across our entire family of applications (Facebook, Instagram, Messenger, WhatsApp, Reality Labs, Threads). Your technical skills and analytical mindset will be utilized in designing and building some of the world's most extensive data sets, helping to craft experiences for billions of people and hundreds of millions of businesses worldwide.
In this role, you will collaborate with software engineering, data science, and product management teams to design/build scalable data solutions across Meta to optimize growth, strategy, and user experience for our 3 billion plus users, as well as our internal employee community.
You will be at the forefront of identifying and solving some of the most interesting data challenges at a scale few companies can match. By joining Meta, you will become part of a world-class data engineering community dedicated to skill development and career growth in data engineering and beyond.
Data Engineering: You will guide teams by building optimal data artifacts (including datasets and visualizations) to address key questions. You will refine our systems, design logging solutions, and create scalable data models. Ensuring data security and quality, and with a focus on efficiency, you will suggest architecture and development approaches and data management standards to address complex analytical problems.
Product leadership: You will use data to shape product development, identify new opportunities, and tackle upcoming challenges. You'll ensure our products add value for users and businesses, by prioritizing projects, and driving innovative solutions to respond to challenges or opportunities.
Communication and influence: You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations. You will build credibility through structure and clarity, and be a trusted strategic partner.
Data Engineer, Product Analytics Responsibilities
Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems
Create and contribute to frameworks that improve the efficacy of logging data, while working with data infrastructure to triage issues and resolve
Collaborate with engineers, product managers, and data scientists to understand data needs, representing key data insights visually in a meaningful way
Define and manage Service Level Agreements for all data sets in allocated areas of ownership
Determine and implement the security model based on privacy requirements, confirm safeguards are followed, address data quality issues, and evolve governance processes within allocated areas of ownership
Design, build, and launch collections of sophisticated data models and visualizations that support multiple use cases across different products or domains
Solve our most challenging data integration problems, utilizing optimal Extract, Transform, Load (ETL) patterns, frameworks, query techniques, sourcing from structured and unstructured data sources
Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts
Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts.
Requirements:
Minimum Qualifications
Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent
7+ years of experience where the primary responsibility involves working with data. This could include roles such as data analyst, data scientist, data engineer, or similar positions
7+ years of experience with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala or others.)
Preferred Qualifications
Master's or Ph.D. degree in a STEM field.
This position is open to all candidates.
Job ID: 8478326
Posted 9 hours ago
Confidential company
Job Type: Full Time
We use cutting-edge innovations in financial technology to deliver leading data and features that allow individuals to be qualified instantly, making point-of-sale purchases fast, fair, and easy for consumers from all walks of life.
As part of our Data Engineering team, you will not only build scalable data platforms but also directly enable portfolio growth by supporting new funding capabilities, loan sales and securitization, and improving cost efficiency through automated and trusted data flows that evolve our accounting processes.
Responsibilities
Design and build data solutions that support our company's core business goals, from enabling capital market transactions (loan sales and securitization) to providing reliable insights for reducing the cost of capital.
Develop advanced data pipelines and analytics to support finance, accounting, and product growth initiatives.
Create ELT processes and SQL queries to bring data to the data warehouse and other data sources.
Develop data-driven finance products that accelerate funding capabilities and automate accounting reconciliations.
Own and evolve data lake pipelines, including maintenance, schema management, and ongoing improvements.
Create new features from scratch, enhance existing features, and optimize existing functionality.
Collaborate with stakeholders across Finance, Product, Backend Engineering, and Data Science to align technical work with business outcomes.
Implement new tools and modern development approaches that improve both scalability and business agility.
Ensure adherence to coding best practices and development of reusable code.
Constantly monitor the data platform and make recommendations to enhance architecture, performance, and cost efficiency.
Requirements:
4+ years of experience as a Data Engineer.
4+ years of Python and SQL experience.
4+ years of direct experience with SQL (Redshift/Snowflake), data modeling, data warehousing, and building ELT/ETL pipelines (DBT & Airflow preferred).
3+ years of experience in scalable data architecture, fault-tolerant ETL, and data quality monitoring in the cloud.
Hands-on experience with cloud environments (AWS preferred) and big data technologies (EMR, EC2, S3, Snowflake, Spark Streaming, Kafka, DBT).
Strong troubleshooting and debugging skills in large-scale systems.
Deep understanding of distributed data processing and tools such as Kafka, Spark, and Airflow.
Experience with design patterns, coding best practices, and data modeling.
Proficiency with Git and modern source control.
Basic Linux/Unix system administration skills.
Nice to Have
Familiarity with fintech business processes (funding, securitization, loan servicing, accounting) - a huge advantage.
BS/MS in Computer Science or a related field.
Experience with NoSQL or large-scale DBs.
DevOps experience in AWS.
Microservices experience.
2+ years of experience in Spark and the broader Data Engineering ecosystem.
This position is open to all candidates.
 
Job ID: 8481603
Location: Tel Aviv-Yafo
Job Type: Full Time
As a Data Engineer at Meta, you will shape the future of people-facing and business-facing products we build across our entire family of applications. Your technical skills and analytical mindset will be utilized designing and building some of the world's most extensive data sets, helping to craft experiences for billions of people and hundreds of millions of businesses worldwide.
In this role, you will collaborate with software engineering, data science, and product management teams to design/build scalable data solutions across Meta to optimize growth, strategy, and user experience for our 3 billion plus users, as well as our internal employee community.
You will be at the forefront of identifying and solving some of the most interesting data challenges at a scale few companies can match. By joining Meta, you will become part of a world-class data engineering community dedicated to skill development and career growth in data engineering and beyond.
Data Engineering: You will guide teams by building optimal data artifacts (including datasets and visualizations) to address key questions. You will refine our systems, design logging solutions, and create scalable data models. Ensuring data security and quality, and with a focus on efficiency, you will suggest architecture and development approaches and data management standards to address complex analytical problems.
Product leadership: You will use data to shape product development, identify new opportunities, and tackle upcoming challenges. You'll ensure our products add value for users and businesses by prioritizing projects and driving innovative solutions in response to challenges and opportunities.
Communication and influence: You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations. You will build credibility through structure and clarity, and be a trusted strategic partner.
Data Engineer, Product Analytics Responsibilities
Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems
Create and contribute to frameworks that improve the efficacy of logging data, while working with data infrastructure to triage and resolve issues
Collaborate with engineers, product managers, and data scientists to understand data needs, representing key data insights in a meaningful way
Define and manage Service Level Agreements for all data sets in allocated areas of ownership
Determine and implement the security model based on privacy requirements, confirm safeguards are followed, address data quality issues, and evolve governance processes within allocated areas of ownership
Design, build, and launch collections of sophisticated data models and visualizations that support multiple use cases across different products or domains
Solve our most challenging data integration problems, applying optimal Extract, Transform, Load (ETL) patterns, frameworks, and query techniques to source from structured and unstructured data
Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts
Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts.
Requirements:
Minimum Qualifications
Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent
4+ years of experience where the primary responsibility involves working with data. This could include roles such as data analyst, data scientist, data engineer, or similar positions
4+ years of experience (or 2+ years with a Ph.D.) with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, or Scala)
Preferred Qualifications
Master's or Ph.D. degree in a STEM field.
This position is open to all candidates.
 
Job ID: 8478330