Data Engineer For Data Group

Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Data Engineer to design and implement high-scale, data-intensive platforms, research and develop algorithmic solutions, and collaborate on key company initiatives. You will play a critical role within core data teams, which are responsible for managing and optimizing fundamental data assets.
What will you be responsible for?
Solve Complex Business Problems with Scalable Data Solutions
Develop and implement robust, high-scale data pipelines to power core assets.
Leverage cutting-edge technologies to tackle complex data challenges and enhance business operations.
Collaborate with Business Stakeholders to Drive Impact
Work closely with Product, Data Science, and Analytics teams to define priorities and develop solutions that directly enhance core products and user experience.
Build and Maintain a Scalable Data Infrastructure
Design and implement scalable, high-performance data infrastructure to support machine learning, analytics, and real-time data processing.
Continuously monitor and optimize data pipelines to ensure reliability, accuracy, and efficiency.
Requirements:
3+ years of hands-on experience designing and implementing large-scale, server-side data solutions
4+ years of programming experience, preferably in Python and SQL, with a strong understanding of data structures and algorithms
Proven experience in building algorithmic solutions, data mining, and applying analytical methodologies to optimize data processing and insights
Proficiency with orchestration tools such as Airflow, Kubernetes, and Docker Swarm, ensuring seamless workflow automation
Experience working with Data Lakes and Apache Spark for processing large-scale datasets (strong advantage)
Familiarity with AWS services (S3, Glue, EMR, Redshift) (nice to have)
Knowledge of tools such as Kafka, Databricks, and Jenkins (a plus)
Strong command of a variety of storage engines, including Relational (PostgreSQL, MySQL), Document-based (MongoDB), Time-series / Search (ClickHouse, Elasticsearch), Key-value (Redis)
Comfortable working with AI tools and staying ahead of emerging technologies and trends
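Several of the requirements above revolve around workflow orchestration (Airflow, Kubernetes, and similar tools), which at its core means running pipeline tasks in dependency order. A minimal, purely illustrative sketch in plain Python; the task names are invented for the example:

```python
from graphlib import TopologicalSorter

def run_pipeline(tasks, deps):
    """Run callables in dependency order and return the execution order.

    tasks: mapping of task name -> zero-arg callable
    deps:  mapping of task name -> set of upstream task names
    """
    order = []
    for name in TopologicalSorter(deps).static_order():
        tasks[name]()  # in Airflow this would be an operator's execute()
        order.append(name)
    return order

# Hypothetical three-step ETL: extract -> transform -> load
log = []
tasks = {
    "extract":   lambda: log.append("extract"),
    "transform": lambda: log.append("transform"),
    "load":      lambda: log.append("load"),
}
deps = {"transform": {"extract"}, "load": {"transform"}}
print(run_pipeline(tasks, deps))  # → ['extract', 'transform', 'load']
```

Orchestrators add scheduling, retries, backfills, and distributed execution on top of this core dependency-ordering idea.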
This position is open to all candidates.
 
Position ID: 8280797
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Data Engineer for the Insight Team to join Lusha's Data Group and a new team responsible for developing innovative features based on multiple layers of data. These features will power recommendation systems, insights, and more. This role involves close collaboration with the core teams within the Data Group, working on diverse data pipelines that tackle challenges related to scale and algorithmic optimization, all aimed at enhancing the data experience for Lusha's customers.
What will you be responsible for?
Develop and implement robust, scalable data pipelines and integration solutions within Lusha's Databricks-based environment.
Develop models and implement algorithms, with a strong emphasis on delivering high-quality results.
Leverage technologies like Spark, Kafka, and Airflow to tackle complex data challenges and enhance business operations.
Design innovative data solutions that support millions of data points, ensuring high performance and reliability.
Requirements:
3+ years of hands-on experience in data engineering, including building and optimizing scalable data pipelines
5+ years of experience as a software developer, preferably in Python
Strong algorithmic background, including development and optimization of machine learning models and implementation of advanced data algorithms
Experience working with cloud ecosystems, preferably AWS (S3, Glue, EMR, Redshift, Athena) or equivalent platforms (Azure, GCP)
Expertise in extracting, ingesting, and transforming large-scale datasets in an efficient and reliable manner
Deep knowledge of big data platforms such as Apache Spark, Databricks, Elasticsearch, and Kafka, particularly for real-time data streaming and processing
(Nice-to-have) Hands-on experience working with Vector Databases and embedding techniques, with a focus on search, recommendations, and personalization.
AI-savvy: comfortable working with AI tools and staying ahead of emerging trends.
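The vector-database nice-to-have above rests on embedding similarity search: rank stored vectors by cosine similarity to a query vector. A toy sketch over hand-made 2-d vectors (real systems use learned embeddings with hundreds of dimensions plus an approximate-nearest-neighbor index); all names here are invented:

```python
import math

def top_k(query_vec, docs, k=2):
    """Return the k document ids whose vectors are most cosine-similar to the query."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)
    scored = sorted(docs.items(), key=lambda kv: cos(query_vec, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy 2-d "embeddings": a and b point in nearly the same direction as the query
docs = {"a": [1.0, 0.0], "b": [0.9, 0.1], "c": [0.0, 1.0]}
print(top_k([1.0, 0.05], docs, k=2))  # → ['a', 'b']
```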
This position is open to all candidates.
 
Position ID: 8280793
Confidential company
Location: Tel Aviv-Yafo
Job Type: More than one
We're seeking an outstanding and passionate Data Platform Engineer to join our growing R&D team.
You will work in an energetic startup environment following Agile concepts and methodologies. Joining the company at this unique and exciting stage in our growth journey creates an exceptional opportunity to take part in shaping Finaloop's data infrastructure at the forefront of Fintech and AI.
What you'll do:
Design, build, and maintain scalable data pipelines and ETL processes for our financial data platform.
Develop and optimize data infrastructure to support real-time analytics and reporting.
Implement data governance, security, and privacy controls to ensure data quality and compliance.
Create and maintain documentation for data platforms and processes
Collaborate with data scientists and analysts to deliver actionable insights to our customers.
Troubleshoot and resolve data infrastructure issues efficiently
Monitor system performance and implement optimizations
Stay current with emerging technologies and implement innovative solutions
Tech stack: AWS Serverless, Python, Airflow, Airbyte, Temporal, PostgreSQL, Snowflake, Kubernetes, Terraform, Docker.
Requirements:
3+ years experience in data engineering or platform engineering roles
Strong programming skills in Python and SQL
Experience with orchestration platforms like Airflow/Dagster/Temporal
Experience with MPPs like Snowflake/Redshift/Databricks
Hands-on experience with cloud platforms (AWS) and their data services
Understanding of data modeling, data warehousing, and data lake concepts
Ability to optimize data infrastructure for performance and reliability
Experience working with containerization (Docker) in Kubernetes environments.
Familiarity with CI/CD concepts
Fluent in English, both written and verbal
And it would be great if you have (optional):
Experience with big data processing frameworks (Apache Spark, Hadoop)
Experience with stream processing technologies (Flink, Kafka, Kinesis)
Knowledge of infrastructure as code (Terraform)
Experience building analytics platforms
Experience building clickstream pipelines
Familiarity with machine learning workflows and MLOps
Experience working in a startup environment or fintech industry
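A recurring concern behind "optimize data infrastructure for performance and reliability" is making pipeline runs incremental and safe to re-run. One common pattern is a watermark: remember the newest timestamp processed and skip anything at or below it. A simplified sketch, with names invented for illustration:

```python
def incremental_load(records, state):
    """Process only records newer than the stored watermark, so re-runs are safe.

    records: iterable of (timestamp, payload) tuples
    state:   dict holding the last processed timestamp under "watermark"
    """
    wm = state.get("watermark", 0)
    fresh = [r for r in records if r[0] > wm]
    if fresh:
        state["watermark"] = max(t for t, _ in fresh)
    return [payload for _, payload in fresh]

state = {}
batch = [(1, "a"), (2, "b"), (3, "c")]
print(incremental_load(batch, state))  # → ['a', 'b', 'c']
print(incremental_load(batch, state))  # re-run: → [] (nothing new)
```

In production the watermark lives in durable storage (e.g. a metadata table) rather than an in-memory dict.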
This position is open to all candidates.
 
Position ID: 8232260
01/07/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are making the future of Mobility come to life starting today.
At our company, we help the world's largest vehicle fleet operators and transportation providers optimize existing operations and seamlessly launch new, dynamic business models - driving efficient operations and maximizing utilization.
At the heart of our platform lies the data infrastructure, driving advanced machine learning models and optimization algorithms. As the owner of data pipelines, you'll tackle diverse challenges spanning optimization, prediction, modeling, inference, transportation, and mapping.
As a Senior Data Engineer, you will play a key role in owning and scaling the backend data infrastructure that powers our platform, supporting real-time optimization, advanced analytics, and machine learning applications.
What You'll Do
Design, implement, and maintain robust, scalable data pipelines for batch and real-time processing using Spark and other modern tools.
Own the backend data infrastructure, including ingestion, transformation, validation, and orchestration of large-scale datasets.
Leverage Google Cloud Platform (GCP) services to architect and operate scalable, secure, and cost-effective data solutions across the pipeline lifecycle.
Develop and optimize ETL/ELT workflows across multiple environments to support internal applications, analytics, and machine learning workflows.
Build and maintain data marts and data models with a focus on performance, data quality, and long-term maintainability.
Collaborate with cross-functional teams including development teams, product managers, and external stakeholders to understand and translate data requirements into scalable solutions.
Help drive architectural decisions around distributed data processing, pipeline reliability, and scalability.
Requirements:
4+ years in backend data engineering or infrastructure-focused software development.
Proficient in Python, with experience building production-grade data services.
Solid understanding of SQL
Proven track record designing and operating scalable, low-latency data pipelines (batch and streaming).
Experience building and maintaining data platforms, including lakes, pipelines, and developer tooling.
Familiar with orchestration tools like Airflow, and modern CI/CD practices.
Comfortable working in cloud-native environments (AWS, GCP), including containerization (e.g., Docker, Kubernetes).
Bonus: Experience working with GCP
Bonus: Experience with data quality monitoring and alerting
Bonus: Strong hands-on experience with Spark for distributed data processing at scale.
Degree in Computer Science, Engineering, or related field.
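Batch and streaming pipelines alike often reduce to windowed aggregation. A bare-bones tumbling-window event count in plain Python shows the idea; stream processors such as Spark Structured Streaming implement the same concept with event time, watermarks, and late-data handling:

```python
from collections import defaultdict

def tumbling_counts(event_times, window_s=60):
    """Count events per fixed (tumbling) time window, keyed by window start time."""
    counts = defaultdict(int)
    for ts in event_times:
        window_start = (ts // window_s) * window_s  # floor to window boundary
        counts[window_start] += 1
    return dict(counts)

# Event timestamps in seconds; windows are [0, 60), [60, 120), [120, 180), ...
print(tumbling_counts([5, 12, 61, 65, 130], window_s=60))
# → {0: 2, 60: 2, 120: 1}
```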
This position is open to all candidates.
 
Position ID: 8238970
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Data Backend Engineer.
What will you be responsible for?
Design and build distributed data systems that are the backbone of our product innovation.
Architect and implement high-throughput data pipelines capable of handling billions of records with speed and reliability.
Develop custom algorithms for deduplication, data merging, and real-time data updates.
Optimize storage, indexing, and retrieval strategies to manage massive datasets efficiently.
Solve deep engineering challenges in distributed computing environments like Spark, EMR, and Databricks.
Build fault-tolerant, highly available data infrastructure with integrated monitoring and observability.
Partner closely with ML engineers, backend developers, and product managers to turn business needs into scalable, production-grade features.
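The deduplication and data-merging responsibilities above can be illustrated with a last-write-wins merge keyed on a stable identifier. This is a deliberately simplified sketch with invented field names; production systems add fuzzy matching, survivorship rules, and distributed execution:

```python
def dedupe_merge(records):
    """Collapse records sharing a key; for each field the newest value wins.

    Records are applied oldest-first, so later updates overwrite earlier ones
    while fields absent from newer records survive from older ones.
    """
    merged = {}
    for rec in sorted(records, key=lambda r: r["updated_at"]):
        merged.setdefault(rec["key"], {}).update(rec)
    return merged

recs = [
    {"key": "u1", "updated_at": 2, "email": "new@x.com"},
    {"key": "u1", "updated_at": 1, "email": "old@x.com", "name": "Ann"},
    {"key": "u2", "updated_at": 1, "name": "Bob"},
]
out = dedupe_merge(recs)
print(out["u1"]["email"], out["u1"]["name"])  # → new@x.com Ann
```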
Requirements:
4+ years of hands-on experience in backend or data engineering, with a proven track record of building production-grade systems
Expertise in Python (or Java/Scala) with a deep understanding of data structures, algorithms, and performance trade-offs
Demonstrated experience designing and optimizing large-scale distributed data pipelines using technologies like Apache Spark, EMR, Databricks, Airflow, or Kubernetes
Strong command of a variety of storage engines, including Relational (PostgreSQL, MySQL), Document-based (MongoDB), Time-series / Search (ClickHouse, Elasticsearch), Key-value (Redis)
Familiarity with workflow orchestration tools such as Airflow, Dagster, or Prefect
Hands-on experience with message brokers like Kafka or RabbitMQ, and building event-driven systems
Solid foundation in software engineering best practices, including CI/CD processes, automated testing, monitoring, and scalable system design
Experience in building and launching end-to-end data products that are core to business operations
Comfortable experimenting with AI tools and large language models (LLMs) for automation and data enrichment
This position is open to all candidates.
 
Position ID: 8280800
14/07/2025
Location: Tel Aviv-Yafo and Netanya
Job Type: Full Time
At our company, we're reinventing DevOps and MLOps to help the world's greatest companies innovate -- and we want you along for the ride. This is a special place with a unique combination of brilliance, spirit and just all-around great people. Here, if you're willing to do more, your career can take off. And since software plays a central role in everyone's lives, you'll be part of an important mission. Thousands of customers, including the majority of the Fortune 100, trust our company to manage, accelerate, and secure their software delivery from code to production - a concept we call liquid software. Wouldn't it be amazing if you could join us in our journey?
About the Team
We are seeking a highly skilled Senior Data Engineer to join our company's ML Data Group and help drive the development and optimization of our cutting-edge data infrastructure. As a key member of the company's ML Platform team, you will play an instrumental role in building and evolving our feature store data pipeline, enabling machine learning teams to efficiently access and work with high-quality, real-time data at scale.
In this dynamic, fast-paced environment, you will collaborate with other data professionals to create robust, scalable data solutions. You will be responsible for architecting, designing, and implementing data pipelines that ensure reliable data ingestion, transformation, and storage, ultimately supporting the production of high-performance ML models.
We are looking for data-driven problem-solvers who thrive in ambiguous, fast-moving environments and are passionate about building data systems that empower teams to innovate and scale. We value independent thinkers with a strong sense of ownership, who can take challenges from concept to production while continuously improving our data infrastructure.
As a Data Engineer in our company's ML Data Group, you will...
Design and implement large-scale batch & streaming data pipelines infrastructure
Build and optimize data workflows for maximum reliability and performance
Develop solutions for real-time data processing and analytics
Implement data consistency checks and quality assurance processes
Design and maintain state management systems for distributed data processing
Take a crucial role in building the group's engineering culture, tools, and methodologies
Define abstractions, methodologies, and coding standards for the entire Data Engineering pipeline.
Requirements:
5+ years of experience as a Software Engineer with focus on data engineering
Expert knowledge in building and maintaining data pipelines at scale
Strong experience with stream/batch processing frameworks (e.g. Apache Spark, Flink)
Profound understanding of message brokers (e.g. Kafka, RabbitMQ)
Experience with data warehousing and lake technologies
Strong Python programming skills and experience building data engineering tools
Experience with designing and maintaining Python SDKs
Proficiency in Java for data processing applications
Understanding of data modeling and optimization techniques
Bonus Points
Experience with ML model deployment and maintenance in production
Knowledge of data governance and compliance requirements
Experience with real-time analytics and processing
Understanding of distributed systems and cloud architectures
Experience with data visualization and lineage tools/frameworks and techniques.
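State management for distributed data processing, mentioned above, means maintaining per-key aggregates as a stream flows through the pipeline. A minimal illustration of the kind of state a streaming feature pipeline computes and must checkpoint and restore (class and key names are invented):

```python
class RunningMean:
    """Per-key running mean over a stream of (key, value) events.

    The internal dict is the "state" a stream processor would checkpoint
    so the aggregate survives restarts and rebalances.
    """
    def __init__(self):
        self.state = {}  # key -> (count, total)

    def update(self, key, value):
        n, total = self.state.get(key, (0, 0.0))
        self.state[key] = (n + 1, total + value)
        n, total = self.state[key]
        return total / n

fm = RunningMean()
for key, v in [("u1", 10.0), ("u1", 20.0), ("u2", 5.0)]:
    fm.update(key, v)
print(fm.update("u1", 30.0))  # → 20.0  (mean of 10, 20, 30)
```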
This position is open to all candidates.
 
Position ID: 8257535
29/06/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We seek a Director of Data to join us and lead our data group.
As our Director of Data, you will be a key member of our R&D leadership team. You will be responsible for developing and executing a data strategy that aligns with our business goals, overseeing data management, analytics, and validation, and ensuring data integrity at every stage of product development and production.
A day in the life and how you'll make an impact:
Define and execute a strategic data roadmap aligned with business objectives, fostering a data-driven culture and leading a high-performing team of data engineers, scientists, and analysts.
Establish robust data validation frameworks, ensuring product integrity and accuracy through all stages, from data acquisition to end-user delivery.
Build and optimize scalable data infrastructure and pipelines to support our data needs and ensure data security, compliance, and accessibility.
Collaborate with product and engineering teams to create and launch data-driven products, ensuring they are built on reliable data and designed to meet customer needs.
Guide the team in generating actionable insights to drive business decisions and product innovation in areas such as personalization, marketing, and customer success.
Implement data governance policies and maintain compliance with industry regulations and best practices.
Requirements:
10+ years of experience in data-related roles, with at least 5 years in a leadership position (ideally within a tech or AI-driven startup environment).
M.Sc. or PhD in Data Science/Computer Science/Engineering/Statistics, or a related field.
Extensive experience with cloud platforms (AWS, GCP, or Azure) and modern data warehouses (Snowflake, BigQuery, or Redshift).
Proficiency in data technologies, such as SQL, Python, R, Looker and big data tools (e.g., Hadoop, Spark).
Proven experience in leveraging data for product development, business intelligence, and operational optimization.
Strong track record of building and managing cross-functional data teams and influencing across all levels of an organization.
Excellent communication skills, with the ability to convey complex data insights in an accessible manner to non-technical stakeholders.
This position is open to all candidates.
 
Position ID: 8234801
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Data Engineering Tech Lead.
What will you be responsible for?
Lead the design and development of scalable, high-performance data workflows, including both batch pipelines and real-time data products.
Define, implement, and enforce engineering best practices related to code quality, testing, CI/CD pipelines, observability, and documentation.
Mentor, support, and grow a team of data engineers, fostering a collaborative and high-performance engineering culture.
Identify opportunities to create new data assets and features that expand product capabilities and value proposition.
Drive architectural decision-making in areas of data modeling, storage solutions, and compute resources within cloud environments such as Databricks and Snowflake.
Collaborate closely with cross-functional stakeholders, including Product, DevOps, and R&D, to ensure effective delivery and platform stability.
Promote and champion a data-driven mindset across the organization, balancing technical rigor with business context and strategic goals.
Requirements:
Minimum 5 years of hands-on experience designing, building, and maintaining large-scale data pipelines for both batch processing and streaming use cases.
Deep expertise in Python and SQL, with a focus on writing clean, performant, and maintainable code.
Strong analytical and problem-solving skills, with the ability to break down complex technical challenges and align solutions to business objectives.
Solid background in data modeling, analytics, and designing architectures for scalability, performance, and cost efficiency.
Practical experience working with modern OLAP systems and cloud data platforms, including Databricks, Snowflake, or BigQuery.
Familiarity with AI agent protocols (such as A2A, MCP) and LLM-related technologies (e.g., vector databases, embeddings) is a plus.
AI-savvy, with comfort adopting AI tools and staying current with emerging AI trends and technologies.
This position is open to all candidates.
 
Position ID: 8280795
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We're looking for a Data Analyst for the Data Group to work alongside top-tier data professionals, data scientists, and data engineering teams. You will be an integral part of the group's ecosystem, providing insights that impact hundreds of thousands of users by leveraging advanced data systems.
What will you be responsible for?
Deeply understand the Data Group's architecture, driving the development of advanced analyses and data models, while collaborating with cross-functional teams to design and implement enrichment logic within our data ecosystem.
Develop tools, build dashboards, and establish methodologies to analyze user behavior and enhance the data experience for users.
Leverage the Data Lake to its fullest potential, utilizing cutting-edge data exploration and visualization tools to extract insights and drive data-driven decision-making.
Requirements:
3+ years of experience as a Data Analyst, preferably in a data-driven tech company with strong product orientation
Proven experience collaborating closely with Data Engineering teams
Proficiency in either Python or SQL is required
Experience with Python libraries such as Pandas, NumPy, or PySpark is an advantage
Alternatively, a strong command of SQL, including analytical functions and performance optimization
Hands-on experience with big data technologies in cloud-based environments such as Databricks, Snowflake, or Redshift (preferred)
Strong analytical thinking with the ability to turn complex data into actionable insights that impact product and business decisions
(Nice to have) Experience developing or optimizing statistical or machine learning models as part of data pipelines
AI-savvy: comfortable using AI tools and staying current with emerging trends
This position is open to all candidates.
 
Position ID: 8280801
13/07/2025
Location: Tel Aviv-Yafo and Netanya
Job Type: Full Time
As a Big Data & GenAI Engineering Lead within our company's Data & AI Department, you will play a pivotal role in building the data and AI backbone that empowers product innovation and intelligent business decisions. You will lead the design and implementation of our company's next-generation lakehouse architecture, real-time data infrastructure, and GenAI-enriched solutions, helping drive automation, insights, and personalization at scale. In this role, you will architect and optimize our modern data platform while also integrating and operationalizing Generative AI models to support go-to-market use cases. This includes embedding LLMs and vector search into core data workflows, establishing secure and scalable RAG pipelines, and partnering cross-functionally to deliver impactful AI applications.
As a Big Data & GenAI Engineering Lead in our company you will...
Design, lead, and evolve our company's petabyte-scale Lakehouse and modern data platform to meet performance, scalability, privacy, and extensibility goals.
Architect and implement GenAI-powered data solutions, including retrieval-augmented generation (RAG), semantic search, and LLM orchestration frameworks tailored to business and developer use cases.
Partner with product, engineering, and business stakeholders to identify and develop AI-first use cases, such as intelligent assistants, code insights, anomaly detection, and generative reporting.
Integrate open-source and commercial LLMs securely into data products, using frameworks such as LangChain or similar to add AI capabilities.
Collaborate closely with engineering teams to drive instrumentation, telemetry capture, and high-quality data pipelines that feed both analytics and GenAI applications.
Provide technical leadership and mentorship to a cross-functional team of data and ML engineers, ensuring adherence to best practices in data and AI engineering.
Lead tool evaluation, architectural PoCs, and decisions on foundational AI/ML tooling (e.g., vector databases, feature stores, orchestration platforms).
Foster platform adoption through enablement resources, shared assets, and developer-facing APIs and SDKs for accessing GenAI capabilities.
Requirements:
8+ years of experience in data engineering, software engineering, or MLOps, with hands-on leadership in designing modern data platforms and distributed systems.
Proven experience implementing GenAI applications or infrastructure (e.g., building RAG pipelines, vector search, or custom LLM integrations).
Deep understanding of big data technologies (Kafka, Spark, Iceberg, Presto, Airflow) and cloud-native data stacks (e.g., AWS, GCP, or Azure).
Proficiency in Python and experience with GenAI frameworks like LangChain, LlamaIndex, or similar.
Familiarity with modern ML toolchains and model lifecycle management (e.g., MLflow, SageMaker, Vertex AI).
Experience deploying scalable and secure AI solutions with proper attention to privacy, hallucination risk, cost management, and model drift.
Ability to operate in ambiguity, lead complex projects across functions, and translate abstract goals into deliverable solutions.
Excellent communication and collaboration skills, with a passion for pushing boundaries in both data and AI domains.
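A retrieval-augmented generation (RAG) pipeline, as required above, retrieves relevant context and folds it into the LLM prompt. The sketch below substitutes simple word overlap for the embedding-based retrieval a real pipeline would use, and all document names and text are invented for the example:

```python
def retrieve(query, corpus, k=1):
    """Rank documents by word overlap with the query: a toy stand-in for
    the embedding-similarity retrieval step of a real RAG pipeline."""
    q = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q & set(kv[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

def build_prompt(query, corpus, k=1):
    """Assemble the context-augmented prompt the LLM would receive."""
    context = "\n".join(corpus[d] for d in retrieve(query, corpus, k))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = {
    "doc1": "Airflow schedules data pipelines",
    "doc2": "Kafka moves events between services",
}
print(retrieve("how does airflow schedule pipelines", corpus))  # → ['doc1']
```

A production pipeline replaces `retrieve` with a vector-database query and sends the assembled prompt to an LLM; the grounding-by-retrieved-context structure is the same.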
This position is open to all candidates.
 
Position ID: 8255562
20/07/2025
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
Required Senior Data Engineer
The opportunity
Join our dynamic Data & ML Engineering team in iAds and play a pivotal role in driving data solutions that empower data science, finance, analytics, and R&D teams. As an Experienced Data Engineer, you'll work with cutting-edge technologies to design scalable pipelines, ensure data quality, and process billions of data points into actionable insights.
Success Indicators
In the short term, success means delivering reliable, high-performance data pipelines and ensuring data quality across the product. Long-term, you'll be instrumental in optimizing workflows, enabling self-serve analytics platforms, and supporting strategic decisions through impactful data solutions.
Impact
Your work will directly fuel business decisions, improve data accessibility and reliability, and contribute to the team's ability to handle massive-scale data challenges. You'll help shape the future of data engineering within a global, fast-paced environment.
Benefits and Opportunities
You'll collaborate with talented, passionate teammates, work on exciting projects with cutting-edge technologies, and have opportunities for professional growth. Competitive compensation, comprehensive benefits, and an inclusive culture make this role a chance to thrive and make a global impact.
What you'll be doing
Designing and developing scalable data pipelines and ETL processes to process massive amounts of structured and unstructured data.
Collaborating with cross-functional teams (data science, finance, analytics, and R&D) to deliver actionable data solutions tailored to their needs.
Building and maintaining tools and frameworks to monitor and improve data quality across the product.
Providing tools and insights that empower product teams with real-time analytics and data-driven decision-making capabilities.
Optimizing data workflows and architectures for performance, scalability, and cost efficiency using cutting-edge technologies like Apache Spark and Flink.
Requirements:
4+ years of experience as a Data Engineer
Expertise in designing and developing scalable data pipelines, ETL processes, and data architectures.
Proficiency in Python and SQL, with hands-on experience in big data technologies like Apache Spark and Hadoop.
Advanced knowledge of cloud platforms (AWS, Azure, or GCP) and their associated data services.
Experience working with Imply and Apache Druid for real-time analytics and query optimization.
Strong analytical skills and ability to quickly learn and adapt to new technologies and tools.
You might also have
Hands-on experience with stream-processing frameworks like Apache Flink and Kafka for real-time data integration and analytics.
Knowledge of functional programming concepts, particularly using Scala.
Familiarity with data visualization tools like Tableau or Power BI for creating impactful dashboards.
Experience with machine learning frameworks or building ML pipelines and MLOps workflows.
Previous exposure to ad-tech data solutions or working within ad-serving ecosystems.
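Engines like Druid get much of their real-time query speed from rollup: pre-aggregating raw events by dimension at ingestion time so queries scan compact summaries instead of raw rows. A greatly simplified illustration in plain Python, with invented event fields:

```python
from collections import defaultdict

def rollup(events, dims):
    """Pre-aggregate raw events into counts per combination of dimension values,
    the core idea behind OLAP ingestion-time rollup (greatly simplified)."""
    table = defaultdict(int)
    for e in events:
        table[tuple(e[d] for d in dims)] += 1
    return dict(table)

# Hypothetical ad-serving events
events = [
    {"country": "IL", "ad": "a1"},
    {"country": "IL", "ad": "a1"},
    {"country": "US", "ad": "a2"},
]
print(rollup(events, ["country", "ad"]))
# → {('IL', 'a1'): 2, ('US', 'a2'): 1}
```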
This position is open to all candidates.
 
Position ID: 8266210