Data Engineer, Shops Ads - Buyer Personalization (TLV)

This position has been marked by the employer as no longer active.
Location: Tel Aviv-Yafo
Job Type: Full Time
Similar positions that may interest you
 
Collected from a website
09/04/2024
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Jobs without resume
Design, build, and launch data infrastructures that support multiple use cases across different products or domains.
Collaborate with engineers, product managers, data scientists, and other stakeholders to understand data needs.
Solve challenging data integration problems, utilizing optimal ETL patterns, frameworks, query techniques, and sourcing from structured and unstructured data sources.
Lead end-to-end data projects from infrastructure design to production monitoring and visualization.
Requirements:
A great team player with a can-do approach.
4+ years of experience in Data Engineering in Big Data pipelines, on-prem and hybrid cloud environments, SQL and NoSQL databases, and SaaS.
Proficiency with one or more programming languages - Python/Scala/Java/Go.
Proficiency with SQL and experience with ETL and data modeling.
Experience in an agile development methodology, CI/CD, Git/Bitbucket, and TeamCity.
Bachelor's degree in Computer Science or equivalent experience.
Fluent in English, both written and verbal.
This position is open to all candidates.
 
Job ID: 7686871
 
Collected from a website
07/04/2024
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a Senior Data Engineer.
As a Senior Data Engineer, you will play a pivotal role in our Data Engineering team, driving a multitude of responsibilities essential to fueling our groundbreaking R&D endeavors. You'll have the autonomy to explore and implement cutting-edge technologies and data solutions, collaborating with a diverse team of data scientists, developers, analysts, and stakeholders. Every day will present new learning opportunities and challenges as we strive together to reshape the future of work.
Responsibilities:
Design, develop, and maintain scalable data pipelines and infrastructure to support the collection, processing, and analysis of large volumes of data.
Collaborate with cross-functional teams, including data scientists, software engineers, and business stakeholders, to understand data requirements and develop solutions that meet business needs.
Implement best practices for data governance, security, and quality assurance to ensure the integrity and reliability of our data assets.
Optimize and tune existing data pipelines and processes for improved performance and efficiency.
Stay up to date with current best practices in the big data field.
Mentor and coach other members of the data engineering team, providing guidance and support to help them grow their skills and expertise.
Requirements:
5+ years of experience working as a data engineer or in a similar role, with a proven track record of designing and implementing data solutions - MUST.
Proficiency in programming languages such as Python, Java, or Scala, with experience building and maintaining data pipelines using frameworks like Apache Spark, Apache Kafka, or similar.
Strong SQL skills and experience working with relational and non-relational databases (e.g., MySQL, MongoDB).
Hands-on experience with cloud platforms such as AWS, GCP, or Azure, and familiarity with services like AWS Glue, Google BigQuery, or Azure Data Factory.
Hands-on experience with ETL/ELT processes, data ingestion, data transformation, data modeling, and monitoring.
Deep experience with one or more of these technologies: Airflow, dbt, Snowflake, Presto / Trino, Kafka, SQS, Sagemaker, Kubernetes, Argo, Terraform, Debezium.
Strong communication and collaboration skills, with the ability to effectively work with cross-functional teams and communicate technical concepts to non-technical stakeholders.
Quick learner, team player, independent, imaginative and motivated individual with a passion for data!
This position is open to all candidates.
 
Job ID: 7682320
 
Collected from a website
20/03/2024
Confidential company
Job Type: Full Time
We are looking for a Senior Data Engineer.
What You'll Do:
Create new data solutions, maintain existing ones, and be the focal point for all technical aspects of our data activity. You will develop advanced data and analytics to support our analysts and production with validated, reliable data. The ideal candidate is a hands-on professional with strong knowledge of data pipelines and an ability to translate business needs into flawless data flow.
Create ELT/Streaming processes and SQL queries to bring data to/from the data warehouse and other data sources (an illustrative sketch follows this list).
Own the data lake pipelines, maintenance, improvements, and schema.
Create new features from scratch, enhance existing features, and optimize existing functionality.
Collaborate with various stakeholders across the company, such as data developers, analysts, and data scientists, to deliver team tasks. Work closely with all business units and engineering teams to develop a long-term data platform architecture strategy.
Implement new tools and development approaches.
Ensure adherence to coding best practices and the development of reusable code.
Constantly monitor the data platform and make recommendations to enhance the system architecture, on both ETL/ELT and real-time pipelines.
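As a rough illustration of the ELT bullet above: a minimal, hypothetical sketch of a daily warehouse load scheduled with Airflow (which the requirements below mention). The table names, SQL, and the print placeholder standing in for a warehouse client call are assumptions, not this company's actual pipeline.

# Illustrative only: a daily ELT step scheduled with Airflow; table names,
# SQL, and the warehouse call are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

ELT_SQL = """
INSERT INTO analytics.daily_orders
SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
FROM raw.orders
WHERE order_date = '{{ ds }}'
GROUP BY order_date
"""

def run_elt_query(**context):
    # Render the templated SQL and submit it to the warehouse
    # (replace the print with a Snowflake/Redshift client call).
    sql = context["templates_dict"]["sql"]
    print(f"Submitting ELT statement:\n{sql}")

with DAG(
    dag_id="daily_orders_elt",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="load_daily_orders",
        python_callable=run_elt_query,
        templates_dict={"sql": ELT_SQL},
    )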

Requirements:
4+ years of experience as a Data Engineer
4+ years of direct experience with SQL (e.g., Redshift/Postgres/MySQL, Snowflake), data modeling, data warehousing, and building ELT/ETL pipelines - MUST
2+ years of Python
3+ years of experience in scalable data architecture, fault-tolerant ETL, and monitoring of data quality in the cloud
Experience working with cloud environments (AWS preferred) and big data technologies (EMR, EC2, S3, Snowflake, spark-streaming, DBT, Airflow)
Exceptional troubleshooting and problem-solving abilities, debugging, and root-causing defects in large-scale systems.
Deep understanding of distributed data processing architecture and tools such as Kafka, Spark, and Airflow
Experience with design patterns and coding best practices, understanding of data modeling concepts, techniques, and best practices
Proficiency with modern source control systems, especially Git
Basic Linux/Unix system administration skills
Nice to have:
BS or MS degree in Computer Science or a related technical field - An advantage
Experience with data warehouses
NoSQL and large-scale databases.
Understanding of fintech business processes
DataOps - AWS.
Microservices
Experience with DBT.
This position is open to all candidates.
 
Job ID: 7661086
 
Collected from a website
07/04/2024
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
On our journey, we have discovered that building AI that delivers value in the real world requires much more than just an algorithm - it requires mastering the data layers the algorithms are trained and evaluated on. Within our AI organization, the data development achievements of our group are at the heart of improvements in our AI accuracy and scalability. We are among the few companies paving the way for methodologies and architectural patterns (that don't exist in this field yet) as we go.

Our AI Data Development Group is the data analytics core of our AI algorithm development process, and has several main areas of responsibility:

Mining the optimal data for our datasets, based on a deep understanding of the nuances of medical data, medical workflows, and algorithms.
Evaluating our algorithms' performance, in order to maximize their success in real-world environments.
Facilitating data-driven product decisions through in-depth analysis.
Technical project management of the dataset development process.
Responsibilities
End-to-end ownership and technical project management of dataset development projects, for training and evaluation of AI algorithms, from ideation to delivery. This responsibility includes:
The facilitation of the data across various teams.
Detecting errors and anomalies in the data, defining the most efficient and accurate tables and views per project, and researching these issues to verify the quality signature of the dataset, which has a significant effect on the product outcome.
Developing the optimal data pipeline per project.
Facilitating data-driven product decisions, by deeply understanding the product questions and identifying whether there are cost-effective data analyses that could provide a valuable answer to this question, based on a deep understanding of the relevant clinical workflows and algorithms.
Mapping bottlenecks in existing processes, and leading development of tools, infrastructure, and mechanisms to enhance our efficiency and decision-making processes at all levels.
Continuously assessing the project's risks and successfully mitigating them.
Developing and executing control and review processes for improving the team's quality signature on deliverables.
Requirements:
B.Sc in Engineering or Exact Science (M.Sc. or Ph.D - A major advantage).
Industry experience in hands-on data analysis/science:
Extensive hands-on experience with Python (pandas, numpy, matplotlib) and SQL.
Experience in analytics engineering - Developing data streams using DBT, Airflow and similar tools.
Ability to summarize and visualize complex data and insights.
Experience with presenting data insights to management, for healthy data-driven decision-making processes.
Hands-on experience in developing data analysis tools and methodologies in an R&D/production environment.
An amazing data analyst or scientist, with a proven record of excellence.
Experience with the realm of machine learning algorithms and AI development - A major advantage.
Great communication skills, and ability to create and maintain positive relationships with a diverse set of stakeholders
Problem-solving approach - a team player with the ability to work with stakeholders, translating their needs into requirements, and implementing the right solution.
Experience in project management and/or proven leadership skills.
Strong passion for the medical field.
This position is open to all candidates.
 
Job ID: 7683139
 
Collected from a website
02/04/2024
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time and Hybrid work
We are seeking an experienced Data Engineer to join our dynamic team of analytics experts. In this role, you will play a pivotal part in both data ops and data engineering functions, contributing to the enhancement and maintenance of our data integration and pipeline processes, as well as ensuring the robustness and scalability of our data platform.
The ideal candidate is someone who thrives in a fast-paced environment and is adept at both data engineering practices and DevOps principles.
You should be passionate about optimizing data systems and possess a strong desire to continuously improve our company's data architecture to support evolving products and data initiatives.
In this role, you will be responsible for:
Develop and maintain ETL/ELT/Streaming processes and SQL queries for efficient data movement.
Design and implement scalable, automated processes for large-scale data analyses.
Engage with stakeholders to understand data requirements and build datasets accordingly.
Collaborate with teams to enhance data models and promote data-driven decision-making.
Contribute to the long-term strategy and architecture of the data platform.
Maintain data lake pipelines and ensure adherence to schema standards.
Apply data security standards and seek ways to optimize data flow.
Serve as the point of contact to the DevOps team and handle MLOps operations.
Requirements:
4+ years of experience as a Data Engineer or similar role.
3+ years of Python development experience.
Bachelor's or Master's degree in Computer Science or a related field.
Proficiency in SQL, data modeling, and building ELT/ETL pipelines.
Experience with AWS cloud environment and big data technologies.
Proficiency in Kubernetes and containerization technologies.
Familiarity with Kafka, Airflow, and DBT is desirable.
Experience with at least one big data environment such as Snowflake, Vertica, or Redshift.
Excellent analytical skills and experience with creating pipelines.
Familiarity with AWS Sagemaker or other ML platforms is advantageous.
This position is open to all candidates.
 
Job ID: 7676698
 
Collected from a website
14/04/2024
Location: Tel Aviv-Yafo
Job Type: Full Time
Required: Experienced Software Engineer, Demand
What are some of the things you do on a day-to-day basis?
Develop one of the largest real-time big data operations in the world, supporting over 40TB of new data every day
Have end to end ownership: Design, build, ship, measure and maintain our frontend and backend services
Collaborate with the brightest software engineering team members
Directly influence the way billions of people discover the internet
Develop at unimaginable scale, serving hundreds of requests per second
Work on innovative projects that are the next growth engines for us
Here are some of the things software developers in our group did in the last several months:
A unique data pipeline for processing and managing user data signals
Develop a complete user identities graph and find deterministic and probabilistic methods to enrich it
An algorithmic model to predict the performance potential of a specific audience
Infrastructure to create unique audiences for advertisers based on various signals (behavioral, contextual, etc)
An A/B tests system to explore variations and experiments over audiences
Understand and use big data algorithms to estimate key business metrics over billions of impressions (an illustrative sketch follows this list)
Build user interfaces using React that serve thousands of users
And many more
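As a purely hypothetical illustration of the "big data algorithms to estimate key business metrics" item above: approximate counting sketches such as HyperLogLog are one common family of such algorithms. A toy example using the datasketch library follows; the impression records and field names are made up.

# Toy example: approximate distinct-user counting over impression events
# with a HyperLogLog sketch; records and field names are hypothetical.
from datasketch import HyperLogLog

hll = HyperLogLog(p=14)  # 2**14 registers, roughly 0.8% relative error

impressions = [
    {"user_id": "u1", "campaign": "c9"},
    {"user_id": "u2", "campaign": "c9"},
    {"user_id": "u1", "campaign": "c7"},
]

for event in impressions:
    hll.update(event["user_id"].encode("utf-8"))

print(f"Approximate unique users: {hll.count():.0f}")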
Our Tech Stack:
Java, JS, TS, Python, React, Spark, Kafka, Hadoop, Cassandra, Vertica, ES, MySQL, Memcached, HDFS, BigQuery, Kusto, Docker, K8S, Linux, Prometheus, Grafana, Airflow.
Requirements:
3+ years programming experience in Java/C#/Python/C++/JS
Production systems understanding (system architecture of web products)
Fearlessness to dive into what you don't know
Passion for solving problems, and working very closely with the business
BSc in computer science or equivalent
Experience with SQL and NoSQL - an advantage
Willing to work intensively to gain fast results in an unknown field
Product driven
A pragmatic attitude toward decision-making (avoiding analysis paralysis)
Strong analytical skills
It would be great if you also have:
Experience developing large scale distributed systems
Experience with Kafka/Docker/K8s
Deep understanding of web systems (API / REST / NGINX)
Experience with SQL and NoSQL (MySQL / Vertica / Cassandra)
Experience with Big Data tools (BQ / Zeppelin, etc.)
Knowledge of algorithms, data mining and machine learning
React/Web/JS experience.
This position is open to all candidates.
 
Job ID: 7692377
 
Collected from a website
02/04/2024
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are seeking an experienced DataOps Engineer (the first position in our data organization) to join our dynamic team of analytics experts. In this role, you will play a pivotal part in both data ops and data engineering functions, contributing to the enhancement and maintenance of our data integration and pipeline processes, as well as ensuring the robustness and scalability of our data platform.
The ideal candidate is someone who thrives in a fast-paced environment and is adept at both data engineering practices and DevOps principles. You should be passionate about optimizing data systems and possess a strong desire to continuously improve our company's data architecture to support evolving products and data initiatives.
In this role, you will be responsible for:
Develop and maintain ETL/ELT/Streaming processes and SQL queries for efficient data movement.
Design and implement scalable, automated processes for large-scale data analyses.
Engage with stakeholders to understand data requirements and build datasets accordingly.
Collaborate with teams to enhance data models and promote data-driven decision-making.
Contribute to the long-term strategy and architecture of the data platform.
Maintain data lake pipelines and ensure adherence to schema standards.
Apply data security standards and seek ways to optimize data flow.
Serve as the point of contact to the DevOps team and handle MLOps operations.
Requirements:
4+ years of experience as a Data Ops Engineer or similar role.
3+ years of Python development experience.
Bachelor's or Master's degree in Computer Science or a related field.
Proficiency in SQL, data modeling, and building ELT/ETL pipelines.
Experience with AWS cloud environment and big data technologies.
Proficiency in Kubernetes and containerization technologies.
Familiarity with Kafka, Airflow, and DBT is desirable.
Experience with at least one big data environment such as Snowflake, Vertica, or Redshift.
Excellent analytical skills and experience with creating pipelines.
Familiarity with AWS Sagemaker or other ML platforms is advantageous.
This position is open to all candidates.
 
Job ID: 7676693
 
Collected from a website
07/04/2024
Location: Tel Aviv-Yafo
Job Type: Full Time
As an AI Solution Engineer, you will play a critical role in ensuring our AI solutions' optimal performance, reliability, and continuous improvement. You will be responsible for monitoring the performance of AI models, detecting data drift, deploying AI modules, designing customized solutions, investigating issues, and developing tools to support data analysis. This role requires expertise in data science, system engineering, analytics, statistical analysis, product and clinical understanding, and proficiency in Python and SQL. The ideal candidate holds a relevant degree, with a master's degree being an advantage, and possesses hands-on experience managing AI solutions in an operational production environment.

Responsibilities
Deploying AI modules to production:

Collaborate with cross-functional teams to integrate AI solutions into existing workflows.
Deploy, configure, and implement hundreds of AI modules yearly using our orchestration mechanisms.
Design scalable deployment methodologies for new AI products
Address complex challenges associated with data in operational production environments, such as data quality, availability, and integration.
Collaborate with different stakeholders across the organization to make sure production requirements are addressed when new modules and products are developed.
Maintain our AI solutions at top performance:

Become a crucial part of our Ops team and work in a live operational environment, finding real-time solutions to complex problems.
Monitor the performance of AI solutions, ensuring they deliver accurate and reliable results using existing monitors.
Implement mechanisms to monitor and detect data drift to ensure AI models remain effective over time (a toy example follows this list).
Design new monitoring solutions and metrics to address production issues, data drifts, and performance degradation.
Investigate issues raised by our customers and identify root causes, recommend corrective actions, and implement solutions to prevent recurrence.
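A toy sketch of the data-drift item above, comparing one feature's recent production values against a reference sample with a two-sample Kolmogorov-Smirnov test; the synthetic data and the 0.01 threshold are illustrative assumptions, not this team's actual monitors.

# Toy data-drift check on one feature: compare recent production values
# against a reference (training-time) sample with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # training-time feature sample
production = rng.normal(loc=0.3, scale=1.1, size=5_000)  # recent production sample

statistic, p_value = ks_2samp(reference, production)
if p_value < 0.01:  # illustrative alert threshold
    print(f"Possible drift detected (KS statistic={statistic:.3f}, p={p_value:.2e})")
else:
    print("No significant drift detected")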
Tool Development:

Develop tools and scripts to support data analysts and ensure efficient data processing and analysis.
Present data in informative, to-the-point visualizations.
Automate tasks related to AI deployment, data validation, and reporting.
Requirements:
Bachelor's degree in a relevant field (Computer Science, Data Science, Engineering, and other related STEM fields). A Master's degree is an advantage.
At least 3 years of proven experience in data science and data analytics in a professional setting.
Experience as a system engineer or technical project manager is an advantage
2-3 years of experience in AI operations, deployment of AI modules, and productionization of AI is an advantage.
Strong proficiency in Python and SQL for data analysis, scripting, and automation.
Familiarity with statistical analysis, machine learning concepts, and AI model evaluation.
Exceptional problem-solving skills and the ability to troubleshoot complex technical issues.
Strong collaboration and communication skills to work effectively with cross-functional teams.
Analytical mindset and attention to detail to ensure AI solutions deliver reliable and accurate results.
This position is open to all candidates.
 
Job ID: 7683146
 
Collected from a website
04/04/2024
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a talented Data Practitioner to join our team and revolutionize the world of external data! In this role, you will be in charge of designing, implementing, testing, deploying, and maintaining our data products, which include data pipelines and ETL processes.
Additionally, you'll harness cutting-edge data science and engineering technologies, such as DBT, Databricks, and Large Language Models (LLMs), to create novel solutions that address complex business challenges.
This role demands strong technical skills alongside a deep understanding of various business contexts, making it ideal for someone passionate about employing the latest technologies to produce meaningful and impactful business outcomes.
Responsibilities:
Lead the development of data products, effectively translating business needs into actionable technical specifications.
Employ cutting-edge technologies including DBT, LLMs, and GenAI to push forward our data science and engineering practices.
Take charge of the entire data product lifecycle, from the creation to the maintenance of ETLs and real-time data pipelines.
Use data science tools to extract actionable insights and valuable business indicators from large datasets.
Ensure high reliability and performance of production systems through robust testing, monitoring, and alerting procedures.
Conduct data analysis across various sources, bringing valuable insights to products and catering to sectors like manufacturing, CPG, and SaaS.
Requirements:
At least 1 year of production-level experience in SQL and Python development - Must
A BSc or BA in Computer Science or a similar field, or alumni of an IDF technology unit - Must
Demonstrated proficiency in data analysis and system evaluation, with the ability to design experiments that inform product decisions - Must
Significant experience in data processing, especially in maintaining production ETLs - Big Advantage
Experience with building Machine Learning models or LLMs-related experience - Advantage
Experience with Databases, SQL and NoSQL, and Data modeling - Advantage
Familiarity with the latest data tools and platforms, such as Databricks and DBT - Advantage
This position is open to all candidates.
 
Job ID: 7680045
 
Collected from a website
24/03/2024
Confidential company
Location: Tel Aviv-Yafo
Job Type: Full Time
We are looking for a highly motivated, experienced Software Team Lead with strong backend roots to join our R&D team. You will lead and mentor a team of talented backend-oriented full stack engineers, and design and build a resilient system that manages our customers' funds, enabling top-notch consumer-facing applications for hundreds of thousands of users.

You will also work closely with the product and BI teams to constantly improve our service based on a data-driven approach along with direct feedback from our customers. The team is responsible for the core company features - building and maintaining highly available financial services, while taking into account high data integrity and security standards.

What you will get:

Scale and Challenge - Work on a highly available, complex and cloud-native application that is used by millions of customers globally, handling millions of transactions every month.
Getting into the Fintech industry - You will get the opportunity to gain deep knowledge in the Fintech industry including payments, banking systems, fraud, and much more.
Sharpen your management skills - Working with our strong leaders you will have the opportunity to sharpen your skills, such as developing, debugging, monitoring, planning, agile and cross-team collaboration.
Leading a team of A-players - The team you are going to lead consists of strong engineers, hence you will be able to leverage and improve your technical skills as well.
And most important: purpose - Get rewarded by developing a product for real people that brings them one step closer to financial inclusion.
Responsibilities:

Lead a team of skilled software engineers building complex flows and business logic in highly available financial services, while taking into account high data integrity and security standards.
Take responsibility for a widely used product, running initiatives from planning through execution to monitoring in production, and continuously improve the process in order to create value for our customers.
Take a hands-on approach, contributing to the code and the product.
Define and track the KPIs of the team.
Work closely with different stakeholders, such as product managers, BI, tech support, finance team, and operations.
Work closely with the product team in order to consistently improve our service based on a data-driven approach as well as direct feedback from our customers.
Drive and maintain high engineering standards, for execution, code quality, and customer satisfaction.
Job ID: R_100745
Requirements:
8+ years of hands-on experience in building production-grade products
3+ years of experience leading software development teams
Deep knowledge of several programming languages such as Python, Go, Java, Scala (preferably Javascript and Typescript)
Vast hands-on experience and acquired best practices with large-scale systems in production using backend frameworks (preferably Node.js), cloud services (preferably GCP services), and system architectures (both microservices and monolithic)
Hands-on experience and knowledge of frontend frameworks such as Angular, Vue (preferably React or React Native)
Great data processing skills - data warehouse, SQL databases, distributed databases, and performance tuning of data processes
B.Sc. in computer science/software engineering from a top university, or comparable experience
People person - being able to lead a team of engineers and mentor them to grow and fulfill their career goals
Proactive mindset, pushing for team and personal excellence
This position is open to all candidates.
 
Job ID: 7664923