Data Analytics Engineer
Where does this role fit in our vision?
Every role at our company is designed with a clear purpose: to integrate our collective efforts into our shared success, functioning as pieces of a collective brain. We are seeking a skilled Data Analytics Engineer to join our dynamic Analytics team. In this role, you will play a critical part in building, maintaining, and scaling our data infrastructure to support business intelligence, product analytics, and business operations. You will collaborate closely with analysts, data scientists, and cross-functional stakeholders to ensure clean, reliable, and efficient data pipelines. This is an opportunity to contribute to a data-driven culture and help drive actionable insights across the company.
What will you be responsible for?
Designing, building, and optimizing robust data pipelines to ingest, process, and store data from various sources
Maintaining and optimizing our data warehouse (e.g., Snowflake, BigQuery, Redshift) and building data models to ensure scalability, reliability, and performance
Developing and maintaining ETL/ELT workflows to enable data accessibility for reporting and analysis
Collaborating with analytics teams (BI, product, and business) and cross-functional teams (GTM, product) to understand data requirements and provide the necessary infrastructure and data models to support their objectives
Monitoring and troubleshooting data quality, pipeline failures, and performance issues, and implementing fixes and improvements as needed
Contributing to automating manual processes and improving data reliability and efficiency
Staying up to date with emerging trends, tools, and technologies in data engineering to drive innovation and continuous improvement
Requirements:
3+ years of experience in a data engineering role, with a proven track record of building and managing data pipelines and infrastructure
Good understanding of data warehousing concepts such as dimensional models, database design, and data modeling
Strong business understanding and ability to analyze data
Strong proficiency with SQL for data manipulation and querying
Hands-on experience with ETL/ELT tools (e.g., dbt, Apache Airflow)
Experience with cloud platforms such as AWS, GCP, or Azure, including data-related services (e.g., S3, Redshift, BigQuery, Data Lake, Snowflake)
Familiarity with programming languages such as Python
Knowledge of tools and frameworks for big data processing (e.g., Apache Spark, Kafka, Databricks) is a plus
Strong problem-solving skills, attention to detail, and ability to work independently and collaboratively in a fast-paced environment
Excellent communication and interpersonal skills
AI-savvy: comfortable working with AI tools and staying ahead of emerging trends
This position is open to all candidates.