One of the fastest-growing global D2C startups, we built our own e-commerce platform, which lets us leverage data and optimize marketing funnels to bring clinically tested body care products to women worldwide. After years of rapid growth and with millions of customers worldwide, we are on track to become one of the biggest body care brands in the world. The company has built a strong tech backbone to leverage its data, collected over tens of millions of dollars in online advertising spend. Data analysis and predictive modeling guide the company's decision making and play a central role in its success, creating a unique competitive edge over other e-commerce players.

We are a team of people who want to make lasting, impactful changes, who live (and love) to see results, who innovate with passion, and who love a good sense of humor. Is our vibe your vibe?

We are currently looking for a logical and independent Data Engineer to join our exceptional Data Engineering team. This cross-functional role works closely with analysts and the R&D team to support cross-departmental operations. The ideal candidate understands exactly how data structuring impacts macro-business decisions and can easily translate between technical and practical terms.

Responsibilities:
* Create and structure end-to-end data pipelines and ETLs, ensuring analysts have the exact data they need to make smart, data-driven business decisions.
* Build scalable multi-step processes that integrate over 50 data sources, including Marketing, Operations, CS, Product, and CRM.
* Integrate and leverage AI tools to build internal tools that drive critical business decisions.
* Proactively address data weaknesses while managing high-standard monitoring and reliability processes.
Requirements: You will thrive in this role if you have:
* B.A. or B.Sc. degree in a highly quantitative field.
* 4+ years of hands-on experience as a Data Engineer, or a solid Software Engineering background with a strong desire to pivot into data. (While we value BI experience, we're specifically looking for candidates with recent, hands-on data engineering experience.)
* Strong SQL skills for querying data warehouses and structuring data processes.
* Familiarity with PowerBI.
* Strong Python for data work (pandas, requests).
* Production experience with an orchestrator - Airflow strongly preferred.
* Comfort working with REST APIs (OAuth, pagination, rate limits, retries).
* Linux, Git, Docker.
* High attention to detail, a fast learner, and a proven ability to multitask.
* Fluent English.
Nice to have:
* Hands-on experience with BigQuery.
* Streaming / CDC experience (Kafka, Kafka Connect, Redpanda, Debezium, Avro / Schema Registry).
* Stream processing with RisingWave, Apache Flink, or equivalent.
* GCP (Cloud Storage, IAM, Compute Engine, Cloud Run).
* Experience with LLM / MCP-powered internal tooling.
* SQL Server or SSAS background.
* Experience analyzing data to find valuable insights, preferably within an online or D2C company.
This position is open to all candidates.