We are looking for an AI Research Engineer to join our Applied AI team, a highly skilled and collaborative group building end-to-end AI solutions powered by machine learning, LLMs, and cutting-edge architectures.
You'll work closely with product, engineering, and data teams to explore, prototype, and productionize advanced AI capabilities in our UGC product line. From semantic search and summarization to personalized recommendations and chat agents, your work will shape the future of commerce in the AI era.
Key Responsibilities:
Own the End-to-End AI Lifecycle: Design, train, evaluate, and deploy ML/LLM-based systems from proof-of-concept to robust production infrastructure.
Prototype to Production: Translate cutting-edge AI research into scalable and maintainable production-ready systems, balancing speed and quality.
Business-Driven Problem Solving: Work closely with stakeholders to understand business needs, formulate clear objectives, and solve real-world problems using data and AI.
Collaborative R&D: Partner with engineers, data scientists, product managers, and researchers to deliver cross-functional AI capabilities.
Promote Engineering Standards: Build reliable, monitored, and testable ML pipelines and APIs, using software engineering best practices and MLOps principles.
Mentorship & Technical Leadership: Share expertise, review designs, mentor teammates, and contribute to our growing knowledge base in AI and LLM systems.
AI Platform Evolution: Help shape how AI is built, adopted, and scaled across the organization, including shared infrastructure, tooling, and best practices.
Requirements:
MSc with 2+ years, or BSc with 4+ years, of experience in AI/ML engineering, applied data science, or related fields
Production-grade Python skills and advanced SQL capabilities
Proven experience designing, training, and deploying ML models for tasks such as summarization, semantic search, classification, personalization, and chat agents
Strong knowledge of ML and GenAI frameworks such as:
PyTorch, Hugging Face Transformers, LangChain, vector databases (e.g., FAISS, Pinecone)
Familiarity with LLMOps tooling:
AWS Bedrock, OpenAI APIs, Langfuse, LangSmith, MLflow, feature stores
Exposure to Big Data & Streaming:
Spark, Kafka (Flink is a plus)
Comfort with MLOps and cloud infrastructure:
AWS/GCP, Docker, Kubernetes, CI/CD, monitoring, observability
Understanding of architectural patterns for large-scale software systems, including modularity, fault-tolerance, scalability, and data flow across distributed environments
Excellent communication skills, both technical and non-technical, with a strong ability to explain complex topics clearly
Self-starter with a researcher mindset and a passion for exploring emerging technologies
Technical Skills:
AI Model Design & Evaluation: Experience training models using real-world data, choosing appropriate architectures, and defining metrics for evaluation.
LLM-Oriented Applications: Hands-on experience with prompt engineering, retrieval-augmented generation (RAG), embeddings, and fine-tuning LLMs.
System Design for AI: Understanding of scalable AI architecture patterns, monitoring, and lifecycle management in production environments.
MLOps & DevX: Build systems that are reproducible, observable, testable, and easy to evolve, including CI/CD, model versioning, and rollback strategies.
This position is open to all candidates.