As a Machine Learning Engineer, you'll work on cutting-edge code-focused LLMs and AI agent systems that power our company's next-generation developer platform. You'll be at the center of research, model training, and productionization of intelligent systems that understand software deeply, collaborate with developers, and help automate engineering workflows end-to-end. Your work will immediately impact millions of engineers worldwide.
Responsibilities:
Push LLM Innovation: Research, design, and fine-tune domain-specific LLMs for code generation, refactoring, debugging, and multi-turn reasoning.
Agent-Oriented Development: Build multi-agent coding systems that integrate retrieval-augmented generation (RAG), code execution, testing, and tool use to create autonomous, context-aware coding workflows.
Production-Grade AI: Own the training-to-inference pipeline for large code models; optimize inference with quantization, distillation, and caching techniques.
Rapid Experimentation: Prototype and validate ideas quickly; leverage reinforcement learning, human feedback, and synthetic data generation to push accuracy and reasoning.
Cross-Functional Collaboration: Partner with product, engineering, and design teams to ship AI-powered features that help developers focus on high-impact work.
Scale the Platform: Contribute to distributed training, scalable serving systems, and GPU/TPU-efficient architectures for ultra-low-latency developer tools.
Requirements:
2+ years of hands-on experience designing, training, and deploying machine-learning models
M.Sc. or higher in Computer Science, Mathematics, Statistics, or an equivalent field, or B.Sc. with strong hands-on ML experience
Practical experience with Natural Language Processing (NLP) and LLMs
Experience with data acquisition, data cleaning, and data pipelines
A passion for building products and helping people, both customers and colleagues
All-around team player and a fast, self-directed learner
Nice to have:
3+ years of development experience with a passion for excellence
Experience building AI coding assistants, code reasoning models, or dev-focused LLM agents
Familiarity with RAG, function-calling, and tool-using LLMs
Knowledge of model optimizations (quantization, distillation, LoRA, pruning)
Startup or product-driven ML experience, especially in high-scale, latency-sensitive environments
Contributions to open-source AI or developer tools
This position is open to all candidates.