We are seeking a hands-on Applied AI Scientist to join our core R&D team and drive the development of next-generation AI systems for autonomous driving. This role sits at the intersection of applied research and deployment: you will go from reading papers to shipping production systems. You will work directly on our multi-layered autonomy architecture, with a primary focus on real-time predictive models for driving decisions.
This is a deep technical role for someone who thrives on turning cutting-edge research into real, working systems under hard constraints.
Responsibilities:
Own the research-to-deployment cycle for predictive driving models, from literature review and prototyping through production integration
Design, implement, and iterate on real-time predictive models, including vision-language models, motion prediction models, and inverse reinforcement learning approaches (e.g., imitation learning, reward recovery)
Collaborate on higher-level reasoning systems, contributing to vision-language-action models that handle complex edge cases and long-horizon planning
Bridge cloud-scale training and edge deployment: work on model compression, quantization, speculative decoding, and efficient inference for embedded automotive platforms
Evaluate and integrate state-of-the-art techniques from the broader AI research community into our autonomy stack
Collaborate closely with internal R&D teams to unblock technical challenges, accelerate delivery, and raise the overall technical bar
Requirements:
Ph.D. in Computer Science, Electrical Engineering, Machine Learning, Robotics, or a related field
Strong publication or deployment track record in one or more of: deep learning, computer vision, reinforcement learning, imitation learning, vision-language models, or motion prediction
Demonstrated ability to go from paper to working implementation: not just theory, but shipped systems
Strong coding skills in Python; experience with C++ is a plus
Familiarity with modern ML infrastructure: PyTorch, distributed training, model optimization
Solid mathematical foundations in probability, optimization, and statistics
Preferred Qualifications:
Experience with CUDA or low-level GPU optimization
Hands-on work with model quantization, distillation, or efficient inference on edge devices
Background in real-time, safety-critical, or embodied AI systems (robotics, autonomous vehicles, drones, etc.)
Experience with small language models (SLMs) or on-device deployment of foundation models
Familiarity with driving datasets, simulation environments, or sensor fusion pipelines
This position is open to all candidates.