Our company is an AI-first company revolutionizing how visual content is created. With a mission to bridge the gap between imagination and creation, we are dedicated to bringing cutting-edge technology to the creative and business spaces.
Our advanced AI photo and video generation models, including our open-source LTXV model, power our apps and platforms: Facetune, Photoleap, Videoleap, and LTX Studio. These let creators and brands leverage the latest research breakthroughs with extensive control over their creative potential. Our influencer marketing platform, Popular Pays, enables creators to monetize their work and offers brands opportunities to scale their content through tailored creator partnerships.
What you will be doing
As an ML Software Engineer focused on low-level and CUDA-based optimizations, you will play a key role in shaping the design, performance, and scalability of our company's machine learning inference systems. You'll work on deeply technical challenges at the intersection of GPU acceleration, systems architecture, and ML deployment.
Your expertise in CUDA, C/C++, and performance tuning will be crucial in enhancing runtime efficiency across heterogeneous computing environments. You'll collaborate with designers, researchers, and backend engineers to build production-grade ML pipelines optimized for latency, throughput, and memory use, contributing directly to the infrastructure powering our company's next-generation AI products.
This role is ideal for an engineer with strong systems-level thinking, deep familiarity with GPU internals, and a passion for pushing the boundaries of performance and efficiency in machine learning infrastructure.
Responsibilities
Design and implement highly optimized GPU-accelerated ML inference systems using CUDA and low-level parallelism techniques
Optimize memory, compute, and data flow to meet real-time or high-throughput constraints
Improve the performance, reliability, and observability of our inference backend across diverse compute targets (CPU/GPU)
Collaborate with cross-functional teams (including researchers, developers, and designers) to deliver efficient and scalable inference solutions
Contribute to ComfyUI and internal infrastructure to improve usability and performance of model execution flows
Investigate performance bottlenecks at all levels of the stack, from Python to kernel-level execution
Navigate and enhance a large, complex, production-grade codebase
Drive innovation in low-level system design to support future ML workloads
Requirements
5+ years of experience in high-performance software engineering
Advanced proficiency in CUDA, C/C++, and Python, especially in production environments
Deep understanding of GPU architecture, memory hierarchies, and optimization techniques
Proven track record of optimizing compute-intensive systems
Strong system architecture fundamentals, especially around performance, concurrency, and parallelism
Ability to independently lead deep technical investigations and deliver clean, maintainable solutions
Collaborative and team-oriented mindset, with experience working across functional teams
Preferred Requirements
Experience with low-level profiling and debugging tools (e.g., Nsight, perf, gdb, VTune)
Familiarity with machine learning frameworks (e.g., PyTorch, TensorRT, ONNX Runtime)
Contributions to performance-critical open-source or ML infrastructure projects
Experience with cloud infrastructure and GPU scheduling at scale
This position is open to all candidates.