Senior Performance and Scale Engineer - AI Engineering Tools
What you will do:
Define measurable KPIs / SLOs for throughput, latency, footprint, and cost across all AI Engineering Tools components.
Own and iterate on the performance roadmap, from micro‑benchmarks to multi‑cluster scale tests.
Champion a performance‑first engineering culture.
Formulate performance test plans and execute benchmarks to characterize performance, drive improvements, and detect regressions through data analysis and visualization.
Develop and maintain tools, scripts, and automated solutions that streamline performance benchmarking tasks.
Work closely with cross-functional engineering teams to identify and address performance issues. For example:
RAG: profile vector DBs (PGVector, Milvus) and embedding models, tune ANN indexes and cache paths.
Agentic/MCP: stress‑test agent orchestration graphs, reduce tail latency of multi‑step chains.
Llama Stack: measure performance and capacity.
Partner with DevOps to bake performance gates into GitHub Actions/OpenShift Pipelines.
Explore and experiment with emerging AI technologies relevant to software development, proactively identifying opportunities to incorporate new AI capabilities into existing workflows and tooling.
Triage field and customer escalations related to performance; distill findings into upstream issues and product backlog items.
Publish results, recommendations, and best practices through internal reports, presentations, external blogs, and official documentation.
Represent the team at internal and external conferences, presenting key findings and strategies.
Requirements:
4+ years in performance engineering or systems‑level software
Basic understanding of AI and LLMs
Hands‑on expertise with Kubernetes/OpenShift
Fluency in Python (data & ML) and strong Bash/Linux skills
Exceptional communication skills - able to translate raw performance numbers into customer value and executive narratives
Commitment to open‑source values
Nice to Haves:
Master's or PhD in Computer Science, AI, or a related field
History of upstream contributions and community leadership
Familiarity with performance observability stacks such as perf/eBPF tools, Nsight Systems, and PyTorch Profiler
Practical experience building agentic GenAI applications with orchestration frameworks such as LangChain, LangGraph, and MCP.
This position is open to all candidates.