Our Networking product security team is looking for an outstanding technical AI safety researcher with hands-on experience to help us improve the safety posture of AI systems and their infrastructure. In this role you will identify and reduce risks, threats, and vulnerabilities in our networking AI products.
What you'll be doing:
Help define AI development processes and ensure they meet safety standards.
Drive hands-on safety research on a wide range of AI and networking products.
Build tools and processes to expose weaknesses in AI systems and preempt threats.
Partner with cross-functional teams to understand their needs and implement solutions.
Serve as a technical focal point across multiple development and networking teams, providing hands-on AI safety and engineering expertise.
What we need to see:
Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related field (or equivalent experience).
5+ years of demonstrated experience in AI safety/security.
Proven Python programming expertise.
In-depth, hands-on understanding of NLP, LLMs, multimodal LLMs (MLLMs), generative AI, and RAG workflows.
Knowledge of AI vulnerabilities and effective mitigation strategies.
Experience with AI safety/security frameworks, compliance, and ethical standards.
Self-starter with a passion for growth and continuous learning who shares findings across the team.
Highly motivated and curious about new technologies.
Ways to stand out from the crowd:
Experience delivering software in a cloud context.
Hands-on experience building products, including infrastructure and system design.
Knowledge of MLOps technologies such as Docker, Kubernetes, and Helm.
Familiarity with ML libraries, especially PyTorch, TensorRT, and TensorRT-LLM.
This position is open to all candidates.