We are looking for an AI Security Engineer.
What You'll Do:
Design and implement best-in-class security measures for the entire AI/ML lifecycle, from data pipelines and model training to deployment and inference.
Address risks like model theft, data leakage, prompt injection, adversarial attacks, model inversion, deepfake phishing, and misuse of LLMs or vector databases.
Lead threat modeling activities across AI/ML pipelines, RAG platforms, and use cases, and contribute to broader risk assessments and incident response strategies.
Partner with R&D, MLOps, DevSecOps, and SecOps to ensure secure and ethical practices in AI usage, procurement, and model governance. Work with Red Teams to simulate AI-enabled attacks and develop countermeasures.
Develop protections for internal AI agents and LangChain/RAG/AutoGen-based systems. Build and maintain libraries for prompt sanitization, input/output policy enforcement, and adversarial defense. Collaborate with developers to create layered prompt chains resilient to injection and context confusion.
Integrate anomaly detection for prompt payloads, fine-tuning data poisoning, and agent hallucinations, and monitor for AI-specific threats across environments.
Requirements:
5+ years in cybersecurity or application security roles.
Deep knowledge of AI security risks, including prompt injections, model theft, adversarial attacks, data leakage, and LLM abuse patterns.
Experience securing AI agent frameworks (e.g., AutoGen, LangChain, CrewAI) or AI-native apps.
Familiarity with generative AI tools (OpenAI, Claude, Hugging Face, etc.) and integration patterns in cloud/SaaS platforms.
Python proficiency, with the ability to build secure AI-related tooling.
Familiarity with threats like WormGPT, FraudGPT, or BlackMamba.
Nice-to-Have:
Experience with privacy-preserving ML (differential privacy, federated learning)
Experience with MLOps tools: Kubeflow, SageMaker, MLflow, etc.
Cybersecurity certifications (e.g., CISSP, OSCP) or ML credentials
This position is open to all candidates.