We are seeking an innovative and experienced Senior Software Developer to join our company's AI team. This team is pioneering a new wave of application security solutions focused on identifying risks in AI-powered software. If you're passionate about software security, skilled in backend engineering, and intrigued by the intersection of AI/LLMs and security, we want you on our team.
The company's AI product scans application source code to detect embedded AI/ML models, libraries, and frameworks, understand their behavior, and surface potential security risks, from data leakage to prompt injection vulnerabilities. This is a high-impact role that blends backend engineering with cutting-edge AI applications, helping organizations secure the next generation of software.
**Work in a hybrid model: Two days from the office, three days from home**
Responsibilities
Design and implement backend systems that scan source code, identify AI/ML components, and evaluate associated security risks.
Build intelligent detection logic to recognize large language models (LLMs), machine learning models, and custom pipelines in codebases.
Collaborate with security researchers, data scientists, and product managers to integrate AI risk models into our core platform.
Develop scalable and maintainable services in Java and Go.
Translate research prototypes into production-grade features, with attention to performance, resilience, and accuracy.
Conduct code reviews, maintain high development standards, and mentor team members.
Requirements
5+ years of professional experience in backend software development.
Deep knowledge of and hands-on experience with Java, Python, or Go (must).
Strong understanding of cloud-native architectures (AWS/GCP/Azure), distributed systems, and DevOps best practices.
Experience developing security products or tools, or a strong interest and understanding of application security.
Understanding of modern AI/LLM use cases in software (e.g., embedding models in services or integrating model APIs).
Hands-on with AI coding assistants such as CodeRabbit, Cursor, Copilot, or similar.
Knowledge of secure software development practices, including threat modeling and vulnerability mitigation.
Nice to Have
Experience working with or analyzing LLMs (e.g., GPT, LLaMA, Claude) in real-world applications.
Familiarity with AI security challenges like prompt injection, model leakage, and adversarial examples.
Familiarity with machine learning workflows, especially how models are trained, stored, and deployed in real-world applications.
Experience using tools such as LangChain, Hugging Face, OpenAI SDKs, or other AI ecosystems.
Previous work in the application/cloud security or DevSecOps domain.
This position is open to all candidates.