We're building the financial infrastructure that powers global innovation. With our cutting-edge suite of embedded payments, cards, and lending solutions, we enable millions of businesses and consumers to transact seamlessly and securely. With 900+ employees worldwide and an R&D center of over 160 employees in Jerusalem, we're reshaping how financial technology is developed and delivered.
The Role:
As AI capabilities accelerate across the bank, we need an engineer to design and enforce safe AI usage - protecting customer data, preserving model integrity, and meeting our regulatory obligations. You'll be the architect of guardrails, tooling, and policies that make AI both secure and useful for product and internal teams. This isn't about slowing things down; it's about building the trust layer that lets innovation move fast without breaking things.
Who You Are:
You're a security engineer who's excited about the AI wave - someone who sees GenAI and LLMs as fascinating puzzles to secure, not just threats to mitigate. You've spent 5+ years in Security Engineering, AppSec, or Cloud Security, and at least 1-2 of those years getting your hands dirty with AI/ML or data-intensive systems. You're as comfortable dissecting a prompt injection attack as you are writing a Terraform module or shipping a Python library. You know your way around AWS and/or Azure and modern app stacks (Python/TypeScript, REST/gRPC, containers/Kubernetes), and you can translate security requirements into developer-friendly tooling - not just PDF policies that gather dust. You communicate clearly in English and Hebrew, thrive in regulated environments, and understand that security in financial services means mapping controls to frameworks like FFIEC, SOC 2, and PCI DSS - and actually having the evidence to prove it.
What You'll Actually Be Doing:
* Design enterprise AI guardrails across Azure and AWS (e.g., Azure AI Studio/Azure OpenAI, Amazon Bedrock/SageMaker): content filtering, PII redaction, prompt/response validation, and policy enforcement services.
* Implement data minimization controls for GenAI/RAG workloads: context filtering, least-privileged retrieval, document-level ACL enforcement, vector store hardening, and secure token/secret handling.
* Threat model AI systems (apps, agents, RAG, fine-tuning pipelines) using frameworks like STRIDE and the OWASP Top 10 for LLM Apps; define misuse scenarios (prompt injection/jailbreaks/data exfiltration) and build mitigations.
* Build monitoring and telemetry: privacy-preserving prompt/response logging, sensitive-data detection, safety/eval dashboards, drift/abuse signals, and incident hooks into our SIEM.
* Integrate AI security into the SDLC: reusable libraries, pre-commit checks, CI/CD gates, policy-as-code, and secure-by-default reference architectures for product teams.
* Evaluate third-party AI vendors and internal apps: security reviews, data residency and retention requirements, SSO/SCIM integrations, DPA/TPRM inputs, and continuous control testing.
* Partner across Security, Data, Privacy, and Engineering to map AI controls to FFIEC, SOC 2, and PCI DSS; document control evidence for audits.
* Lead/participate in AI red-teaming: automated jailbreak/prompt-injection tests, safety benchmarks, purple-team exercises, and response playbooks for AI incidents.
* Enable the org with concise guidelines, examples, and training on safe AI development and usage.
Why You'll Love Working Here:
* Flexible hybrid work model: three days a week at our Jerusalem office
* Monthly wellness reimbursement - from therapy to gel manicure, it's up to you
* Full Keren Hishtalmut, private health and dental insurance
* Volunteer days, donation matching, Yoga and Pilates
* A supportive, collaborative culture that puts our people first
Next Step:
Hit Apply!
Requirements: What You Bring to the Table
* 5+ years of experience in Security Engineering, AppSec, or Cloud Security
This position is open to all candidates.