At our company, we're building the financial infrastructure that powers global innovation. With our cutting-edge suite of embedded payments, cards, and lending solutions, we enable millions of businesses and consumers to transact seamlessly and securely. With 900+ employees worldwide and an R&D center of over 160 employees in Jerusalem, we're reshaping how financial technology is developed and delivered.
The Role:
We are looking for a backend engineer who can design, build, and operate highly reliable Node.js services on AWS that enable generative-AI capabilities across our products and internal workflows. You will create scalable APIs, data pipelines, and serverless architectures that integrate large language model (LLM) services such as Amazon Bedrock, OpenAI, and open-source models, enabling teams to safely and efficiently leverage generative AI.
Who You Are:
* You have experience building Retrieval-Augmented Generation (RAG) systems or knowledge-base chatbots.
* You're hands-on with vector databases such as Pinecone, Chroma, or pgvector on Postgres/Aurora.
* You hold an AWS certification (Developer, Solutions Architect, or Machine Learning Specialty).
* You have experience with observability tooling (Datadog, New Relic) and cost-optimization strategies for AI workloads.
* You have a background in microservices, domain-driven design, or event-sourcing patterns.
What You'll Actually Be Doing:
* Design and implement REST/GraphQL APIs in Node.js/TypeScript to serve generative-AI features such as chat, summarization, and content generation.
* Build and maintain AWS-native architectures using Lambda, API Gateway, ECS/Fargate, DynamoDB, S3, and Step Functions.
* Integrate and orchestrate LLM services (Amazon Bedrock, OpenAI, self-hosted models) and vector databases (Amazon Aurora pgvector, Pinecone, Chroma) to power Retrieval-Augmented Generation (RAG) pipelines.
* Create secure, observable, and cost-efficient infrastructure as code (CDK/Terraform) and automate CI/CD with GitHub Actions or AWS CodePipeline.
* Implement monitoring, tracing, and logging (CloudWatch, X-Ray, OpenTelemetry) to track latency, cost, and output quality of AI endpoints.
* Collaborate with ML engineers, product managers, and frontend teams in agile sprints; participate in design reviews and knowledge-sharing sessions.
* Establish best practices for prompt engineering, model evaluation, and data governance to ensure responsible AI usage.
Why You'll Love Working Here:
We're a collaborative, humble, and fast-paced team that takes pride in building real solutions to real problems. You'll work on meaningful products that empower businesses and people, alongside curious, supportive teammates who are always up for a challenge. Innovation and ownership are part of our DNA, and we're just getting started.
Next Step:
Hit Apply. Bring your AI vibes. We'll bring the challenge - and the snacks.
Requirements: What You Bring to the Table
* Available to work some US hours.
* Proficient in Hebrew and English, both written and verbal, sufficient for achieving consensus and success in a remote and largely asynchronous work environment - Must
* 4+ years of professional experience building production services with Node.js/TypeScript.
* 3+ years hands-on with AWS, including Lambda, API Gateway, DynamoDB, and at least one container service (ECS, EKS, or Fargate).
* Experience integrating third-party or cloud-native LLM services (e.g., Amazon Bedrock, OpenAI API) into production systems.
* Strong understanding of RESTful design, GraphQL fundamentals, and event-driven architectures (SNS/SQS, EventBridge).
* Proficiency with infrastructure as code (AWS CDK, Terraform, or CloudFormation) and CI/CD pipelines.
* Familiarity with secure coding, authentication/authorization patterns (Cognito, OAuth), and data privacy best practices for AI workloads.
Technical Environment:
* Languages: TypeScript, J
This position is open to all candidates.