
A banking chatbot with semantic routing and slot-filling orchestration. Built with FastAPI, LangGraph, RedisVL Semantic Router, LangChain tools, and Next.js frontend.


Banking Agent Demo with Semantic Routing

The virtual banking agent demonstrates how semantic routing can intelligently route queries to the right tool based on the meaning of the user's query, without relying on an expensive model for every request. This saves token costs and reduces latency.


Demo Objectives

  • Demonstrate semantic intent routing using RedisVL
  • Showcase Redis message history for contextual chat
  • Show agentic orchestration with LangGraph
  • Illustrate tool execution using LangChain tools

Setup

Dependencies

  • Python 3.11+
  • Node.js 18+
  • Docker (for Redis Stack)

Configuration

  1. Clone the repository:
git clone <repository-url>
cd banking-agent-semantic-routing-demo
  2. Create a .env file in the project root:
OPENAI_API_KEY=your_openai_api_key_here
REDIS_URL=redis://localhost:6380
HISTORY_INDEX=bank:msg:index
HISTORY_NAMESPACE=bank:chat
HISTORY_TOPK_RECENT=8
HISTORY_TOPK_RELEVANT=6
HISTORY_DISTANCE_THRESHOLD=0.35
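These settings can be read with the standard library's os.getenv, falling back to the values above when a variable is unset. This is a sketch of such a loader, not the project's actual config module:

```python
import os

# Sketch of a config loader; defaults mirror the example .env above.
REDIS_URL = os.getenv("REDIS_URL", "redis://localhost:6380")
HISTORY_INDEX = os.getenv("HISTORY_INDEX", "bank:msg:index")
HISTORY_NAMESPACE = os.getenv("HISTORY_NAMESPACE", "bank:chat")
HISTORY_TOPK_RECENT = int(os.getenv("HISTORY_TOPK_RECENT", "8"))
HISTORY_TOPK_RELEVANT = int(os.getenv("HISTORY_TOPK_RELEVANT", "6"))
HISTORY_DISTANCE_THRESHOLD = float(os.getenv("HISTORY_DISTANCE_THRESHOLD", "0.35"))
```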

Running the Demo

Option 1: Docker Setup (Recommended)

# Start all services with Docker
docker-compose up --build

# Access the application
# Frontend: http://localhost:3000
# Backend: http://localhost:8000
# RedisInsight: http://localhost:8001

Option 2: Manual Setup

1. Install Python Dependencies

# Create virtual environment with Python 3.11
python3.11 -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

2. Start Redis Stack

# Option A: Docker (Recommended)
docker run -d --name redis-stack -p 6380:6379 -p 8001:8001 redis/redis-stack:latest

# Option B: Homebrew (macOS)
brew tap redis-stack/redis-stack
brew install redis-stack
redis-stack-server --daemonize yes

3. Run the Backend

# Make sure virtual environment is activated
source .venv/bin/activate

# Start FastAPI server
python3 -m uvicorn main:app --reload --port 8000

Backend will be available at http://localhost:8000

4. Run the Frontend

cd nextjs-app
npm install
npm run dev

Frontend will be available at http://localhost:3000


Architecture

  • Semantic Routing (RedisVL): Routes queries to appropriate banking intents (loans, cards, FD, forex, etc.)
  • Slot-Filling Orchestration (LangGraph): Manages conversation state and collects required information
  • Tool Execution (LangChain): Executes banking operations (EMI calculation, card recommendations, etc.)
  • Modern Frontend (Next.js 14 + TypeScript + Tailwind): Responsive banking UI with chat interface
  • Conversation Memory (RedisVL MessageHistory): Structured conversation tracking
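In RedisVL, each intent is a route defined by reference phrases; an incoming query is embedded and matched to the nearest route by vector distance. The idea can be illustrated with a toy, dependency-free sketch, where hand-picked vectors stand in for real embeddings (the actual router uses a model's embeddings and a Redis vector index):

```python
import math

# Toy illustration of semantic routing: each intent owns reference vectors,
# and a query is routed to the intent whose reference is most similar.
# These 3-d vectors are stand-ins for real embeddings.
ROUTES = {
    "loan":  [[1.0, 0.0, 0.0]],   # hypothetical embedding of "I need a loan"
    "card":  [[0.0, 1.0, 0.0]],   # "recommend a credit card"
    "forex": [[0.0, 0.0, 1.0]],   # "exchange rate for USD"
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def route(query_vec, threshold=0.7):
    """Return (intent, score) for the best route, or (None, score) when
    no reference clears the similarity threshold (i.e. no-match fallback)."""
    best_intent, best_score = None, -1.0
    for intent, refs in ROUTES.items():
        for ref in refs:
            score = cosine(query_vec, ref)
            if score > best_score:
                best_intent, best_score = intent, score
    if best_score < threshold:
        return None, best_score
    return best_intent, best_score
```

A query vector close to the "loan" reference routes to loan; an ambiguous vector equidistant from all routes falls below the threshold and returns no intent, which is when the agent would ask a clarifying question.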

Architecture Flow

User Query
    ↓
[Semantic Router] → Intent + Confidence + Required Slots
    ↓
[Parse Slots] → Extract values from text using LLM
    ↓
[Decide Next]
    ├→ Missing slots? → Ask follow-up question
    └→ All slots filled? → Call Tool
         ↓
    [Tool Execution] → Calculate/Recommend/Search
         ↓
    [Summarize] → Format response with bullets
         ↓
    Response to User
         ↓
    [Feedback System] → User rates helpfulness
         ↓
    [Memory Management] → Clear conversation if helpful
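The "Decide Next" step above reduces to a simple check: if any required slot is still empty, ask a follow-up; otherwise call the tool. A minimal sketch (the slot names and question table are illustrative, not the project's actual schema):

```python
# Minimal sketch of the "Decide Next" node: ask for the first missing slot,
# or signal that the tool can run once every required slot is filled.
FOLLOW_UPS = {  # hypothetical follow-up questions per slot
    "loan_amount": "What loan amount are you looking for?",
    "tenure_months": "Over how many months would you like to repay?",
}

def decide_next(required, filled):
    missing = [slot for slot in required if slot not in filled]
    if missing:
        return {"action": "ask", "pending": missing,
                "reply": FOLLOW_UPS.get(missing[0], f"Please provide {missing[0]}.")}
    return {"action": "call_tool", "pending": [], "slots": filled}
```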

API Endpoints

POST /chat

Intelligent chat endpoint with semantic routing and slot-filling.

Request Body:

{
  "userId": "optional_user_id",
  "sessionId": "optional_session_id", 
  "text": "I need a loan",
  "meta": {}
}

Response:

{
  "reply": "What loan amount are you looking for?",
  "pending": ["loan_amount", "tenure_months"],
  "router": {
    "intent": "loan",
    "confidence": "high",
    "score": 0.92
  },
  "proposal": null,
  "showFeedback": false,
  "model": "gpt-3.5-turbo"
}

POST /chat/feedback

User feedback endpoint for conversation management.

Request Body:

{
  "sessionId": "session_xyz",
  "helpful": true
}

Response:

{
  "ok": true,
  "message": "Thank you! Conversation cleared for a fresh start.",
  "cleared": true
}
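Behind this endpoint, a helpful rating clears the session's conversation state so the next query starts fresh. A dependency-free sketch of that logic, with an in-memory dict standing in for the Redis-backed message history:

```python
# Sketch of the feedback flow: a helpful rating wipes the session's state.
# In the demo this state lives in Redis; a plain dict stands in here.
sessions = {"session_xyz": {"history": ["I need a loan"]}}

def handle_feedback(session_id, helpful):
    if helpful and session_id in sessions:
        sessions.pop(session_id)
        return {"ok": True, "cleared": True,
                "message": "Thank you! Conversation cleared for a fresh start."}
    return {"ok": True, "cleared": False,
            "message": "Thanks for the feedback."}
```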

Example Conversations

Loan EMI Calculation

User: I need a personal loan
Assistant: What loan amount are you looking for?
User: 5 lakhs for 3 years  
Assistant: Your EMI will be ₹16,134 per month for 36 months.
- Monthly EMI: ₹16,134
- Total Amount Payable: ₹5,80,832
- Total Interest: ₹80,832
- Principal: ₹5,00,000

Was this helpful? [Yes] [No]
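The EMI above follows the standard amortization formula EMI = P * r * (1+r)^n / ((1+r)^n - 1), where P is the principal, r the monthly rate, and n the tenure in months. A sketch that reproduces the conversation's figures, assuming a 10% annual rate (the demo output does not state its rate; 10% is an assumption that happens to match):

```python
def emi(principal: float, annual_rate_pct: float, months: int) -> float:
    """Standard amortized EMI: P * r * (1+r)^n / ((1+r)^n - 1)."""
    r = annual_rate_pct / 100 / 12          # monthly interest rate
    factor = (1 + r) ** months
    return principal * r * factor / (factor - 1)

# 5 lakhs over 36 months at an assumed 10% annual rate
monthly = emi(500_000, 10.0, 36)
print(round(monthly))  # prints 16134, matching the EMI above
```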

Credit Card Recommendation

User: I want a credit card
Assistant: What is your annual income?
User: 8 lakhs per year
Assistant: Based on your income of ₹8,00,000, we recommend the DemoBank Travel Elite.
- Annual Fee: ₹2,999
- 5X rewards on travel
- Airport lounge access
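The recommendation step can be modeled as a lookup over income tiers. A sketch with hypothetical tiers (the thresholds and card names other than DemoBank Travel Elite are illustrative only):

```python
# Hypothetical income tiers for card recommendation; thresholds and the
# other card names are illustrative, not the demo's actual catalog.
CARD_TIERS = [
    (1_500_000, "DemoBank Platinum Rewards"),
    (600_000, "DemoBank Travel Elite"),
    (0, "DemoBank Everyday Cashback"),
]

def recommend_card(annual_income: int) -> str:
    """Return the first card whose minimum income the applicant meets."""
    for min_income, card in CARD_TIERS:
        if annual_income >= min_income:
            return card
    return CARD_TIERS[-1][1]

print(recommend_card(800_000))  # prints "DemoBank Travel Elite"
```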

Testing

Test Router

python3 router_bank.py

Test Orchestrator

python3 orchestrator.py

Test API

curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"text": "I need a personal loan"}'

Test System

python3 test_system.py

License

This project is licensed under the MIT License.
