
An MVP application showing how AI can automate a breakdown claim handling line


brucey31/breakdown_service_agent_AI


Insurance Co-pilot System

An intelligent insurance claims processing system that uses LLM agents to handle vehicle breakdown and accident claims through conversational AI.

Components

The system architecture consists of multiple interconnected components:

```mermaid
graph TB
    subgraph "Frontend"
        A[React UI] --> B[Elevenlabs Voice Interface]
    end

    subgraph "Backend"
        C[Chainlit Server] --> D[Conversational Agent]
        C --> E[Policy Decision Agent]
        F[FastAPI] --> G[Database]
    end

    subgraph "Data & Services"
        H[SQLite DB]
        I[Vector Store]
        J[Google Places]
        K[AWS Bedrock]
    end

    A -.->|WebSocket| C
    A -.->|REST| F
    D -.->|LLM| K
    E -.->|LLM| K
    E -.->|Search| I
    E -.->|Find| J
    G -.->|Store| H
```

Frontend Components

  • React UI Client: Modern React application with TypeScript
    • ClientView: Main interface for users to interact with the insurance agent.
    • ObserverView: Real-time monitoring interface for observing conversations saved in the SQLite database.
    • ElevenLabs: Provides speech-to-text and text-to-speech functionality.

Backend Services

  • Chainlit Backend: WebSocket-based conversational interface
  • FastAPI Endpoint: REST API for data retrieval and health checks
  • Database Persistence: SQLite-based message storage
  • Chroma Vector Store: Document retrieval and policy search
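
The message-persistence layer can be sketched with the standard-library `sqlite3` module. The table name and columns below are illustrative assumptions, not the repository's actual schema:

```python
import sqlite3


def init_db(path: str = ":memory:") -> sqlite3.Connection:
    """Create the (hypothetical) messages table if it does not exist."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS messages (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               session_id TEXT NOT NULL,
               role TEXT NOT NULL,          -- 'user' or 'assistant'
               content TEXT NOT NULL,
               created_at TEXT DEFAULT CURRENT_TIMESTAMP
           )"""
    )
    return conn


def save_message(conn: sqlite3.Connection, session_id: str, role: str, content: str) -> None:
    """Persist one turn of a conversation."""
    conn.execute(
        "INSERT INTO messages (session_id, role, content) VALUES (?, ?, ?)",
        (session_id, role, content),
    )
    conn.commit()


def get_messages(conn: sqlite3.Connection, session_id: str) -> list[tuple[str, str]]:
    """Return (role, content) pairs in insertion order for one session."""
    rows = conn.execute(
        "SELECT role, content FROM messages WHERE session_id = ? ORDER BY id",
        (session_id,),
    )
    return [(r[0], r[1]) for r in rows]
```

A store like this is what the ObserverView would poll (via the FastAPI endpoint) to replay conversations in real time.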

External Integrations

  • AWS Bedrock: LLM inference with Claude models
  • Google Places API: Location-based car repair shop discovery
  • ElevenLabs API: Advanced voice synthesis and speech recognition
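
As a sketch of the Bedrock integration, the request body below follows the Anthropic Messages format Bedrock expects; the transport is injected so the flow can be exercised without AWS credentials (in production it would wrap `boto3.client("bedrock-runtime").invoke_model`). This is an illustrative stand-in, not the repository's actual client code:

```python
import json
from typing import Callable


def build_bedrock_request(prompt: str, max_tokens: int = 3024, temperature: float = 0.8) -> str:
    """Serialise a request body in the Anthropic Messages format used by Bedrock."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    })


def invoke_llm(prompt: str, transport: Callable[[str], str]) -> str:
    """Send a prompt through an injected transport and extract the reply text.

    `transport` stands in for the real Bedrock invoke_model call.
    """
    body = build_bedrock_request(prompt)
    payload = json.loads(transport(body))
    return payload["content"][0]["text"]
```

Injecting the transport keeps the agents testable with a stub that returns a canned Messages-format response.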

LLM Agents

Agent Workflow

```mermaid
sequenceDiagram
    participant U as User
    participant CA as Conversational Agent
    participant PDA as Policy Decision Agent
    participant VS as Vector Store
    participant GP as Google Places
    participant DB as Database

    U->>CA: Initial message
    CA->>CA: Gather information
    CA->>U: Ask follow-up questions
    U->>CA: Provide details
    CA->>PDA: Pass structured data
    PDA->>VS: Search policy documents
    VS->>PDA: Return policy info
    PDA->>PDA: Make coverage decision
    PDA->>GP: Find nearest repair shop
    GP->>PDA: Return shop details
    PDA->>DB: Store decision
    PDA->>U: Send SMS notification
```

LLM Agent Overview

The system employs two specialized LLM agents that work in sequence to process insurance claims:
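
The two-stage hand-off can be reduced to a few lines. The three callables below are stand-ins for the real agents and SMS tool, injected so the pipeline shape is visible without any LLM calls:

```python
from typing import Any, Callable


def process_claim(
    message: str,
    gather: Callable[[str], dict],    # stands in for the Conversational Agent
    decide: Callable[[dict], dict],   # stands in for the Policy Decision Agent
    notify: Callable[[dict], Any],    # stands in for the SMS notification tool
) -> dict:
    """Sketch of the sequential pipeline: gather structured claim data,
    turn it into a coverage decision, then notify the customer."""
    claim = gather(message)
    decision = decide(claim)
    notify(decision)
    return decision
```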

1. Conversational Agent

Purpose: Initial customer interaction and information gathering

  • Model: Claude 3 Haiku
  • Temperature: 0.8
  • Max Tokens: 3024

Responsibilities: Opens the dialogue with the customer, asks follow-up questions, and extracts the structured claim details passed to the next agent. See conversational_agent_prompt.py for the detailed prompt strategy.

Structured Output Schema: InsuranceClaimOutput
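
A minimal sketch of what such a structured-output schema might look like, written as a dataclass. The field names are illustrative assumptions; the real `InsuranceClaimOutput` definition lives alongside the agent code:

```python
from dataclasses import dataclass, asdict
from typing import Optional


@dataclass
class InsuranceClaimOutput:
    """Hypothetical structured payload the Conversational Agent
    hands to the Policy Decision Agent."""
    policy_number: str
    claim_type: str               # e.g. "breakdown" or "accident"
    vehicle_location: str
    incident_description: str
    customer_phone: Optional[str] = None
```

Constraining the agent to a schema like this is what lets the Policy Decision Agent consume the conversation's outcome programmatically rather than parsing free text.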

2. Policy Decision Agent

Purpose: Policy coverage assessment and claim processing decisions

  • Model: Claude 3.7 Sonnet
  • Temperature: 0.9
  • Max Tokens: 3024

Tools Available:

  • search_for_policy_detail: Vector store search for policy documents
  • find_nearest_car_repair_shop: Google Places API integration

Responsibilities: Assesses coverage against the policy documents, finds the nearest repair shop, stores the decision, and notifies the customer. See policy_decision_agent_prompt.py for the detailed prompt strategy.

Structured Output Schema: InsurancePolicyDecisionOutput

Tools: search_for_policy_detail, find_nearest_car_repair_shop, create_sms_user_notification
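
To illustrate the shape of the first tool, here is a toy stand-in for `search_for_policy_detail` that ranks policy snippets by word overlap with the query. The real tool embeds the query and runs a nearest-neighbour search against the Chroma vector store; this keyword version only mimics its interface:

```python
def search_for_policy_detail(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Toy policy lookup: rank snippets by how many query words they share.
    Stands in for the Chroma vector-store similarity search."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]
```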

Tool Integration

This multi-agent architecture ensures thorough information gathering, accurate policy assessment, and appropriate service delivery for insurance claims processing.
