# Streaming AI Chatbot

A minimal example demonstrating **real-time AI streaming** and **conversation state management** using the Motia framework.

## 🚀 Features

- **Real-time AI Streaming**: Token-by-token response generation using OpenAI's streaming API
- **Live State Management**: Conversation state updates in real-time with message history
- **Event-driven Architecture**: Clean API → Event → Streaming Response flow
- **Minimal Complexity**: Maximum impact with just 3 core files

## 📁 Architecture

```
streaming-ai-chatbot/
├── steps/
│   ├── conversation.stream.ts   # Real-time conversation state
│   ├── chat-api.step.ts         # Simple chat API endpoint
│   └── ai-response.step.ts      # Streaming AI response handler
├── package.json                 # Dependencies
├── tsconfig.json                # TypeScript configuration
└── README.md                    # This file
```

## 🛠️ Setup

### Installation

```bash
# Clone the repository
git clone https://github.com/patchy631/ai-engineering-hub.git
cd ai-engineering-hub/streaming-ai-chatbot

# Install dependencies
npm install
```

### Configure OpenAI API

```bash
cp .env.example .env
# Edit .env and add your OpenAI API key
```

### Start the Development Server

```bash
npm run dev
```

**Open Motia Workbench**: Navigate to `http://localhost:3000` to interact with the chatbot.

## 🔧 Usage

### Send a Chat Message

**POST** `/chat`

```json
{
  "message": "Hello, how are you?",
  "conversationId": "optional-conversation-id"
}
```

`conversationId` is optional: if it is not provided, a new conversation is created.

**Response:**
```json
{
  "conversationId": "uuid-v4",
  "message": "Message received, AI is responding...",
  "status": "streaming"
}
```

The conversation state updates as the AI processes the message, with possible status values:
- `created`: Initial message state
- `streaming`: AI is generating the response
- `completed`: Response is complete with full message

Once the status reaches `completed`, the stored message contains the actual AI response instead of the placeholder text.

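For example, a minimal client call might look like this (a sketch that assumes the dev server from the setup above is running on `localhost:3000`):

```typescript
// Send a chat message to the local dev server (sketch; the port assumes
// the default Workbench setup described above).
const res = await fetch('http://localhost:3000/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ message: 'Hello, how are you?' }),
})

const { conversationId, status } = await res.json()
console.log(conversationId, status) // e.g. a fresh UUID and "streaming"
```

The returned `conversationId` can be passed back on subsequent requests to continue the same conversation.
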
### Real-time State Updates

The conversation state stream provides live updates as the AI generates responses (a sketch of the two writes follows this list):

- **User messages**: Immediately stored with `status: 'completed'`
- **AI responses**: Start with `status: 'streaming'`, update in real-time, and end with `status: 'completed'`

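In handler terms, this amounts to two writes against the conversation stream. A minimal sketch, assuming the `streams.conversation.set` call and the stream schema shown in the sections below (the message IDs and variable names are illustrative):

```typescript
// User message: final as soon as it arrives (IDs are illustrative).
await streams.conversation.set(conversationId, userMessageId, {
  message: userMessage,
  from: 'user',
  status: 'completed',
  timestamp: new Date().toISOString(),
})

// Assistant message: created empty, then filled in token by token.
await streams.conversation.set(conversationId, assistantMessageId, {
  message: '',
  from: 'assistant',
  status: 'streaming',
  timestamp: new Date().toISOString(),
})
```
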
## 🎯 Key Concepts Demonstrated

### 1. **Streaming API Integration**
```typescript
const stream = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [...], // conversation history
  stream: true, // Enable streaming
})

let fullResponse = ''
for await (const chunk of stream) {
  // Accumulate each token and update state as it arrives
  fullResponse += chunk.choices[0]?.delta?.content ?? ''
  await streams.conversation.set(conversationId, messageId, {
    message: fullResponse,
    status: 'streaming',
    // ...
  })
}
```

### 2. **Real-time State Management**
```typescript
import { StreamConfig } from 'motia'
import { z } from 'zod'

export const config: StreamConfig = {
  name: 'conversation',
  schema: z.object({
    message: z.string(),
    from: z.enum(['user', 'assistant']),
    status: z.enum(['created', 'streaming', 'completed']),
    timestamp: z.string(),
  }),
  baseConfig: { storageType: 'default' },
}
```

### 3. **Event-driven Flow**
```typescript
// API emits event
await emit({
  topic: 'chat-message',
  data: { message, conversationId, assistantMessageId },
})

// Event handler subscribes and processes
export const config: EventConfig = {
  subscribes: ['chat-message'],
  // ...
}
```

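Filled out, the subscribing step's config might look something like the sketch below; the step name and input schema are illustrative assumptions inferred from the emitted event's data shape, not the example's actual code:

```typescript
import { EventConfig } from 'motia'
import { z } from 'zod'

// Sketch of a fuller event-step config; 'AiResponse' and the input
// schema are illustrative, mirroring the emitted event's data fields.
export const config: EventConfig = {
  type: 'event',
  name: 'AiResponse',
  subscribes: ['chat-message'],
  emits: [],
  input: z.object({
    message: z.string(),
    conversationId: z.string(),
    assistantMessageId: z.string(),
  }),
}
```

Because the `input` schema mirrors the emitted `data`, the handler receives a typed payload, which is what gives the end-to-end type safety described below.
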
## 🌟 Why This Example Matters

This example showcases Motia's power in just **3 files**:

- **Effortless streaming**: Real-time AI responses with automatic state updates
- **Type-safe events**: End-to-end type safety from API to event handlers
- **Built-in state management**: No external state libraries needed
- **Scalable architecture**: Event-driven design that grows with your needs

Perfect for demonstrating how Motia makes complex real-time applications simple and maintainable.

## 🔑 Environment Variables

- `OPENAI_API_KEY`: Your OpenAI API key (required)
- `AZURE_OPENAI_ENDPOINT`: Your Azure OpenAI endpoint URL (optional)
- `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key (optional)

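A typical `.env` might look like this (placeholder values only; the Azure lines are needed only if you enable the commented-out Azure code path mentioned in the notes below):

```bash
OPENAI_API_KEY=sk-...
# Optional, for the Azure OpenAI code path:
# AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
# AZURE_OPENAI_API_KEY=your-azure-key
```
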
## 📝 Notes

- Azure OpenAI integration code is included but commented out for demo purposes
- The example uses the `gpt-4o-mini` model for cost-effective responses
- All conversation data is stored in Motia's built-in state management