
Commit d7228ba

Merge pull request #148 from rohitg00/main
Add Streaming AI chatbot using motia
2 parents: b235a60 + 8b49017

File tree

8 files changed (+8429 −0 lines)


streaming-ai-chatbot/.env.example

Lines changed: 6 additions & 0 deletions
```bash
# OpenAI Configuration - Required for AI responses
OPENAI_API_KEY=your-openai-api-key-here

# Azure OpenAI Configuration (commented out for demo)
# AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
# AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
```

streaming-ai-chatbot/.gitignore

Lines changed: 32 additions & 0 deletions
```
# Dependencies
node_modules/
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# Environment variables
.env
.env.local
.env.development.local
.env.test.local
.env.production.local

# Build outputs
dist/
build/
.motia/
.mermaid/

# IDE
.vscode/
.idea/
*.swp
*.swo

# OS
.DS_Store
Thumbs.db

# Logs
logs/
*.log
```

streaming-ai-chatbot/README.md

Lines changed: 157 additions & 0 deletions
# Streaming AI Chatbot

A minimal example demonstrating **real-time AI streaming** and **conversation state management** using the Motia framework.

![streaming-ai-chatbot](docs/images/streaming-ai-chatbot.gif)

## 🚀 Features

- **Real-time AI Streaming**: Token-by-token response generation using OpenAI's streaming API
- **Live State Management**: Conversation state updates in real time with full message history
- **Event-driven Architecture**: A clean API → Event → Streaming Response flow
- **Minimal Complexity**: Maximum impact with just three core files
## 📁 Architecture

```
streaming-ai-chatbot/
├── steps/
│   ├── conversation.stream.ts   # Real-time conversation state
│   ├── chat-api.step.ts         # Simple chat API endpoint
│   └── ai-response.step.ts      # Streaming AI response handler
├── package.json                 # Dependencies
├── tsconfig.json                # TypeScript configuration
└── README.md                    # This file
```
## 🛠️ Setup

### Installation & Setup

```bash
# Clone the repository
git clone https://github.com/patchy631/ai-engineering-hub.git
cd ai-engineering-hub/streaming-ai-chatbot

# Install dependencies
npm install

# Start the development server
npm run dev
```

### Configure OpenAI API

```bash
cp .env.example .env
# Edit .env and add your OpenAI API key
```

**Open Motia Workbench**:
Navigate to `http://localhost:3000` to interact with the chatbot.
## 🔧 Usage

### Send a Chat Message

**POST** `/chat`

```json
{
  "message": "Hello, how are you?",
  "conversationId": "optional-conversation-id"
}
```

`conversationId` is optional; if it is not provided, a new conversation is created.

**Response:**

```json
{
  "conversationId": "uuid-v4",
  "message": "Message received, AI is responding...",
  "status": "streaming"
}
```

The response entry updates as the AI processes the message, with possible `status` values:

- `created`: initial message state
- `streaming`: the AI is generating the response
- `completed`: the response is complete

When completed, the entry contains the full AI message instead of the placeholder text.
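As a convenience, the request/response cycle above can be exercised with a small client. This is a sketch only: the `sendChat` helper and the injectable `fetchImpl` parameter are illustrative names, not part of the example's code, and it assumes the dev server from the setup section is listening on `http://localhost:3000`.

```typescript
// Hypothetical client for the POST /chat endpoint described above.
// `sendChat` and `fetchImpl` are illustrative, not part of the example's code.
type ChatRequest = { message: string; conversationId?: string };

type ChatResponse = {
  conversationId: string;
  message: string;
  status: 'created' | 'streaming' | 'completed';
};

async function sendChat(
  baseUrl: string,
  body: ChatRequest,
  fetchImpl: typeof fetch = fetch, // injectable so the helper can be tested offline
): Promise<ChatResponse> {
  const res = await fetchImpl(`${baseUrl}/chat`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`chat request failed: ${res.status}`);
  return (await res.json()) as ChatResponse;
}
```

Because the endpoint replies immediately with `status: "streaming"`, the client gets its `conversationId` right away and can follow the conversation stream for the rest of the response.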
### Real-time State Updates

The conversation state stream provides live updates as the AI generates responses:

- **User messages**: stored immediately with `status: 'completed'`
- **AI responses**: start with `status: 'streaming'`, update in real time, and end with `status: 'completed'`
## 🎯 Key Concepts Demonstrated

### 1. **Streaming API Integration**

```typescript
const stream = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [...],
  stream: true, // Enable streaming
})

let fullResponse = ''
for await (const chunk of stream) {
  // Append each token and update state
  fullResponse += chunk.choices[0]?.delta?.content ?? ''
  await streams.conversation.set(conversationId, messageId, {
    message: fullResponse,
    status: 'streaming',
    // ...
  })
}
```
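The accumulation loop can be exercised without an API key by substituting a fake async-iterable for the OpenAI stream. A minimal sketch, assuming the standard chunk shape of OpenAI's streaming chat completions (`fakeStream` and `accumulate` are illustrative helpers, not part of the example's code):

```typescript
// Minimal chunk shape matching OpenAI's streaming chat completions.
type Chunk = { choices: { delta: { content?: string } }[] };

// Fake async-iterable standing in for the OpenAI stream (no API key needed).
async function* fakeStream(tokens: string[]): AsyncGenerator<Chunk> {
  for (const t of tokens) {
    yield { choices: [{ delta: { content: t } }] };
  }
}

// Accumulate tokens the way the handler above does; `onUpdate` stands in
// for the streams.conversation.set(...) call in the real step.
async function accumulate(
  stream: AsyncIterable<Chunk>,
  onUpdate: (partial: string) => void,
): Promise<string> {
  let fullResponse = '';
  for await (const chunk of stream) {
    fullResponse += chunk.choices[0]?.delta?.content ?? '';
    onUpdate(fullResponse);
  }
  return fullResponse;
}
```

Each `onUpdate` call sees a longer prefix of the final message, which is exactly what drives the token-by-token UI updates.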
### 2. **Real-time State Management**

```typescript
import { StreamConfig } from 'motia'
import { z } from 'zod'

export const config: StreamConfig = {
  name: 'conversation',
  schema: z.object({
    message: z.string(),
    from: z.enum(['user', 'assistant']),
    status: z.enum(['created', 'streaming', 'completed']),
    timestamp: z.string(),
  }),
  baseConfig: { storageType: 'default' },
}
```
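For consumers outside the Workbench, the same shape can be checked without Zod. A plain-TypeScript mirror of the schema above, for illustration only (the `isConversationMessage` guard is not part of the example's code):

```typescript
// Plain-TypeScript mirror of the Zod schema above (illustration only).
type ConversationMessage = {
  message: string;
  from: 'user' | 'assistant';
  status: 'created' | 'streaming' | 'completed';
  timestamp: string;
};

// Runtime guard equivalent to schema.safeParse(v).success.
function isConversationMessage(v: unknown): v is ConversationMessage {
  if (typeof v !== 'object' || v === null) return false;
  const m = v as Record<string, unknown>;
  return (
    typeof m.message === 'string' &&
    (m.from === 'user' || m.from === 'assistant') &&
    (m.status === 'created' || m.status === 'streaming' || m.status === 'completed') &&
    typeof m.timestamp === 'string'
  );
}
```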
### 3. **Event-driven Flow**

```typescript
// API emits event
await emit({
  topic: 'chat-message',
  data: { message, conversationId, assistantMessageId },
})

// Event handler subscribes and processes
export const config: EventConfig = {
  subscribes: ['chat-message'],
  // ...
}
```
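Motia wires this topic routing up for you; purely as a mental model, the emit/subscribe flow behaves like this in-memory sketch (the `subscribe` and `emitEvent` helpers are illustrative, not Motia APIs):

```typescript
// In-memory sketch of topic-based emit/subscribe (Motia provides the real wiring).
type Handler = (data: unknown) => void | Promise<void>;

const subscribers = new Map<string, Handler[]>();

// Register a handler for a topic, like a step's `subscribes` list.
function subscribe(topic: string, handler: Handler): void {
  const list = subscribers.get(topic) ?? [];
  list.push(handler);
  subscribers.set(topic, list);
}

// Deliver an event to every handler subscribed to its topic.
async function emitEvent(event: { topic: string; data: unknown }): Promise<void> {
  for (const handler of subscribers.get(event.topic) ?? []) {
    await handler(event.data);
  }
}
```

In the example, the chat API plays the emitter role and `ai-response.step.ts` plays the subscriber role, with the `chat-message` topic connecting them.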
## 🌟 Why This Example Matters

This example showcases Motia's power in just **3 files**:

- **Effortless streaming**: Real-time AI responses with automatic state updates
- **Type-safe events**: End-to-end type safety from API to event handlers
- **Built-in state management**: No external state libraries needed
- **Scalable architecture**: Event-driven design that grows with your needs

Perfect for demonstrating how Motia makes complex real-time applications simple and maintainable.
## 🔑 Environment Variables

- `OPENAI_API_KEY`: Your OpenAI API key (required)
- `AZURE_OPENAI_ENDPOINT`: Your Azure OpenAI endpoint URL (optional)
- `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key (optional)

## 📝 Notes

- Azure OpenAI integration code is included but commented out for demo purposes
- The example uses the `gpt-4o-mini` model for cost-effective responses
- All conversation data is stored in Motia's built-in state management
Lines changed: 12 additions & 0 deletions

```json
{
  "chat": {
    "steps/chat-api.step.ts": {
      "x": 300.82955664582096,
      "y": 61.25983698969445
    },
    "steps/ai-response.step.ts": {
      "x": 305.23563098529075,
      "y": 339.735086429314
    }
  }
}
```
