Below are steps to get the Chatbot and Document Manager running.

## Quick Start

The fastest way to get started is using Docker Compose with LiteLLM:

```bash
# Clone the repository
git clone https://github.com/jasonacox/TinyLLM.git
cd TinyLLM/chatbot/litellm

# Edit the configuration files for your setup
nano compose.yaml  # Configure your models and API keys
nano config.yaml   # Set up LLM providers (OpenAI, local models, etc.)

# Launch the complete stack
docker compose up -d
```

This will start:
- **Chatbot** at http://localhost:5000
- **LiteLLM Dashboard** at http://localhost:4000/ui
- **PostgreSQL** database for usage tracking
- **SearXNG** search engine at http://localhost:8080
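Once the stack is up, a quick way to confirm the services are healthy is with standard Docker Compose commands (these are generic Compose commands, not TinyLLM-specific):

```shell
# List the services in the stack and their status
docker compose ps

# Tail the chatbot logs if something looks wrong
docker compose logs -f chatbot
```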

### Alternative: Docker Only

If you prefer to run just the chatbot with a local LLM:

```bash
# Create the configuration directory
mkdir -p .tinyllm

# Run with your local LLM endpoint
docker run -d \
  -p 5000:5000 \
  -e OPENAI_API_BASE="http://localhost:8000/v1" \
  -e OPENAI_API_KEY="your-api-key" \
  -v $PWD/.tinyllm:/app/.tinyllm \
  --name chatbot \
  jasonacox/chatbot
```

Visit http://localhost:5000 to start chatting!
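If the page is not up immediately, the container may still be initializing. A small retry helper you could use in a launch script might look like this (the `wait_for` function is an illustrative sketch, not part of TinyLLM):

```shell
# Retry a command up to N times, one second apart; succeed on the first pass
wait_for() {
  local tries="$1"; shift
  local i
  for i in $(seq 1 "$tries"); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

# Example: wait up to 30 seconds for the chatbot to answer over HTTP
# wait_for 30 curl -fsS -o /dev/null http://localhost:5000
```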

## Chatbot

The Chatbot can be launched as a Docker container or via command line.
Below are the main environment variables you can set to configure the TinyLLM Chatbot:

| `PROMPT_RO` | false | Enable read-only prompts |
| `SEARXNG` | http://localhost:8080 | SearxNG URL for web search |
| `WEB_SEARCH` | false | Enable web search for all queries |
| `IMAGE_PROVIDER` | swarmui | Image generation provider (swarmui or openai) |
| `SWARMUI` | http://localhost:7801 | SwarmUI host URL for image generation |
| `IMAGE_MODEL` | OfficialStableDiffusion/sd_xl_base_1.0 | SwarmUI image model to use |
| `IMAGE_CFGSCALE` | 7.5 | CFG scale for SwarmUI image generation |
| `IMAGE_STEPS` | 20 | Steps for SwarmUI image generation |
| `IMAGE_SEED` | -1 | Seed for SwarmUI image generation |
| `IMAGE_TIMEOUT` | 300 | Timeout for image generation (seconds) |
| `OPENAI_IMAGE_MODEL` | dall-e-3 | OpenAI image model (dall-e-2 or dall-e-3) |
| `OPENAI_IMAGE_SIZE` | 1024x1024 | OpenAI image size |
| `OPENAI_IMAGE_QUALITY` | standard | OpenAI image quality (standard or hd) |
| `OPENAI_IMAGE_STYLE` | vivid | OpenAI image style (vivid or natural) |
| `IMAGE_WIDTH` | 1024 | Image width |
| `IMAGE_HEIGHT` | 1024 | Image height |
| `REPEAT_WINDOW` | 200 | Window size for repetition detection |
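As an illustration, several of the image-related variables above could be passed to the Docker container at launch (the key and values below are placeholders, not defaults you must use):

```shell
# Hypothetical example: run the chatbot with OpenAI image generation enabled
docker run -d \
  -p 5000:5000 \
  -e IMAGE_PROVIDER="openai" \
  -e OPENAI_API_KEY="your-openai-api-key" \
  -e OPENAI_IMAGE_MODEL="dall-e-3" \
  -e OPENAI_IMAGE_QUALITY="standard" \
  --name chatbot \
  jasonacox/chatbot
```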
Some RAG (Retrieval Augmented Generation) features, including:

/model [LLM_name]             # Display or select LLM model to use (dialogue popup)
/search [opt:number] [prompt] # Search the web to help answer the prompt
/intent [on|off]              # Activate intent router to automatically run above functions
/image [prompt]               # Generate an image based on the prompt
```

See the [rag](../rag/) folder for more details about RAG.
### Image Generation

The chatbot supports image generation through two providers:

1. **SwarmUI** (default) - Local image generation using Stable Diffusion models
2. **OpenAI** - Cloud-based image generation using DALL-E models

#### SwarmUI Configuration

```bash
export IMAGE_PROVIDER="swarmui"
export SWARMUI="http://localhost:7801"
export IMAGE_MODEL="OfficialStableDiffusion/sd_xl_base_1.0"
```

#### OpenAI Configuration

```bash
export IMAGE_PROVIDER="openai"
export OPENAI_API_KEY="your-openai-api-key"
export OPENAI_IMAGE_MODEL="dall-e-3"
```

See [IMAGE_CONFIG.md](IMAGE_CONFIG.md) for complete configuration options.
213
287
### Example Session
214
288
215
289
The examples below use a Llama 2 7B model served up with the OpenAI API compatible [ llmserver] ( https://github.com/jasonacox/TinyLLM/tree/main/llmserver ) on an Intel i5 system with an Nvidia GeForce GTX 1060 GPU.