A powerful local alternative to ChatGPT built with Ollama and Chainlit, allowing you to run large language models completely on your own hardware with a clean chat interface.
- 🚀 100% Local: All processing happens on your machine with no data sent to external servers
- 🖼️ Vision Support: Upload and analyze images with multimodal LLMs
- 💬 Chat Interface: Clean, interactive UI powered by Chainlit
- 🔄 Streaming Responses: Real-time token streaming for a natural chat feel
- 🧠 Conversation Memory: Maintains context throughout your conversation
- ⚡ Fast Setup: Get running in minutes with simple installation steps
- 🛡️ Error Handling: Robust error management for a smooth user experience
- Python 3.9+
- Ollama installed and running
- At least 8GB RAM (16GB+ recommended for larger models)
- CUDA-compatible GPU recommended for best performance
- Start the Chainlit app: `chainlit run local_chatgpt.py`
- Open your browser and navigate to http://localhost:8000
- Start chatting with your local AI assistant!
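
If you are curious what `local_chatgpt.py` does under the hood, the sketch below shows one minimal way to wire Chainlit to Ollama with streaming responses and per-session memory. It is illustrative only, assumes the `chainlit` and `ollama` Python packages, and may differ from the actual file:

```python
import chainlit as cl
import ollama

MODEL_NAME = "granite3.2-vision"
SYSTEM_PROMPT = "You are a helpful assistant with vision capabilities."


@cl.on_chat_start
async def start():
    # Seed per-session conversation memory with the system prompt
    cl.user_session.set("history", [{"role": "system", "content": SYSTEM_PROMPT}])


@cl.on_message
async def on_message(message: cl.Message):
    history = cl.user_session.get("history")
    history.append({"role": "user", "content": message.content})

    reply = cl.Message(content="")
    # Stream tokens from the local Ollama server as they are generated
    stream = await ollama.AsyncClient().chat(
        model=MODEL_NAME, messages=history, stream=True
    )
    async for chunk in stream:
        await reply.stream_token(chunk["message"]["content"])
    await reply.send()

    # Remember the assistant's reply so later turns keep the context
    history.append({"role": "assistant", "content": reply.content})
```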
You can customize the application by editing the following variables at the top of local_chatgpt.py:
MODEL_NAME = "granite3.2-vision" # Change to your preferred model
SYSTEM_PROMPT = "You are a helpful assistant with vision capabilities. You can see and understand images."Available models depend on what you've pulled with Ollama. Some popular options:
llama3.2:70b-vision- Best overall performance with vision capabilitiesgemma:7b- Smaller, faster model with good performancellama3:8b- Good balance of speed and capabilitiesmixtral:8x7b- Excellent reasoning capabilities
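
Before pointing MODEL_NAME at a new model, it can help to confirm that model is available locally. A small, illustrative check with the `ollama` Python package (assuming the Ollama server is already running) could look like this:

```python
import ollama

MODEL_NAME = "granite3.2-vision"  # the model you plan to configure

# Download the model if it is not cached locally, then run a one-off
# prompt to confirm it loads. Requires a running Ollama server.
ollama.pull(MODEL_NAME)
response = ollama.chat(
    model=MODEL_NAME,
    messages=[{"role": "user", "content": "Reply with a short greeting."}],
)
print(response["message"]["content"])
```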
To use images in your conversation:
- Click the pin icon in the chat interface
- Select an image file
- Type your question about the image
- The model will analyze and respond to both your text and the image
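
Under the hood, attachments reach the model roughly like this: Chainlit exposes uploaded files on the incoming message, and Ollama's multimodal models accept image paths alongside the text. The snippet below is a sketch of that hand-off (attribute names such as `path` and `mime` reflect current Chainlit releases and may differ from the actual app):

```python
import chainlit as cl


@cl.on_message
async def on_message(message: cl.Message):
    # Conversation memory stored per session (seeded in on_chat_start)
    history = cl.user_session.get("history") or []

    # Collect the file paths of any image attachments on this message
    image_paths = [
        el.path
        for el in (message.elements or [])
        if el.path and "image" in (el.mime or "")
    ]

    user_turn = {"role": "user", "content": message.content}
    if image_paths:
        # Multimodal Ollama models read the images referenced here
        user_turn["images"] = image_paths
    history.append(user_turn)
    # ...the reply is then streamed exactly as in the earlier sketch
```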
Model loading errors:
- Ensure Ollama is running (`ollama serve`)
- Verify the model is downloaded (`ollama list`)
- Check your RAM/VRAM availability

Slow responses:
- Try a smaller model like `gemma:7b` or `llama3:8b`
- Ensure your GPU drivers are up-to-date if using CUDA
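
If it is unclear whether the Ollama server is reachable at all, a quick check with the `ollama` Python package against the default local server can help narrow things down; this is just a diagnostic sketch:

```python
import ollama

# Listing installed models fails fast if the Ollama server is not
# reachable at its default address (http://localhost:11434).
try:
    print(ollama.list())
except Exception as exc:
    print(f"Could not reach Ollama - is `ollama serve` running? ({exc})")
```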
This project is licensed under the MIT License - see the LICENSE file for details.
