This is an unofficial project and is not affiliated with or endorsed by LM Studio.
A modern, open-source web interface for interacting with local LLMs via LM Studio over your local network. Built with React, TypeScript, FastAPI, and Docker.
Download LM Studio
This project was primarily vibe-coded using generative AI through Cursor IDE. It came together through rapid experimentation, quick iterations, and a healthy dose of curiosity. This process reflects my ongoing work exploring how AI can accelerate real-world problem solving — turning rough ideas into working solutions in a fraction of the usual time.
- Modern Web UI - Clean, responsive interface with light/dark theme support
- LM Studio Integration - Seamless connection to your local LM Studio server
- Persona Management - Create and manage custom AI personas with system prompts
- Chat History - Persistent conversation management with auto-generated chat names
- Context-Aware Conversations - Configurable message context for continuing chats
- Model Refresh - Dynamically fetch available models from LM Studio
- Copy to Clipboard - Easy response copying with visual feedback
- Docker Ready - One-command deployment with Docker Compose
- LAN Access - Accessible across your local network
- Fast & Lightweight - Built with modern web technologies



I built this application to make better use of my homelab setup. My older gaming laptop has a solid GPU and runs LM Studio quickly and reliably, but it doesn’t have the spare resources to also handle Docker and everything else I run on it. Instead of using solutions like Ollama or OpenWebUI, I chose to dedicate the laptop to running LM Studio alone.
Meanwhile, my homelab has plenty of capacity for a VM running Docker. This application bridges the two: it lets me run LM Studio on the GPU-equipped laptop while hosting a clean, Dockerized interface on my LAN.
If you have a similar setup, you can do the same—fast local inference on your GPU machine, with a lightweight and accessible interface served from your homelab.
Looking for a more lightweight solution? Try YorkieDev's LM Studio Chat WebUI (unofficial).
- React 18 with TypeScript
- Vite for fast development and building
- Tailwind CSS for styling
- Lucide React for icons
- Axios for API communication
- FastAPI with Python 3.11
- SQLModel (SQLAlchemy + Pydantic) for database operations (see the model sketch after this list)
- SQLite for data persistence
- httpx for LM Studio API communication
- Uvicorn ASGI server
- Docker & Docker Compose for containerization
- Nginx for frontend serving
- SQLite with persistent volumes
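To give a feel for how SQLModel and SQLite fit together here, the sketch below defines hypothetical chat and message tables and creates them in a local SQLite file. The field names and database path are assumptions for illustration, not the project's actual schema.

```python
from typing import Optional
from sqlmodel import Field, SQLModel, create_engine

class Chat(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str  # e.g. auto-generated from the first message

class Message(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    chat_id: int = Field(foreign_key="chat.id")
    role: str      # "user" or "assistant"
    content: str

# Hypothetical database path; the real app persists SQLite via a Docker volume.
engine = create_engine("sqlite:///app.db")
SQLModel.metadata.create_all(engine)
```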
- Docker and Docker Compose installed
- LM Studio running on your machine or network
```bash
# Clone the repository
git clone <repository-url>
cd docker-simple-LMStudio-web-ui

# Create your environment file
cd infrastructure
cp env.example .env
cd ..

# Start the application (from the project root)
make up

# Or manually:
cd infrastructure
docker compose up -d --build
```
Open your browser and navigate to:
- Web UI: http://localhost:5173
- API Docs: http://localhost:8001/docs
- Open the web UI and click the settings (⚙️) icon
- Set your LM Studio Base URL (e.g., `http://192.168.1.10:1234/v1`)
- Click "Refresh Models" to load available models
- Save your settings

The application defaults to `http://192.168.4.70:1234/v1` if no custom URL is set.
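If you want to sanity-check a URL before saving it, you can query LM Studio's OpenAI-compatible models endpoint directly. A minimal sketch using httpx (the address is an example only; substitute your own):

```python
import httpx

# Example address only; replace with the base URL of your LM Studio server.
BASE_URL = "http://192.168.1.10:1234/v1"

resp = httpx.get(f"{BASE_URL}/models", timeout=5.0)
resp.raise_for_status()
for model in resp.json().get("data", []):
    print(model["id"])  # model IDs you can select in the UI
```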
The interface now features a modern chat layout with:
- Left Sidebar: Chat history panel showing all your conversations
- Main Area: Current conversation with message bubbles
- Input Area: Message input at the bottom
- Click the "+" button in the chat history panel
- Select a model from the dropdown
- Optionally choose a persona
- Type your message and click "Send"
- The chat will be automatically saved with an auto-generated name
- Click on any chat in the left sidebar to load it
- The conversation history will be displayed
- Continue typing - previous context will be included automatically
- Configure context length in Settings (default: 5 previous messages)
- Auto-Generated Names: Chats are named based on your first message (one plausible approach is sketched below)
- Delete Chats: Click the trash icon next to any chat to delete it
- Persistent Storage: All conversations are saved and survive app restarts
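The project doesn't document exactly how names are generated, but one plausible approach is simply deriving a short title from the first user message, along these lines:

```python
def generate_chat_name(first_message: str, max_len: int = 40) -> str:
    """Derive a chat title from the first user message (illustrative, not the project's exact logic)."""
    name = " ".join(first_message.split())  # collapse whitespace and newlines
    return name if len(name) <= max_len else name[: max_len - 1].rstrip() + "…"

print(generate_chat_name("How do I configure LM Studio for LAN access from my homelab?"))
```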
- Open Settings (⚙️ icon)
- Scroll to "Persona Manager"
- Click "Add Persona" to create custom AI personalities
- Edit or delete existing personas as needed
- System Message Priority: Persona system messages are always sent first (illustrated in the request sketch below)
- Open Settings (⚙️ icon)
- Set "Number of Previous Messages to Send as Context" (0-20)
- Default is 5 messages for optimal performance
- Set to 0 to disable context (each message is independent)
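Putting the persona and context settings together: LM Studio exposes an OpenAI-compatible chat completions endpoint, and the documented behavior (persona system message first, then the last N messages, then your new message) maps onto the standard messages array. The snippet below is a sketch of that request shape, not the project's actual backend code; the model ID and URL are placeholders.

```python
import httpx

def build_messages(persona_prompt: str | None, history: list[dict], new_message: str,
                   context_count: int = 5) -> list[dict]:
    """Persona system prompt first, then the last `context_count` messages, then the new one."""
    messages = []
    if persona_prompt:
        messages.append({"role": "system", "content": persona_prompt})
    if context_count > 0:
        messages.extend(history[-context_count:])
    messages.append({"role": "user", "content": new_message})
    return messages

payload = {
    "model": "your-model-id",  # placeholder; use a model loaded in LM Studio
    "messages": build_messages(
        "You are a concise assistant.",
        [{"role": "user", "content": "Hi"},
         {"role": "assistant", "content": "Hello! How can I help?"}],
        "Summarize our chat so far.",
    ),
}
resp = httpx.post("http://192.168.1.10:1234/v1/chat/completions", json=payload, timeout=60.0)
print(resp.json()["choices"][0]["message"]["content"])
```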
Click the copy icon next to any response to copy it to your clipboard. Works in both light and dark themes.
Click the sun/moon icon in the top-right to switch between light and dark themes. Your preference is saved automatically.
The application is configured to be accessible across your local network:
- Frontend: Available on all network interfaces (0.0.0.0:5173)
- Backend: Available on all network interfaces (0.0.0.0:8001)
- CORS: Configured to allow requests from any origin
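In a FastAPI app, allow-any-origin CORS is typically enabled with the built-in CORSMiddleware; the backend is presumably configured along these lines (a sketch, not the project's exact code):

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow requests from any origin; acceptable for a LAN-only deployment,
# but tighten this if you ever expose the service more widely.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_methods=["*"],
    allow_headers=["*"],
)
```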
To run the backend locally for development:

```bash
cd backend
pip install -r requirements.txt
uvicorn app.main:app --reload --host 0.0.0.0 --port 8001
```

Important: the backend server must be run from the `backend` directory, not the project root.
To run the frontend dev server:

```bash
cd frontend
npm install
npm run dev
```
Common `make` targets (run from the project root):

```bash
make up       # Start the application
make down     # Stop the application
make logs     # View logs
make rebuild  # Rebuild and restart
make fmt      # Format Python code
make lint     # Lint Python code
make test     # Run tests
```
- `GET /api/healthz` - Health check
- `GET /api/settings` - Get current settings (includes context message count)
- `PUT /api/settings` - Update settings (LM Studio URL and context count)
- `GET /api/models` - List available models
- `POST /api/models/refresh` - Refresh model list
- `GET /api/personas` - List personas
- `POST /api/personas` - Create persona
- `PUT /api/personas/{id}` - Update persona
- `DELETE /api/personas/{id}` - Delete persona
- `GET /api/chats` - List all chats
- `GET /api/chats/{id}` - Get specific chat with messages
- `DELETE /api/chats/{id}` - Delete chat and all messages
- `POST /api/chat` - Send chat message (supports `chat_id` for continuing conversations)
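As a quick way to exercise the API without the UI, here is a hedged example using httpx. The request field names (`model`, `message`, `persona_id`, `chat_id`) are guesses for illustration; check the interactive docs at http://localhost:8001/docs for the actual schema.

```python
import httpx

API = "http://localhost:8001"

# Field names below are illustrative guesses; verify the real schema at /docs.
payload = {
    "model": "your-model-id",
    "message": "Hello from the API",
    # "persona_id": 1,  # optional: apply a saved persona
    # "chat_id": 1,     # optional: continue an existing conversation
}

resp = httpx.post(f"{API}/api/chat", json=payload, timeout=60.0)
print(resp.json())

# List saved chats
print(httpx.get(f"{API}/api/chats").json())
```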
- "Failed to save settings" or "Failed to refresh models" errors:
  - Ensure the backend server is running: `cd backend && python -m uvicorn app.main:app --host 0.0.0.0 --port 8001`
  - Check that you're running the server from the `backend` directory, not the project root
  - Verify all dependencies are installed: `pip install -r requirements.txt`
  - Check the Docker logs for detailed error messages: `docker compose logs backend`
- "ModuleNotFoundError: No module named 'app'":
  - Make sure you're running uvicorn from the `backend` directory
  - Install missing dependencies: `pip install fastapi uvicorn sqlmodel httpx pydantic`
- Check LM Studio URL: Ensure the URL in settings is correct
- Verify LM Studio is running: Make sure LM Studio is active and serving on the specified port
- Network connectivity: For LAN access, ensure both devices are on the same network
- Firewall: Check that ports 1234 (LM Studio) and 8001/5173 (Web UI) are not blocked
- Refresh Models: Use the "Refresh Models" button in settings
- Check LM Studio: Ensure models are loaded in LM Studio
- API Compatibility: Verify LM Studio is using OpenAI-compatible API format
- Port conflicts: Change ports in `.env` if 8001 or 5173 are in use
- Permission issues: Ensure Docker has proper permissions
- Volume issues: Check that the database volume is properly mounted
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
- Python: Black + Ruff (configured in `pyproject.toml`)
- TypeScript: ESLint + Prettier
- Commits: Conventional Commits format
This project is licensed under the MIT License - see the LICENSE file for details.
See CHANGELOG.md for a list of changes and version history.
- Issues: Report bugs and request features on GitHub Issues
- Documentation: Check the SETUP.md for detailed setup instructions
This application is designed for LAN-only use and does not include authentication or external network access by default. For production use, consider adding appropriate security measures.
This project is provided "as is" and "as available", without warranty of any kind, express or implied.
Use of this software is at your own risk. The author(s) are not responsible for any damages, data loss, or issues that may arise from using, modifying, or distributing this code.
By using this repository, you agree that you assume all responsibility for any outcomes that result from its use.