Local-AI automates running Ollama AI models behind a local HTML / CSS / JavaScript web interface with a Python backend. The system detects the hardware, selects the best model for it, installs the model if it is missing, and starts a local web server.

Features:
- Automate Ollama AI model usage
- Select the best model based on hardware
- Provide a local web interface
- Support Windows and Linux
- One-command startup
How it works:
- The user runs the startup script
- Prerequisites are checked
- Hardware information is collected
- The best model is selected using model_config.json; the user may override the choice
- Ollama installs the model if it is missing
- The Python web server starts
- The web UI becomes available locally (localhost / LAN)
- Server output is shown and errors are logged to server_errors.log

The sketches below illustrate the main steps.
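A minimal sketch of the hardware-collection step, assuming the psutil package for CPU/RAM info; the nvidia-smi probe and the function name are illustrative, not the project's actual server.py code.

```python
import platform
import shutil
import subprocess

import psutil  # assumed dependency for CPU/RAM info


def collect_hardware_info() -> dict:
    """Gather the basics the model selector needs: OS, CPU cores, RAM, GPU."""
    info = {
        "os": platform.system(),  # "Windows" or "Linux"
        "cpu_cores": psutil.cpu_count(logical=False) or psutil.cpu_count(),
        "ram_gb": round(psutil.virtual_memory().total / 1024**3, 1),
        "gpu": None,
    }
    # Optional NVIDIA probe, only if nvidia-smi is on PATH.
    if shutil.which("nvidia-smi"):
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
            capture_output=True, text=True,
        )
        if out.returncode == 0 and out.stdout.strip():
            info["gpu"] = out.stdout.strip().splitlines()[0]
    return info
```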
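Model selection could then look like the following. The model_config.json schema shown here (a list of entries with a `min_ram_gb` threshold and a model `name`) is a hypothetical example, not the project's documented format.

```python
import json


def select_model(hw: dict, config_path: str = "model_config.json") -> str:
    """Pick the most capable model whose RAM requirement fits the machine."""
    with open(config_path, encoding="utf-8") as f:
        models = json.load(f)  # assumed: [{"name": "...", "min_ram_gb": 8}, ...]
    eligible = [m for m in models if m["min_ram_gb"] <= hw["ram_gb"]]
    if not eligible:
        raise RuntimeError("No model in model_config.json fits this hardware")
    # Prefer the most demanding model that still fits.
    return max(eligible, key=lambda m: m["min_ram_gb"])["name"]
```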
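Installing a missing model can go through the Ollama CLI. `ollama list` and `ollama pull` are real commands; the wrapper around them is only a sketch.

```python
import subprocess


def ensure_model(model: str) -> None:
    """Pull the model with Ollama unless it is already installed."""
    listed = subprocess.run(["ollama", "list"], capture_output=True, text=True)
    if model in listed.stdout:
        return  # already installed
    # Downloads the model; this can take a while on first run.
    subprocess.run(["ollama", "pull", model], check=True)
```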
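Finally, the web server. The real server.py is not shown here; this sketch serves the static UI files with Python's built-in http.server and appends errors to server_errors.log as the steps above describe. The port is illustrative.

```python
import http.server
import traceback

PORT = 8000  # illustrative, not necessarily the port server.py uses


class UIHandler(http.server.SimpleHTTPRequestHandler):
    def log_error(self, format, *args):
        # Mirror request errors into server_errors.log.
        with open("server_errors.log", "a", encoding="utf-8") as f:
            f.write(format % args + "\n")


if __name__ == "__main__":
    try:
        # Bind on all interfaces so the UI is reachable on the LAN too.
        http.server.ThreadingHTTPServer(("0.0.0.0", PORT), UIHandler).serve_forever()
    except Exception:
        with open("server_errors.log", "a", encoding="utf-8") as f:
            f.write(traceback.format_exc())
        raise
```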
Quick start:

Windows:

```bat
start.bat
```

Linux:

```bash
bash start.sh
```
Project structure:

```
Local-AI/
├── start.sh
├── start.bat
├── server.py
├── model_config.json
├── index.html
├── style.css
└── script.js
```
Roadmap:
- Catch and fix the "Error: listen tcp 127.0.0.1:11434: bind: address already in use" startup error (see the port-check sketch below)
- Make the UI phone friendly
- Add more AI models in the model_config.json file (see the example entry below)
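One way to catch the bind error before it happens: Ollama's server listens on 127.0.0.1:11434 by default, and the error appears when something is already bound there. A quick socket probe can detect this; a sketch, not the project's code:

```python
import socket


def ollama_already_running(host: str = "127.0.0.1", port: int = 11434) -> bool:
    """Return True if something is already listening on Ollama's default port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0


if ollama_already_running():
    print("Ollama is already running; skipping 'ollama serve'.")
```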
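Adding a model would then be a matter of appending an entry to model_config.json. The schema below matches the hypothetical format assumed in the selection sketch above, not a documented one; the model tags are real Ollama tags.

```json
[
  { "name": "llama3.2:3b", "min_ram_gb": 8 },
  { "name": "llama3.1:8b", "min_ram_gb": 16 }
]
```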


