blepw/Local-AI
Local-AI

Local-AI automates running Ollama AI models through a local HTML / CSS / JavaScript web interface backed by a Python server. The system detects hardware, selects the best model, installs it if missing, and starts a local web server.


Goal of the project

  • Automate Ollama AI model usage
  • Select the best model based on hardware
  • Provide a local web interface
  • Support Windows and Linux
  • One-command startup

How It Works

  • User runs the startup script
  • Prerequisites are checked
  • Hardware information is collected
  • Best model is selected using model_config.json
  • User may override the model
  • Ollama installs the model if missing
  • Python web server starts
  • Web UI becomes available locally (localhost / LAN)
  • Server errors are logged to server_errors.log
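The model-selection step above can be sketched roughly as follows. This is a minimal sketch, not the repository's actual logic: the config schema, model names, and RAM thresholds shown here are assumptions, and the real model_config.json may look different.

```python
import json

# Hypothetical schema: each entry maps a model name to the minimum RAM (GB)
# it needs. The real model_config.json may use a different format.
SAMPLE_CONFIG = """
{
  "phi3:mini":  {"min_ram_gb": 4},
  "llama3:8b":  {"min_ram_gb": 6},
  "mistral:7b": {"min_ram_gb": 8},
  "llama3:70b": {"min_ram_gb": 48}
}
"""

def select_model(config, total_ram_gb):
    """Pick the most demanding model that still fits in the detected RAM."""
    candidates = [(spec["min_ram_gb"], name)
                  for name, spec in config.items()
                  if spec["min_ram_gb"] <= total_ram_gb]
    # max() prefers the model with the highest RAM requirement that fits
    return max(candidates)[1] if candidates else None

config = json.loads(SAMPLE_CONFIG)
print(select_model(config, 16))  # a 16 GB machine fits the 8 GB model, not the 70B one
```

In the real script the `total_ram_gb` argument would come from the hardware-detection step, and the chosen name would be passed to `ollama pull` if the model is missing.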

CLI Interface


Web Interface (PC)


Execution

Windows:
start.bat

Linux:
bash start.sh

Project Structure

Local-AI/
├── start.sh
├── start.bat
├── server.py
├── model_config.json
├── index.html
├── style.css
└── script.js
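The role of server.py, as described above, is to make the web UI available locally. A minimal stdlib sketch of that idea, assuming the server only serves the static UI files (the real server.py may do more, e.g. talk to Ollama; the port number here is an arbitrary choice):

```python
import http.server
from functools import partial

def make_server(port=8000, directory="."):
    """Serve the static UI files (index.html, style.css, script.js)
    from `directory` on all interfaces, so the UI is reachable on
    localhost and over the LAN."""
    handler = partial(http.server.SimpleHTTPRequestHandler,
                      directory=directory)
    return http.server.ThreadingHTTPServer(("0.0.0.0", port), handler)

if __name__ == "__main__":
    server = make_server()
    print(f"Web UI available at http://localhost:{server.server_address[1]}")
    server.serve_forever()
```

Binding to 0.0.0.0 is what makes the "localhost / LAN" access mentioned in How It Works possible.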

To-do

  • Catch and fix Error: listen tcp 127.0.0.1:11434: bind: address already in use
  • Make the UI phone-friendly
  • Add more AI models to model_config.json
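The first to-do item could be approached by probing the port before launching Ollama. A minimal sketch, assuming Ollama's default listen address of 127.0.0.1:11434 (the helper name is hypothetical, not part of the current codebase):

```python
import socket

OLLAMA_PORT = 11434  # Ollama's default listen port

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1)
        # connect_ex returns 0 on a successful connection, i.e. the
        # port is already taken by a listening process
        return sock.connect_ex((host, port)) == 0

if port_in_use(OLLAMA_PORT):
    print(f"Port {OLLAMA_PORT} is already in use; "
          "an Ollama instance is probably running, so skip `ollama serve`.")
```

With a check like this, the startup script could reuse the running Ollama instance instead of failing with the bind error.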

About

Free, local AI for everyone
