Download Ollama: ollama.com/download
Download a Model: ollama.com/library
After downloading, run the model locally:

    ollama run [model_name_here]

Note: This project uses llama3.1. If you want to use a different model, change the model name in server.py.
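The note above says the model name lives in server.py. As a rough sketch of what that looks like (the variable name `MODEL_NAME`, the helper `build_request`, and the payload shape are assumptions for illustration, not taken from the repo), the server typically selects the model by name when building a request for Ollama's local generate API:

```python
# Hypothetical sketch, not the actual server.py from this repo.
MODEL_NAME = "llama3.1"  # change this to the model you pulled with `ollama run`

def build_request(prompt: str, model: str = MODEL_NAME) -> dict:
    """Build a JSON payload in the shape Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}
```

To switch models, pull the new one with Ollama first (e.g. `ollama run mistral`), then update the model name in server.py to match.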
Clone the project:

    git clone https://github.com/KashyapTan/LLM-Query-UI.git

Install React dependencies:

    npm install

Install server requirements:

    pip install -r requirements.txt

Start the local dev server:

    npm run dev

Start the Python server:

    python server.py
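Both processes must be running at the same time, and server.py will only work if Ollama itself is serving its local API (by default on port 11434). A minimal pre-flight check, assuming the default port:

```python
# Quick check that the local Ollama API is reachable before starting server.py.
# Assumes Ollama's default address; adjust if you changed it.
import urllib.request
import urllib.error

OLLAMA_URL = "http://localhost:11434"

def ollama_is_running(base_url: str = OLLAMA_URL, timeout: float = 2.0) -> bool:
    """Return True if the local Ollama server responds, False otherwise."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

If this returns False, start Ollama (or run `ollama run llama3.1` in another terminal) before launching the Python server.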