From 87a37aedd14b414f1b4fd503bdb295c815e587ee Mon Sep 17 00:00:00 2001
From: Bastian Fredriksson
Date: Sat, 10 Feb 2024 11:22:34 +0100
Subject: [PATCH] docs: ollama as self-hosted llm

Use ollama/ollama as an example of a self-hosted LLM.
---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index b01c885..a886fc2 100644
--- a/README.md
+++ b/README.md
@@ -14,7 +14,7 @@ If you like it, please add a ⭐.
 
 1. Create a configuration file called `.config/fish-ai.ini`.
 
-If you use [a self-hosted LLM](https://github.com/getumbrel/llama-gpt), e.g. `code-llama-13b-chat.gguf`:
+If you use [a self-hosted LLM](https://github.com/ollama/ollama), e.g. [`codellama`](https://ollama.com/library/codellama):
 
 ```ini
 [fish-ai]
@@ -22,8 +22,8 @@ configuraton = self-hosted
 
 [self-hosted]
 provider = self-hosted
-server = https://your-server/v1
-model = code-llama-13b-chat.gguf
+server = http://localhost:11434/v1
+model = codellama
 ```
 
 If you use [OpenAI](https://platform.openai.com/login):
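
Ollama exposes an OpenAI-compatible API under `/v1`, which is why the patch points `server` at `http://localhost:11434/v1`. A minimal sketch for smoke-testing that endpoint before wiring it into `fish-ai`, assuming `ollama` is installed and serving on its default port:

```sh
# Download the model referenced by the `model` key (one-time pull;
# assumes the `codellama` model from the Ollama library).
ollama pull codellama

# Verify the OpenAI-compatible endpoint answers at the configured server URL.
curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "codellama", "messages": [{"role": "user", "content": "Say hello"}]}'
```

If the `curl` call returns a chat completion, the `server` and `model` values from the config above should work as-is.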