From a9a44917344202db390a29ff49504c41bd65ed7d Mon Sep 17 00:00:00 2001
From: friendshipkim
Date: Mon, 17 Jun 2024 17:21:01 -0400
Subject: [PATCH] add instructions for open-source models

---
 README.md | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/README.md b/README.md
index 3a08c7d..2104341 100644
--- a/README.md
+++ b/README.md
@@ -155,6 +155,39 @@ html_theme = 'sphinx_rtd_theme'
 
 7. Build the documentation, from `docs` directory, run: `sphinx-build -b html . _build`
 
+## Open-source models
+
+Open-source models can be used through LiteLLM with Ollama. Ollama lets users run language models locally on their machines, and LiteLLM translates OpenAI-format inputs into requests against the local models' endpoints. To use an open-source model as the Agent-E backbone, follow the steps below:
+
+1. Install LiteLLM
+   ```bash
+   pip install 'litellm[proxy]'
+   ```
+2. Install Ollama
+   * For Mac and Windows, download [Ollama](https://ollama.com/download).
+   * For Linux:
+     ```bash
+     curl -fsSL https://ollama.com/install.sh | sh
+     ```
+3. Pull Ollama models
+   Before you can use a model, you need to download it from the [Ollama library](https://ollama.com/library). Here, we use Mistral v0.3:
+   ```bash
+   ollama pull mistral:v0.3
+   ```
+4. Run LiteLLM
+   To serve the downloaded model through the LiteLLM proxy, run:
+   ```bash
+   litellm --model ollama_chat/mistral:v0.3
+   ```
+5. Configure the model in AutoGen
+   Configure the `.env` file as follows. Note that the model name and API key are not required, since the local model is already running behind the proxy.
+   ```bash
+   AUTOGEN_MODEL_NAME=NotRequired
+   AUTOGEN_MODEL_API_KEY=NotRequired
+   AUTOGEN_MODEL_BASE_URL=http://0.0.0.0:4000
+   ```
+
+
 ## TODO
 - Action verification
 - Responding from every skill with changes that took place in the DOM (Mutation Observers) so that the LLM can judge whether the skill did execute properly or not
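
The translation described above — LiteLLM accepting an OpenAI-format chat request and forwarding it to the local model's endpoint — can be sketched as follows. This is a hypothetical, simplified illustration, not LiteLLM's actual implementation; it assumes Ollama's `/api/chat` payload shape (`model`, `messages`, `stream`, `options`) and the `ollama_chat/mistral:v0.3` model name from the steps above.

```python
def to_ollama_payload(openai_request: dict) -> dict:
    """Map an OpenAI-format chat request onto an Ollama /api/chat payload.

    Simplified sketch of the kind of translation a proxy like LiteLLM
    performs; not LiteLLM's actual code.
    """
    # Strip the provider prefix: "ollama_chat/mistral:v0.3" -> "mistral:v0.3"
    model = openai_request["model"].split("/", 1)[-1]

    # OpenAI-style sampling parameters live under "options" in Ollama.
    options = {}
    if "temperature" in openai_request:
        options["temperature"] = openai_request["temperature"]
    if "max_tokens" in openai_request:
        options["num_predict"] = openai_request["max_tokens"]  # Ollama's name

    return {
        "model": model,
        "messages": openai_request["messages"],  # same role/content shape
        "stream": openai_request.get("stream", False),
        "options": options,
    }


request = {
    "model": "ollama_chat/mistral:v0.3",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 64,
}
payload = to_ollama_payload(request)
print(payload["model"])  # mistral:v0.3
```

Because the `messages` list already uses the same role/content structure on both sides, most of the work is renaming the model and relocating sampling parameters.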
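
As a quick sanity check of the configuration step, the `.env` values can be read back with a minimal loader like the one below. This is a sketch assuming plain `KEY=VALUE` lines with no quoting or variable expansion (Agent-E itself may load the file differently, e.g. via python-dotenv); the base URL assumes LiteLLM's default proxy port of 4000.

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Split at the first "=" only, so values may contain "=".
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env


env_text = """\
AUTOGEN_MODEL_NAME=NotRequired
AUTOGEN_MODEL_API_KEY=NotRequired
AUTOGEN_MODEL_BASE_URL=http://0.0.0.0:4000
"""
env = parse_env(env_text)
print(env["AUTOGEN_MODEL_BASE_URL"])  # http://0.0.0.0:4000
```

The only value that actually matters here is the base URL: it must point at the running LiteLLM proxy, while the name and key fields are placeholders.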