
OpenWebUI + Weave (by Weights & Biases) Input/Completion Logging

OpenWebUI function to log your inputs and completions to Weave (by Weights & Biases) for LLMOps / observability.

Implementation notes:

  • OpenWebUI (open-source LLM web UI).
  • Weave (LLMOps, logging, and observability).
    • Uses manual call tracking to log inputs and completions to your Weave project.
    • Triggers the call tracking using OpenWebUI's filter functions before (inlet) and after (outlet) an LLM execution (see the sketch below).
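
A minimal sketch of how such a filter can be wired up, assuming Weave's manual call-tracking client methods (`create_call` / `finish_call`) and OpenWebUI's `inlet`/`outlet` filter hooks; the class layout, valve names, and op name here are illustrative, not the exact contents of filter.py:

```python
# Illustrative sketch only -- structure and names are assumptions, not the repo's filter.py.
import os
from typing import Optional

import weave
from pydantic import BaseModel


class Filter:
    class Valves(BaseModel):
        wandb_api_key: str = ""       # W&B API key (https://wandb.ai/settings)
        wandb_project_name: str = ""  # "username/project_name"

    def __init__(self):
        self.valves = self.Valves()
        self.client = None
        self.calls = {}  # chat_id -> in-flight Weave call

    def inlet(self, body: dict, __user__: Optional[dict] = None) -> dict:
        # Before the LLM runs: start a Weave call with the request as inputs.
        os.environ["WANDB_API_KEY"] = self.valves.wandb_api_key
        self.client = weave.init(self.valves.wandb_project_name)
        call = self.client.create_call(
            op="openwebui.chat_completion",
            inputs={"model": body.get("model"), "messages": body.get("messages", [])},
        )
        self.calls[body.get("chat_id", "default")] = call
        return body

    def outlet(self, body: dict, __user__: Optional[dict] = None) -> dict:
        # After the LLM runs: close the Weave call with the completion as output.
        call = self.calls.pop(body.get("chat_id", "default"), None)
        messages = body.get("messages") or []
        if self.client is not None and call is not None:
            self.client.finish_call(call, output=messages[-1] if messages else None)
        return body
```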

Installation

Demo video: weave-openwebui.mp4

1. Set up your Weave project (note your W&B API key and project name; you'll need both in step 2)

2. Run your OpenWebUI interface and install the function

  • Run your OpenWebUI Docker container (read more); the simplest way is to use the following command:
docker run -d -p 3000:8080 -e OPENAI_API_KEY=your_secret_key -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
  • Visit the OpenWebUI Functions page at: http://localhost:3000/workspace/functions (assuming you're running locally on the default port).
  • Open filter.py from this repo, copy its contents, paste it into the code field, and click "Install".
  • Select the gear icon next to the function and set the wandb_api_key and wandb_project_name values you noted in the first step.
    • Your API key can be found at: https://wandb.ai/settings.
    • Make sure you use username/project_name format for wandb_project_name.
  • Enable the function and set it to "Global" to turn on logging for all chat instances.
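
For example, if your W&B username were jane and your Weave project were openwebui-logs (both names hypothetical), the valve value would look like:

```
wandb_project_name = "jane/openwebui-logs"
```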

3. Profit?

Note

  • Set the function's priority to the highest value so that it fires (logs) after any other transformations you apply to your context.
  • Token counting is currently supported for OpenAI models using tiktoken. Non-OpenAI models will default to gpt-4o's token count (which will usually be close to actual token usage).
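
A sketch of that counting behavior, assuming tiktoken's `encoding_for_model` lookup with a fall-back to gpt-4o's encoding for models tiktoken doesn't recognize (the helper function is illustrative, not necessarily how filter.py implements it):

```python
# Illustrative sketch of the token-counting fallback described above.
import tiktoken


def count_tokens(text: str, model: str) -> int:
    try:
        # OpenAI models that tiktoken knows about (e.g. "gpt-4o", "gpt-4o-mini").
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        # Non-OpenAI models: approximate using gpt-4o's encoding.
        encoding = tiktoken.encoding_for_model("gpt-4o")
    return len(encoding.encode(text))
```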
