A Slack app that lets end-users chat with AI, offering flexible model selection powered by LiteLLM. Pronounced the same as "Colombo". Forked from seratch/ChatGPT-in-Slack.
Below are quick setup instructions for Collmbo with a few popular models. These are only representative examples; Collmbo supports many other models through LiteLLM.
OpenAI:

```sh
$ cat env
# Create a new Slack app using manifest.yml and grab the app-level token
SLACK_APP_TOKEN=xapp-1-...
# Install the app into your workspace to grab this token
SLACK_BOT_TOKEN=xoxb-...
# Visit https://platform.openai.com/api-keys for this token
OPENAI_API_KEY=sk-...
# Specify a model name supported by LiteLLM
LITELLM_MODEL=gpt-4o

$ docker run -it --env-file ./env ghcr.io/iwamot/collmbo:latest-slim
```
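For reference, `LITELLM_MODEL` is essentially the model identifier that LiteLLM receives. The snippet below is not Collmbo's own code, just a minimal sketch of the equivalent LiteLLM call, assuming `OPENAI_API_KEY` and `LITELLM_MODEL` are set as in the env file above:

```python
import os

from litellm import completion

# LiteLLM reads OPENAI_API_KEY from the environment; LITELLM_MODEL ends up as
# the `model` argument. This is roughly what a single chat turn boils down to.
response = completion(
    model=os.environ.get("LITELLM_MODEL", "gpt-4o"),
    messages=[{"role": "user", "content": "Hello from Slack!"}],
)
print(response.choices[0].message.content)
```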
Azure OpenAI:

```sh
$ cat env
SLACK_APP_TOKEN=...
SLACK_BOT_TOKEN=...
AZURE_API_KEY=...
AZURE_API_BASE=...
AZURE_API_VERSION=...
LITELLM_MODEL=azure/<your_deployment_name>
LITELLM_MODEL_TYPE=azure/gpt-4-0613

$ docker run -it --env-file ./env ghcr.io/iwamot/collmbo:latest-slim
```
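The `azure/` prefix in `LITELLM_MODEL` points at your Azure deployment name rather than a model name, and LiteLLM reads `AZURE_API_KEY`, `AZURE_API_BASE`, and `AZURE_API_VERSION` from the environment. A minimal sketch of the underlying call (not Collmbo's code), using a hypothetical deployment called `my-gpt4-deployment`:

```python
from litellm import completion

# LiteLLM picks up AZURE_API_KEY, AZURE_API_BASE, and AZURE_API_VERSION from
# the environment; "my-gpt4-deployment" is a hypothetical deployment name.
response = completion(
    model="azure/my-gpt4-deployment",
    messages=[{"role": "user", "content": "Hello from Slack!"}],
)
print(response.choices[0].message.content)
```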
Amazon Bedrock:

```sh
$ cat env
SLACK_APP_TOKEN=...
SLACK_BOT_TOKEN=...
LITELLM_MODEL=bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0
# Optional: set if the Bedrock region differs from the application region
AWS_REGION_NAME=us-west-2
# Using IAM roles for authentication is recommended

$ docker run -it --env-file ./env ghcr.io/iwamot/collmbo:latest-full
```
Note: The `full` flavor images include boto3, which is required for Bedrock.
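To sanity-check Bedrock credentials and region settings outside of Slack, you can call LiteLLM directly. This is only a sketch, not part of Collmbo; it assumes boto3 can resolve credentials from an IAM role or your environment, and that `AWS_REGION_NAME` is set as in the env file above:

```python
import os

from litellm import completion

# Bedrock models use the "bedrock/" prefix. LiteLLM calls Bedrock via boto3,
# so IAM-role or environment credentials are picked up automatically;
# aws_region_name selects the Bedrock region.
response = completion(
    model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
    messages=[{"role": "user", "content": "ping"}],
    aws_region_name=os.environ.get("AWS_REGION_NAME", "us-west-2"),
)
print(response.choices[0].message.content)
```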
Collmbo's main features:

- Flexible model selection
- Redaction (`REDACTION_ENABLED=true`)
- Image reading (`IMAGE_FILE_ACCESS_ENABLED=true`, for supported models only)
- Tools / Function calling (`LITELLM_TOOLS_MODULE_NAME=tests.tools_example`, for supported models only; see the first sketch after this list)
- Custom callbacks (`LITELLM_CALLBACK_MODULE_NAME=tests.callback_example`; see the second sketch after this list)
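The tools module named by `LITELLM_TOOLS_MODULE_NAME` is a plain Python module; the exact layout Collmbo expects is defined by `tests/tools_example.py` in this repository, so the following is only an assumed sketch using the OpenAI-style tool schema that LiteLLM's function calling understands, with a hypothetical `get_current_time` tool:

```python
from datetime import datetime, timezone

# Assumed layout: a `tools` list in the OpenAI function-calling format plus a
# handler per tool. Check tests/tools_example.py for the actual contract.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Return the current UTC time in ISO 8601 format.",
            "parameters": {"type": "object", "properties": {}},
        },
    }
]


def get_current_time() -> str:
    """Handler invoked when the model calls the get_current_time tool."""
    return datetime.now(timezone.utc).isoformat()
```

Likewise, the callback module named by `LITELLM_CALLBACK_MODULE_NAME` can hook into LiteLLM's callback mechanism. LiteLLM supports class-based callbacks via `litellm.integrations.custom_logger.CustomLogger`; whether Collmbo expects an instance, a class, or a differently named export is an assumption here, so check `tests/callback_example.py` for the real interface:

```python
from litellm.integrations.custom_logger import CustomLogger


class LoggingCallback(CustomLogger):
    """Minimal LiteLLM callback that logs success and failure events."""

    def log_success_event(self, kwargs, response_obj, start_time, end_time):
        print(f"completion succeeded: model={kwargs.get('model')}, "
              f"latency={end_time - start_time}")

    def log_failure_event(self, kwargs, response_obj, start_time, end_time):
        print(f"completion failed: {kwargs.get('exception')}")


# Hypothetical export name; the name Collmbo actually looks for may differ.
callback = LoggingCallback()
```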
We welcome contributions to Collmbo! If you have any feature requests, bug reports, or other issues, please feel free to open an issue on this repository. Your feedback and contributions help make Collmbo better for everyone.
This project is licensed under the MIT License. See the LICENSE file for details.