Plugin for LLM providing access to multiple free models using the samba API.
Install this plugin in the same environment as LLM:
```bash
llm install llm-samba
# or, if not yet published:
llm install -e ./path/to/llm-samba
```
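You can confirm the plugin is installed using LLM's built-in plugin listing:

```bash
llm plugins
# llm-samba should appear in the output
```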
First, obtain an API key by signing up for the samba API. Configure the key using the `llm keys set samba` command:
```bash
llm keys set samba
# Paste your API key here; it should look like the following:
# 8n1610xc-996c-4f1e-b63b-857d73b7eba3
```

You can also set it via an environment variable:

```bash
export SAMBA_API_KEY="your-api-key-here"
```
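To confirm the key was saved, `llm keys list` prints the names of every configured key:

```bash
llm keys list
# "samba" should be listed
```

You can now access the following free models: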
- Meta-Llama-3.1-405B-Instruct
- Meta-Llama-3.1-70B-Instruct
- Meta-Llama-3.1-8B-Instruct
- Meta-Llama-3.2-1B-Instruct
- Meta-Llama-3.2-3B-Instruct
- Meta-Llama-3.3-70B-Instruct
- Meta-Llama-Guard-3-8B
- Qwen2.5-72B-Instruct
- Qwen2.5-Coder-32B-Instruct
- QwQ-32B-Preview
Run `llm samba models` to see the list of available models.
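Note that the core `llm models` command lists models from every installed plugin, so a plain shell filter can help narrow the combined output:

```bash
# Show only the SambaNova-provided models in the combined listing
llm models | grep -i -e llama -e qwen -e qwq
```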
To run a prompt through a specific model:

```bash
llm -m Meta-Llama-3.1-405B-Instruct 'What is the meaning of life, the universe, and everything?'
```

To start an interactive chat session:
```bash
llm chat -m Meta-Llama-3.1-405B-Instruct
```

Example chat session:
```
Chatting with Meta-Llama-3.1-405B-Instruct
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
> Tell me a joke about programming
```
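Outside of chat mode you can still hold a multi-turn conversation: LLM's standard `-c`/`--continue` flag sends the follow-up prompt as part of the most recent conversation, so context carries over. This is core LLM behavior rather than anything plugin-specific:

```bash
llm -m Meta-Llama-3.1-405B-Instruct 'Tell me a joke about programming'
# Follow up within the same conversation:
llm -c 'Now explain why it is funny'
```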
To use a system prompt to give the model specific instructions:

```bash
cat example.py | llm -m Meta-Llama-3.1-405B-Instruct -s 'explain this code in a humorous way'
```

In `$HOME/.config/io.datasette.llm/aliases.json` you can add aliases for your preferred models. Each key is the alias and each value is the model name; an example file, followed by its usage, is shown below:
```json
{
  "llama3.1-405": "Meta-Llama-3.1-405B-Instruct",
  "llama3.1-70": "Meta-Llama-3.1-70B-Instruct",
  "llama3.1-8": "Meta-Llama-3.1-8B-Instruct",
  "llama3.2-1": "Meta-Llama-3.2-1B-Instruct",
  "llama3.2-3": "Meta-Llama-3.2-3B-Instruct",
  "llama3.3-70": "Meta-Llama-3.3-70B-Instruct",
  "llama-guard": "Meta-Llama-Guard-3-8B",
  "qwen": "Qwen2.5-72B-Instruct",
  "qwen-coder": "Qwen2.5-Coder-32B-Instruct",
  "qwq": "QwQ-32B-Preview"
}
```
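With the aliases in place, the short names work anywhere a model ID is accepted. LLM can also manage the aliases file for you through its built-in `llm aliases set` command, so you don't have to edit the JSON by hand:

```bash
# Use an alias from aliases.json
llm -m llama3.1-405 'What is the meaning of life?'

# Or register an alias without editing the file directly
llm aliases set qwq QwQ-32B-Preview
```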
The models accept the following options, using `-o name value` syntax:

- `-o temperature 0.7`: The sampling temperature, between 0 and 1. Higher values like 0.8 increase randomness, while lower values like 0.2 make the output more focused and deterministic.
- `-o max_tokens 100`: The maximum number of tokens to generate in the completion.
Example with options:

```bash
llm -m Meta-Llama-3.1-405B-Instruct -o temperature 0.2 -o max_tokens 50 'Write a haiku about AI'
```

To set up this plugin locally, first check out the code, then create a new virtual environment:
```bash
git clone https://github.com/Tatarotus/llm-samba.git
cd llm-samba
python3 -m venv venv
source venv/bin/activate
```

Now install the dependencies and test dependencies:
```bash
pip install -e '.[test]'
```

To run the tests:
```bash
pytest
```
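While iterating on the code, standard pytest flags (nothing project-specific) give quicker feedback:

```bash
# Verbose output, stopping at the first failing test
pytest -x -v
```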
List available models:

```bash
llm samba models
```

Check your current configuration:
```bash
llm samba config
```

This plugin uses the OpenAI-compatible API exposed by the service.
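If you want to call the underlying API directly, a raw request looks roughly like the sketch below. The base URL and exact request shape are assumptions based on typical OpenAI-compatible endpoints; check the provider's documentation before relying on them:

```bash
# Sketch of a direct call to an OpenAI-compatible chat completions endpoint.
# The base URL is an assumption -- verify it in the provider's docs.
curl https://api.sambanova.ai/v1/chat/completions \
  -H "Authorization: Bearer $SAMBA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Meta-Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'
```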
Contributions are welcome! Please feel free to submit a Pull Request.
Apache License 2.0