This guide explains how to add a new model to the Kolosal AI application by creating a `.json` configuration file. Follow the steps below:
- Locate an existing `.json` file in the models directory, such as `gemma-2-2b.json`.
- Duplicate the file and rename it to match your new model's name, e.g., `new-model-name.json`:

  ```bash
  cp gemma-2-2b.json new-model-name.json
  ```
- Open the newly copied `.json` file in your preferred text editor.
- Update the following fields:

  Provide the name of the model and its author:

  ```json
  "name": "New Model Name",
  "author": "Author Name",
  ```

  Each precision (Full Precision, 8-bit Quantized, 4-bit Quantized) can be configured. Update the `path` and `downloadLink` fields as follows:

  - `path`: the location on disk where the model file is stored or will be downloaded to.
  - `downloadLink`: the URL from which Kolosal AI can download the model.

  If a specific precision is not available, leave its `path` and `downloadLink` fields empty, as in the 8-bit section of the example below. Do not remove the precision section.

  Example:

  ```json
  "fullPrecision": {
    "type": "Full Precision",
    "path": "models/new-model-name/fp16/new-model-fp16.gguf",
    "downloadLink": "https://huggingface.co/kolosal/new-model/resolve/main/new-model-fp16.gguf",
    "isDownloaded": false,
    "downloadProgress": 0.0,
    "lastSelected": 0
  },
  "quantized8Bit": {
    "type": "8-bit Quantized",
    "path": "",
    "downloadLink": "",
    "isDownloaded": false,
    "downloadProgress": 0.0,
    "lastSelected": 0
  },
  "quantized4Bit": {
    "type": "4-bit Quantized",
    "path": "models/new-model-name/int4/new-model-Q4_K_M.gguf",
    "downloadLink": "https://huggingface.co/kolosal/new-model/resolve/main/new-model-Q4_K_M.gguf",
    "isDownloaded": false,
    "downloadProgress": 0.0,
    "lastSelected": 0
  }
  ```

- After making the necessary changes, save the `.json` file.
- Start the Kolosal AI application and ensure the new model appears in the model selection menu.
- Check that the model can be downloaded and loaded without issues.
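
If a model fails to download or load, a quick way to narrow the problem down is to confirm that the file actually exists at the location given in the `path` field. This is only an illustrative check using the 4-bit path from the example configuration; substitute your own path:

```bash
# Verify that the GGUF file exists and has a plausible size.
# The path below is the example 4-bit path; replace it with your own "path" value.
ls -lh models/new-model-name/int4/new-model-Q4_K_M.gguf
```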
Here is a complete example JSON configuration for reference:
```json
{
  "name": "New Model Name",
  "author": "Author Name",
  "fullPrecision": {
    "type": "Full Precision",
    "path": "models/new-model-name/fp16/new-model-fp16.gguf",
    "downloadLink": "https://huggingface.co/kolosal/new-model/resolve/main/new-model-fp16.gguf",
    "isDownloaded": false,
    "downloadProgress": 0.0,
    "lastSelected": 0
  },
  "quantized8Bit": {
    "type": "8-bit Quantized",
    "path": "",
    "downloadLink": "",
    "isDownloaded": false,
    "downloadProgress": 0.0,
    "lastSelected": 0
  },
  "quantized4Bit": {
    "type": "4-bit Quantized",
    "path": "models/new-model-name/int4/new-model-Q4_K_M.gguf",
    "downloadLink": "https://huggingface.co/kolosal/new-model/resolve/main/new-model-Q4_K_M.gguf",
    "isDownloaded": false,
    "downloadProgress": 0.0,
    "lastSelected": 0
  }
}
```
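
If you prefer to place a model file on disk yourself rather than letting Kolosal AI download it, you can prepare the directory layout implied by the example `path` values above. This is just a sketch; any layout works as long as each `path` points at the actual `.gguf` file:

```bash
# Create the folders referenced by the example "path" values,
# then copy or move your .gguf files into the matching folder.
mkdir -p models/new-model-name/fp16 models/new-model-name/int4
```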
- Ensure that all file paths and download links are correct to avoid errors during model download or loading.
- The `isDownloaded` and `downloadProgress` fields should remain `false` and `0.0`, respectively; these will be updated automatically by Kolosal AI.
- Keep your JSON file well-formatted to avoid parsing errors (see the validation command after this list).
- You can now run your compiled Kolosal AI application without having to recompile it.
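
To catch parsing errors before launching the application, you can run the configuration through any JSON validator. One option, assuming Python is available on your system:

```bash
# Prints the parsed JSON on success; reports the line and column of the first error otherwise.
python -m json.tool new-model-name.json
```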
For additional support, contact the Kolosal AI team or refer to the official documentation.