
Commit 2f1635b

mudler authored and github-actions[bot] committed
chore(model gallery): 🤖 add new models via gallery agent
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
1 parent: a6c9789 · commit: 2f1635b

File tree

1 file changed: +24 −0 lines


gallery/index.yaml

Lines changed: 24 additions & 0 deletions
@@ -22950,3 +22950,27 @@
     - filename: apollo-astralis-4b.i1-Q4_K_M.gguf
       sha256: 94e1d371420b03710fc7de030c1c06e75a356d9388210a134ee2adb4792a2626
       uri: huggingface://mradermacher/apollo-astralis-4b-i1-GGUF/apollo-astralis-4b.i1-Q4_K_M.gguf
+- !!merge <<: *hermes-2-pro-mistral
+  name: "hermes-2.5-mistral-7b-i1"
+  urls:
+    - https://huggingface.co/mradermacher/Hermes-2.5-Mistral-7B-i1-GGUF
+  description: |
+    **Hermes-2.5-Mistral-7B** is a high-performance, instruction-tuned language model based on the Mistral-7B-v0.1 architecture. Developed by Teknium and fine-tuned on a large dataset of high-quality, GPT-4-generated and curated open-source data, it excels in reasoning, coding, and conversational tasks.
+
+    Key features:
+    - **Base Model**: Mistral-7B-v0.1
+    - **Training Data**: ~1 million instruction-following examples, including synthetic data from GPT-4 and diverse open datasets
+    - **Prompt Format**: ChatML (compatible with the OpenAI API and modern LLM interfaces)
+    - **Performance**: Outperforms most Mistral-based fine-tunes across benchmarks, including GPT4All (73.12 avg), AGIEval (43.07%), and TruthfulQA (53.04%), with strong gains in code evaluation (50.7% Pass@1 on HumanEval)
+    - **Use Case**: Ideal for chat, coding assistance, and complex reasoning tasks
+
+    This model is available in multiple quantized GGUF formats (e.g., Q4_K_M, Q6_K) for efficient local inference via llama.cpp and tools like LM Studio.
+
+    > ✅ *Note: The model is not quantized by default; quantized versions are provided separately by third parties (e.g., TheBloke, mradermacher). The original, full-precision version can be found at [godcodev/Hermes-2.5-Mistral-7B](https://huggingface.co/godcodev/Hermes-2.5-Mistral-7B).*
+  overrides:
+    parameters:
+      model: Hermes-2.5-Mistral-7B.i1-Q4_K_S.gguf
+  files:
+    - filename: Hermes-2.5-Mistral-7B.i1-Q4_K_S.gguf
+      sha256: e3f794c8325d1334410eb5479e83689b53246418fddc9f336b687c832cabbf0b
+      uri: huggingface://mradermacher/Hermes-2.5-Mistral-7B-i1-GGUF/Hermes-2.5-Mistral-7B.i1-Q4_K_S.gguf
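Each `files` entry in the gallery pins a `sha256` checksum, so a downloaded GGUF can be verified before it is loaded. A minimal Python sketch of that check (the local file path is a hypothetical example; the expected digest is the one listed in the entry above):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hex digest of a file, streaming in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected digest taken from the gallery entry; the path is hypothetical.
EXPECTED = "e3f794c8325d1334410eb5479e83689b53246418fddc9f336b687c832cabbf0b"
# if sha256_of("Hermes-2.5-Mistral-7B.i1-Q4_K_S.gguf") != EXPECTED:
#     raise ValueError("checksum mismatch: re-download the file")
```

Streaming the file in chunks keeps memory flat even for multi-gigabyte model files.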

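The entry's description notes the model expects ChatML-formatted prompts. As an illustration only (not part of the gallery config, which handles templating via the merged `hermes-2-pro-mistral` base entry), a ChatML conversation can be assembled like this:

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render a list of {'role': ..., 'content': ...} dicts in ChatML format."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    if add_generation_prompt:
        # Open an assistant turn for the model to complete.
        parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
```

Each turn is wrapped in `<|im_start|>{role}` / `<|im_end|>` markers; the trailing open assistant turn is where generation begins.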