@@ -21,8 +21,8 @@ Model Spec 1 (pytorch, 1_5 Billion)
- **Model Size (in billions):** 1_5
- **Quantizations:** none
- **Engines**: Transformers
- - **Model ID:** THUDM/glm-edge-1.5b-chat
- - **Model Hubs**: `Hugging Face <https://huggingface.co/THUDM/glm-edge-1.5b-chat>`__, `ModelScope <https://modelscope.cn/models/ZhipuAI/glm-edge-1.5b-chat>`__
+ - **Model ID:** zai-org/glm-edge-1.5b-chat
+ - **Model Hubs**: `Hugging Face <https://huggingface.co/zai-org/glm-edge-1.5b-chat>`__, `ModelScope <https://modelscope.cn/models/ZhipuAI/glm-edge-1.5b-chat>`__

Execute the following command to launch the model, remember to replace ``${quantization}`` with your
chosen quantization method from the options listed above::
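The hunk is truncated before the literal block that the ``::`` introduces. In the rendered docs the launch command for this spec takes roughly the following shape — a sketch assuming the ``xinference launch`` CLI flags used throughout these docs, with ``${quantization}`` left for the reader to substitute:

```shell
# Hypothetical invocation sketched from the surrounding spec (pytorch format, 1_5 billion);
# the flag names follow the xinference CLI convention and are not shown in this hunk.
xinference launch --model-engine ${engine} \
  --model-name glm-edge-chat \
  --size-in-billions 1_5 \
  --model-format pytorch \
  --quantization ${quantization}
```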
@@ -37,8 +37,8 @@ Model Spec 2 (pytorch, 4 Billion)
- **Model Size (in billions):** 4
- **Quantizations:** none
- **Engines**: Transformers
- - **Model ID:** THUDM/glm-edge-4b-chat
- - **Model Hubs**: `Hugging Face <https://huggingface.co/THUDM/glm-edge-4b-chat>`__, `ModelScope <https://modelscope.cn/models/ZhipuAI/glm-edge-4b-chat>`__
+ - **Model ID:** zai-org/glm-edge-4b-chat
+ - **Model Hubs**: `Hugging Face <https://huggingface.co/zai-org/glm-edge-4b-chat>`__, `ModelScope <https://modelscope.cn/models/ZhipuAI/glm-edge-4b-chat>`__

Execute the following command to launch the model, remember to replace ``${quantization}`` with your
chosen quantization method from the options listed above::
@@ -53,8 +53,8 @@ Model Spec 3 (ggufv2, 1_5 Billion)
- **Model Size (in billions):** 1_5
- **Quantizations:** Q4_0, Q4_1, Q4_K, Q4_K_M, Q4_K_S, Q5_0, Q5_1, Q5_K, Q5_K_M, Q5_K_S, Q6_K, Q8_0
- **Engines**: llama.cpp
- - **Model ID:** THUDM/glm-edge-1.5b-chat-gguf
- - **Model Hubs**: `Hugging Face <https://huggingface.co/THUDM/glm-edge-1.5b-chat-gguf>`__, `ModelScope <https://modelscope.cn/models/ZhipuAI/glm-edge-1.5b-chat-gguf>`__
+ - **Model ID:** zai-org/glm-edge-1.5b-chat-gguf
+ - **Model Hubs**: `Hugging Face <https://huggingface.co/zai-org/glm-edge-1.5b-chat-gguf>`__, `ModelScope <https://modelscope.cn/models/ZhipuAI/glm-edge-1.5b-chat-gguf>`__

Execute the following command to launch the model, remember to replace ``${quantization}`` with your
chosen quantization method from the options listed above::
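As with the pytorch specs, the hunk ends before the literal block. For this GGUF spec the command would differ only in the format flag and the quantization choice — a sketch assuming the same ``xinference launch`` CLI convention, with the flag names not shown in this hunk:

```shell
# Hypothetical invocation for the ggufv2 spec (1_5 billion); substitute one of the
# listed quantizations (Q4_0 ... Q8_0) for ${quantization}.
xinference launch --model-engine ${engine} \
  --model-name glm-edge-chat \
  --size-in-billions 1_5 \
  --model-format ggufv2 \
  --quantization ${quantization}
```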
@@ -69,8 +69,8 @@ Model Spec 4 (ggufv2, 1_5 Billion)
- **Model Size (in billions):** 1_5
- **Quantizations:** F16
- **Engines**: llama.cpp
- - **Model ID:** THUDM/glm-edge-1.5b-chat-gguf
- - **Model Hubs**: `Hugging Face <https://huggingface.co/THUDM/glm-edge-1.5b-chat-gguf>`__, `ModelScope <https://modelscope.cn/models/ZhipuAI/glm-edge-1.5b-chat-gguf>`__
+ - **Model ID:** zai-org/glm-edge-1.5b-chat-gguf
+ - **Model Hubs**: `Hugging Face <https://huggingface.co/zai-org/glm-edge-1.5b-chat-gguf>`__, `ModelScope <https://modelscope.cn/models/ZhipuAI/glm-edge-1.5b-chat-gguf>`__

Execute the following command to launch the model, remember to replace ``${quantization}`` with your
chosen quantization method from the options listed above::
@@ -85,8 +85,8 @@ Model Spec 5 (ggufv2, 4 Billion)
- **Model Size (in billions):** 4
- **Quantizations:** Q4_0, Q4_1, Q4_K, Q4_K_M, Q4_K_S, Q5_0, Q5_1, Q5_K, Q5_K_M, Q5_K_S, Q6_K, Q8_0
- **Engines**: llama.cpp
- - **Model ID:** THUDM/glm-edge-4b-chat-gguf
- - **Model Hubs**: `Hugging Face <https://huggingface.co/THUDM/glm-edge-4b-chat-gguf>`__, `ModelScope <https://modelscope.cn/models/ZhipuAI/glm-edge-4b-chat-gguf>`__
+ - **Model ID:** zai-org/glm-edge-4b-chat-gguf
+ - **Model Hubs**: `Hugging Face <https://huggingface.co/zai-org/glm-edge-4b-chat-gguf>`__, `ModelScope <https://modelscope.cn/models/ZhipuAI/glm-edge-4b-chat-gguf>`__

Execute the following command to launch the model, remember to replace ``${quantization}`` with your
chosen quantization method from the options listed above::
@@ -101,8 +101,8 @@ Model Spec 6 (ggufv2, 4 Billion)
- **Model Size (in billions):** 4
- **Quantizations:** F16
- **Engines**: llama.cpp
- - **Model ID:** THUDM/glm-edge-4b-chat-gguf
- - **Model Hubs**: `Hugging Face <https://huggingface.co/THUDM/glm-edge-4b-chat-gguf>`__, `ModelScope <https://modelscope.cn/models/ZhipuAI/glm-edge-4b-chat-gguf>`__
+ - **Model ID:** zai-org/glm-edge-4b-chat-gguf
+ - **Model Hubs**: `Hugging Face <https://huggingface.co/zai-org/glm-edge-4b-chat-gguf>`__, `ModelScope <https://modelscope.cn/models/ZhipuAI/glm-edge-4b-chat-gguf>`__

Execute the following command to launch the model, remember to replace ``${quantization}`` with your
chosen quantization method from the options listed above::