
Commit 2c453c6 (1 parent: 5d6bd84)

convert: add error message for mistral3 quantized weight (#17686)

File tree

1 file changed: 4 additions, 0 deletions

convert_hf_to_gguf.py

Lines changed: 4 additions & 0 deletions
```diff
@@ -2842,6 +2842,10 @@ def set_gguf_parameters(self):
         self.gguf_writer.add_attn_temperature_scale(rope_params["llama_4_scaling_beta"])

     def modify_tensors(self, data_torch: Tensor, name: str, bid: int | None):
+        # TODO: probably not worth supporting quantized weight, as official BF16 is also available
+        if name.endswith("weight_scale_inv"):
+            raise ValueError("This is a quantized weight, please use BF16 weight instead")
+
         name = name.replace("language_model.", "")
         if "multi_modal_projector" in name or "vision_tower" in name:
             return []
```
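A minimal sketch of the guard this commit adds: quantized checkpoints carry per-block scale tensors named `*.weight_scale_inv`, so their presence signals a quantized weight and conversion is aborted with a clear message instead of producing a broken GGUF. The standalone function and the example tensor names below are illustrative, not from the commit itself.

```python
def check_tensor(name: str) -> str:
    """Sketch of the guard in modify_tensors (simplified, standalone)."""
    # Quantized checkpoints store scales under "*.weight_scale_inv";
    # conversion only supports the official BF16 weights.
    if name.endswith("weight_scale_inv"):
        raise ValueError("This is a quantized weight, please use BF16 weight instead")
    # Mirrors the next line of the original method: strip the
    # "language_model." prefix used by the multimodal checkpoint layout.
    return name.replace("language_model.", "")

# Hypothetical tensor names for illustration:
print(check_tensor("language_model.model.layers.0.q_proj.weight"))
# → model.layers.0.q_proj.weight

try:
    check_tensor("model.layers.0.q_proj.weight_scale_inv")
except ValueError as e:
    print(e)
# → This is a quantized weight, please use BF16 weight instead
```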
