Misc. bug: cannot convert GLM-4-9B-Chat (glm-4-9b-chat-hf) to GGUF format #11263

Closed
@MoonRide303

Description

Name and Version

.\llama-cli.exe --version
version: 4491 (c67cc98)
built with MSVC 19.39.33523.0 for x64

Operating systems

Windows

Which llama.cpp modules do you know to be affected?

Python/Bash scripts

Command line

python convert_hf_to_gguf.py --outtype f16 ..\glm-4-9b-chat-hf\ --outfile glm-4-9b-chat-hf-F16.gguf

Problem description & steps to reproduce

Despite ChatGLM4-9b being marked as supported, an attempt to convert the GLM-4-9B-Chat model (glm-4-9b-chat-hf, command above) to GGUF format fails with the following error:

INFO:hf-to-gguf:Loading model: glm-4-9b-chat-hf
ERROR:hf-to-gguf:Model GlmForCausalLM is not supported
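
For context on where this fails: convert_hf_to_gguf.py selects a converter class based on the first entry of the `architectures` list in the model's config.json. glm-4-9b-chat-hf declares the transformers-native `GlmForCausalLM` architecture, which has no registered converter at this version (the existing GLM-4 support is presumably registered under the older `ChatGLMModel` / `ChatGLMForConditionalGeneration` names). A minimal, simplified sketch of that registry mechanism, not the script's exact code:

```python
# Minimal sketch (simplified names, assumed structure) of how
# convert_hf_to_gguf.py maps the "architectures" value from config.json
# to a converter class.
_model_classes: dict[str, type] = {}

def register(*names: str):
    # Mimics the script's @Model.register(...) decorator: each listed HF
    # architecture name points at the decorated converter class.
    def wrapper(cls: type) -> type:
        for name in names:
            _model_classes[name] = cls
        return cls
    return wrapper

@register("ChatGLMModel", "ChatGLMForConditionalGeneration")
class ChatGLMModel:
    """Stand-in for the existing GLM-4 converter class."""

def from_model_architecture(arch: str) -> type:
    try:
        return _model_classes[arch]
    except KeyError:
        raise NotImplementedError(f"Model {arch} is not supported") from None

# glm-4-9b-chat-hf ships "architectures": ["GlmForCausalLM"], which is
# absent from the registry, hence the error in the report:
try:
    from_model_architecture("GlmForCausalLM")
except NotImplementedError as e:
    print(f"ERROR:hf-to-gguf:{e}")  # ERROR:hf-to-gguf:Model GlmForCausalLM is not supported
```

So a fix would presumably need to register `GlmForCausalLM` and handle any config keys and tensor names the HF port renamed, rather than just alias the name. Converting the original non-hf checkpoint should still work in the meantime, since ChatGLM4-9b itself is listed as supported.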

First Bad Commit

No response
