
Add support for GLM-Edge and GLM-Edge-V series models #10573


Merged: 34 commits merged into ggml-org:master on Feb 2, 2025

Conversation

@piDack piDack (Contributor) commented Nov 29, 2024

This pull request adds support for the GLM-Edge-Chat (1.5B & 4B) and GLM-Edge-V (2B & 5B) series of models to llama.cpp.

Note: converting the pretrained models to GGUF currently only works with transformers version 4.47.0.dev0.
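
For reference, with a matching transformers dev build installed (installing transformers from source gave a 4.47.0.dev0 build at the time), conversion would look roughly like the command below; the model path and output file name are placeholders:

python convert_hf_to_gguf.py ./glm-edge-chat-1.5b --outfile glm-edge-chat-1.5b-f16.gguf --outtype f16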

@github-actions github-actions bot added the testing, examples, and python labels Nov 29, 2024
@arch-btw (Contributor) commented

Works great.

./llama-llava-cli -m ggml-model-Q4_K_M.gguf --mmproj mmproj-model-f16.gguf --temp 0.1 --image bee.jpg -p "<|system|>\n You are a helpful AI assistant. <image><|user|>\n What is in the image? <|assistant|>\n"

[screenshot: glm]

@piDack piDack (Contributor, Author) commented Dec 19, 2024

Is there anyone available to review the code?

@arch-btw (Contributor) commented

@piDack Since this would be adding support for GlmForCausalLM for these vision models, I'm curious whether we could create a more modular or generic implementation that could also be used for the other GlmForCausalLM model(s).

I'm asking because glm-4-9b-chat-hf is currently broken with the new transformers-only implementation:

python convert_hf_to_gguf.py /home/Models/glm-4-9b-chat-hf --outtype f32
INFO:hf-to-gguf:Loading model: glm-4-9b-chat-hf
ERROR:hf-to-gguf:Model GlmForCausalLM is not supported

The version with the custom Python files still works, but if we're moving away from that (related discussion), it might be best to support GlmForCausalLM in general.

Are there any parts of this PR that could be refactored or generalized for broader applicability so that we can support both and maybe upcoming models? Thank you.

@piDack piDack (Contributor, Author) commented Jan 7, 2025

(quoting @arch-btw's comment above about generalizing GlmForCausalLM support)

I will try to do it.
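
As a rough sketch of what that generalization could look like in convert_hf_to_gguf.py, following the script's usual Model.register pattern (the class and method names below follow that pattern and are not the exact final change):

@Model.register("GlmForCausalLM", "ChatGLMModel", "ChatGLMForConditionalGeneration")
class ChatGLMModel(Model):
    model_arch = gguf.MODEL_ARCH.CHATGLM

    def set_gguf_parameters(self):
        # ... existing ChatGLM parameter handling ...
        # GLM-Edge / glm-4-hf style configs describe the rotary portion via
        # partial_rotary_factor, so derive the rope dimension count from it.
        head_dim = self.hparams["hidden_size"] // self.hparams["num_attention_heads"]
        rope_factor = self.hparams.get("partial_rotary_factor", 0.5)
        self.gguf_writer.add_rope_dimension_count(int(head_dim * rope_factor))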

@piDack piDack requested a review from ngxson January 30, 2025 12:50
@ngxson ngxson (Collaborator) left a review comment


Many places still have inconsistent style.

@ngxson ngxson (Collaborator) commented Jan 30, 2025

@ggerganov Could you take a quick look at the llama.cpp changes? I've approved the rest.

@piDack piDack requested a review from ggerganov February 1, 2025 01:42
@piDack piDack (Contributor, Author) commented Feb 2, 2025

I believe it's ready to be merged into the master branch.

@ggerganov ggerganov merged commit 0cec062 into ggml-org:master Feb 2, 2025
47 checks passed
tinglou pushed a commit to tinglou/llama.cpp that referenced this pull request Feb 13, 2025
…rg#10573)

* add glm edge chat model

* use config partial_rotary_factor as rope ratio

* support for glm edge model

* vision model support

* remove debug info

* fix format

* llava.cpp trailing whitespace

* remove unused AutoTokenizer

* Update src/llama.cpp for not contain <|end|> or </s>

Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>

* add edge template

* fix chat template

* fix confict

* fix confict

* fix ci err

* fix format err

* fix template err

* 9b hf chat support

* format

* format clip.cpp

* fix format

* Apply suggestions from code review

* Apply suggestions from code review

* Update examples/llava/clip.cpp

* fix format

* minor : style

---------

Co-authored-by: liyuhang <yuhang.li@zhipuai.cn>
Co-authored-by: piDack <pcdack@hotmail.co>
Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
Co-authored-by: liyuhang <yuhang.li@aminer.cn>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
orca-zhang pushed a commit to orca-zhang/llama.cpp that referenced this pull request Feb 26, 2025
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Feb 26, 2025
mglambda pushed a commit to mglambda/llama.cpp that referenced this pull request Mar 8, 2025