[model] support glm4_6v flash #6959
Conversation
Summary of Changes

Hello @Jintao-Huang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request integrates the GLM-4.6V-Flash model into the system, adding support for a new large language model. The changes include the code modifications needed to recognize and configure the model, along with updates to the user-facing documentation to reflect its availability and associated prerequisites.
Code Review
This pull request adds support for the GLM-4.6V-Flash model. The changes include registering the model and updating the documentation. My review focuses on improving the consistency of model type naming for better user experience and on highlighting the risk of depending on a development version of a library.
```python
ModelGroup(
    [
        Model('ZhipuAI/GLM-4.6V-Flash', 'zai-org/GLM-4.6V-Flash'),
    ],
    requires=['transformers>=5.0.0.dev'],
),
```
For better clarity and consistency with other GLM-V models like `glm4_1v` and `glm4_5v`, it's recommended to introduce a new model type for GLM-4.6V-Flash, such as `glm4_6v`. This will make model selection more intuitive for users, who would otherwise have to pass `--model_type glm4_1v` for a 4.6V model, which is confusing.

This can be achieved by (a sketch follows the list):

- Defining a new `MLLMModelType.glm4_6v`.
- Creating a new `ModelMeta` registration for it. You can likely reuse `get_model_tokenizer_glm4_1v` and `TemplateType.glm4_1v` if they are compatible.
- Updating the documentation in `docs/` to use `glm4_6v` as the model type.
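As a concrete illustration, here is a minimal sketch of such a registration, modeled on the neighboring `glm4_1v` entry. The `architectures` and `model_arch` values mirror the `glm4_1v` registration and are assumptions, not verified against this PR; reusing the `glm4_1v` template and loader is likewise only the hypothesis stated above.

```python
# Sketch only -- assumes these names are in scope in ms-swift's GLM model
# registration module, and that GLM-4.6V-Flash is compatible with the
# existing glm4_1v loader/template; architectures/model_arch below mirror
# the glm4_1v entry and are unverified for this model.
register_model(
    ModelMeta(
        MLLMModelType.glm4_6v,  # new constant, added alongside glm4_1v/glm4_5v
        [
            ModelGroup(
                [
                    Model('ZhipuAI/GLM-4.6V-Flash', 'zai-org/GLM-4.6V-Flash'),
                ],
                requires=['transformers>=5.0.0.dev'],
            ),
        ],
        TemplateType.glm4_1v,          # reused template, if compatible
        get_model_tokenizer_glm4_1v,   # reused loader, if compatible
        architectures=['Glm4vForConditionalGeneration'],
        model_arch=ModelArch.glm4_1v,
    ))
```

With that in place, `--model_type glm4_6v` would resolve directly, and the docs change reduces to substituting the new type name.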
```python
    [
        Model('ZhipuAI/GLM-4.6V-Flash', 'zai-org/GLM-4.6V-Flash'),
    ],
    requires=['transformers>=5.0.0.dev'],
```
No description provided.
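This second comment is anchored on the `requires=['transformers>=5.0.0.dev']` line, i.e. the dependency on a transformers pre-release that the review summary flags as a risk. As a side note (not part of the PR), one way to check locally whether an installed transformers meets that bound uses the standard `importlib.metadata` together with the widely available `packaging` library; the helper name below is hypothetical.

```python
# Hypothetical helper (not part of this PR): check whether the installed
# transformers version satisfies 'transformers>=5.0.0.dev'.
from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version

def transformers_satisfies(minimum: str = '5.0.0.dev0') -> bool:
    try:
        installed = Version(version('transformers'))
    except PackageNotFoundError:
        return False  # transformers is not installed
    return installed >= Version(minimum)

print(transformers_satisfies())  # True only on a 5.x release or pre-release
```

Note that `5.0.0.dev0` is the PEP 440 normalized form of the `5.0.0.dev` specifier used in the PR.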