Add GPTQ support for Gemma #3200

Merged: 2 commits, Mar 7, 2024. The diff below shows changes from all commits.
6 changes: 6 additions & 0 deletions vllm/model_executor/models/gemma.py
@@ -325,11 +325,17 @@ def load_weights(self,
                 if shard_name not in name:
                     continue
                 name = name.replace(shard_name, param_name)
+                # Skip loading extra bias for GPTQ models.
+                if name.endswith(".bias") and name not in params_dict:
+                    continue
                 param = params_dict[name]
                 weight_loader = param.weight_loader
                 weight_loader(param, loaded_weight, shard_id)
                 break
             else:
+                # Skip loading extra bias for GPTQ models.
+                if name.endswith(".bias") and name not in params_dict:
Collaborator:

Could you merge this check with the one above and place it outside the for loop?

Contributor Author:

I copied them from https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/llama.py#L377-L379 and https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/llama.py#L385-L387.
Modifying the for-else structure may cause excessive modifications. I recommend reusing the existing code.

Collaborator:

@TechxGenus I think we can do this optimization in the Gemma model file. Just put this check right before `for (param_name, shard_name, shard_id) in stacked_params_mapping:` without changing the for-else code part.

Contributor Author:

> @TechxGenus I think we can do this optimization in the Gemma model file. Just put this check right before `for (param_name, shard_name, shard_id) in stacked_params_mapping:` without changing the for-else code part.

The quantized versions of gemma-2b and gemma-7b work fine, but that placement will cause problems for future models with attention_bias set to true.

+                    continue
                 # GemmaRMSNorm is different from Llama's in that it multiplies
                 # (1 + weight) to the output, instead of just weight.
                 if "norm.weight" in name:
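
On the GemmaRMSNorm comment in the last context lines: Gemma's norm scales the normalized activations by (1 + weight) rather than by weight, so checkpoints store the scale as an offset around zero. The snippet below only illustrates that difference and one way a loader built on a plain weight-multiplying RMSNorm could compensate at load time (by adding 1 to the checkpoint weight); it is not necessarily what the rest of gemma.py's load_weights (not shown in this hunk) does.

```python
import torch

def rms_normalize(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # RMS normalization without the learned scale.
    return x * torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)

def llama_style_rmsnorm(x: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    # Llama-style RMSNorm: scale the normalized output by `weight`.
    return rms_normalize(x) * weight

def gemma_style_rmsnorm(x: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    # Gemma-style RMSNorm: scale by `(1 + weight)`.
    return rms_normalize(x) * (1.0 + weight)

x = torch.randn(2, 8)
w = torch.randn(8) * 0.1

# A loader that reuses a Llama-style RMSNorm can reproduce Gemma's behaviour
# by adding 1 to the checkpoint's norm weight at load time.
assert torch.allclose(gemma_style_rmsnorm(x, w), llama_style_rmsnorm(x, w + 1.0))
```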