
Commit 45ea3f1

Fix internlm after vllm-project/vllm#2860 (#2861)
Parent: 013087c

File tree: 1 file changed (+2, -1 lines)


vllm/model_executor/models/llama.py

Lines changed: 2 additions & 1 deletion
@@ -175,7 +175,8 @@ def __init__(
         self.self_attn = LlamaAttention(
             hidden_size=self.hidden_size,
             num_heads=config.num_attention_heads,
-            num_kv_heads=config.num_key_value_heads,
+            num_kv_heads=getattr(config, "num_key_value_heads",
+                                 config.num_attention_heads),
             rope_theta=rope_theta,
             rope_scaling=rope_scaling,
             max_position_embeddings=max_position_embeddings,
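
The change makes num_key_value_heads optional: when a model config (such as InternLM's, which the commit title says broke after #2860) does not define the attribute, the code falls back to config.num_attention_heads, i.e. one KV head per query head (plain multi-head attention instead of grouped-query attention). Below is a minimal sketch of that fallback behavior; the SimpleNamespace configs and the resolve_num_kv_heads helper are illustrative stand-ins, not vLLM or Hugging Face code.

from types import SimpleNamespace

# Stand-in configs (not real Hugging Face config classes) to show the fallback.
llama_like = SimpleNamespace(num_attention_heads=32, num_key_value_heads=8)
internlm_like = SimpleNamespace(num_attention_heads=32)  # no num_key_value_heads

def resolve_num_kv_heads(config):
    # Same pattern as the patched line: use num_key_value_heads when the
    # config defines it, otherwise fall back to num_attention_heads
    # (i.e. plain multi-head attention, one KV head per query head).
    return getattr(config, "num_key_value_heads", config.num_attention_heads)

print(resolve_num_kv_heads(llama_like))     # 8  -> grouped-query attention
print(resolve_num_kv_heads(internlm_like))  # 32 -> falls back to MHA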
