
Commit 2250446

add bias in vllm moe (vllm-project#15)
Signed-off-by: Ma, Liangliang <liangliang.ma@intel.com>
1 parent 9426843 commit 2250446

File tree

1 file changed (+2 −0 lines)
  • vllm/model_executor/layers/fused_moe

vllm/model_executor/layers/fused_moe/layer.py

Lines changed: 2 additions & 0 deletions
@@ -936,7 +936,9 @@ def forward_xpu(
         return xpu_fused_moe(
             hidden_states=x,
             w13=layer.w13_weight,
+            w13_bias=layer.w13_bias if self.moe.has_bias else None,
             w2=layer.w2_weight,
+            w2_bias=layer.w2_bias if self.moe.has_bias else None,
             topk_weights=routing_weights,
             topk_ids=selected_experts,
             n_experts_per_token=top_k,
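The two added keyword arguments thread optional bias tensors into the fused MoE call, guarded by `self.moe.has_bias`. A rough NumPy sketch of where such biases would enter a single expert's forward pass (the function name, shapes, and SiLU-gated activation here are illustrative assumptions, not vLLM's actual XPU kernel):

```python
import numpy as np

def moe_expert_forward(x, w13, w2, w13_bias=None, w2_bias=None):
    """Illustrative single-expert MoE MLP (not vLLM's fused kernel).

    x:   (tokens, hidden)
    w13: (2 * intermediate, hidden)  -- fused gate and up projections
    w2:  (hidden, intermediate)      -- down projection
    """
    h = x @ w13.T                       # (tokens, 2 * intermediate)
    if w13_bias is not None:            # bias only applied when has_bias is set
        h = h + w13_bias
    gate, up = np.split(h, 2, axis=-1)  # un-fuse gate / up halves
    act = (gate / (1.0 + np.exp(-gate))) * up  # SiLU(gate) * up
    out = act @ w2.T                    # (tokens, hidden)
    if w2_bias is not None:
        out = out + w2_bias
    return out
```

Passing `None` for either bias reproduces the pre-change behavior, which is why the diff can gate both arguments on a single `has_bias` flag without touching the no-bias path.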
