
Conversation


@mengniwang95 commented on Apr 4, 2025

Previously, when we used INC to convert the DeepSeek FP8 model, we needed this commit to remove the extra converts in KVCache, although in theory GC can remove them during graph optimization.
Furthermore, the change in that commit is not aligned with the design of the INC patched module, which keeps the returned tensor in BF16 because we cannot know the user's next operation.
So I updated the modeling file so that GC can handle the patched KVCache pattern of the DeepSeek model.
Since the next release is very close and GC currently does not work as expected during the decode stage, this is still a workaround. We will root-cause and fix it at the source in the next release.
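To illustrate the design point, here is a minimal sketch (hypothetical class and parameter names, not the actual INC or modeling code) of the pattern described above: the patched KVCache stores values in FP8 but always returns BF16, keeping the dequantizing convert inside the patched module and relying on graph optimization to fold that convert when the next op makes it redundant.

```python
# Minimal sketch, assuming per-tensor FP8 quantization with a single scale.
# Names (PatchedKVCache, update, scale) are illustrative only.
import torch


class PatchedKVCache(torch.nn.Module):
    """Stores keys/values in FP8 and dequantizes to BF16 on read."""

    def __init__(self, scale: torch.Tensor):
        super().__init__()
        self.scale = scale   # dequantization scale (assumed per-tensor)
        self.cache = None    # FP8 storage (torch.float8_e4m3fn)

    def update(self, new_kv: torch.Tensor) -> torch.Tensor:
        # Quantize the incoming BF16 tensor to FP8 for storage.
        fp8_kv = (new_kv / self.scale).to(torch.float8_e4m3fn)
        self.cache = fp8_kv if self.cache is None else torch.cat(
            [self.cache, fp8_kv], dim=-2)
        # Always return BF16: the caller's next operation is unknown, so the
        # convert stays here and the graph compiler is expected to eliminate
        # it during graph optimization when it turns out to be redundant.
        return (self.cache.to(torch.float32) * self.scale).to(torch.bfloat16)
```

The design choice is that the patched module never leaks FP8 tensors to its callers; any redundant BF16-to-FP8 round trips are left for the graph compiler to remove rather than being stripped at the modeling level.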

This PR should be used together with intel/neural-compressor#2165.

Signed-off-by: Mengni Wang <mengni.wang@intel.com>

@xuechendi left a comment

LGTM

@xuechendi merged commit a26e777 into HabanaAI:deepseek_r1 on Apr 4, 2025
4 of 31 checks passed