Conversation

@Bojun-Feng (Contributor) commented on Dec 2, 2024

What does this PR do?

Fixes #35041

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

I followed the format of #30483 and don't think new documentation or tests are necessary for enabling KV cache quantization on a single model. Please let me know if I'm wrong.
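For context, KV cache quantization in Transformers is selected at generation time rather than through a model-specific API, so a change like this one only has to mark the model as supporting the quantized cache. Below is a minimal sketch of how the feature would be exercised with Mistral once this PR is in; the checkpoint name, the `quanto` backend, and the exact `cache_config` keys are assumptions based on the general Transformers quantized-cache documentation, not something defined by this PR.

```python
# Minimal sketch (not part of this PR): generating with a quantized KV cache on Mistral.
# Assumes the quanto backend is installed (e.g. `pip install optimum-quanto`) and that
# Mistral accepts cache_implementation="quantized" after this change.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example checkpoint, any Mistral causal LM
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer(
    "Explain KV cache quantization in one sentence.", return_tensors="pt"
).to(model.device)

# The quantized cache is chosen via `cache_implementation`; `cache_config`
# picks the backend and bit width (keys assumed from the Transformers docs).
out = model.generate(
    **inputs,
    max_new_tokens=64,
    cache_implementation="quantized",
    cache_config={"backend": "quanto", "nbits": 4},
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

As in #30483, enabling this for a single model typically amounts to flipping a per-model support flag (such as `_supports_quantized_cache`) in the modeling file, which is why no new documentation or tests are expected here.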

Who can review?

@zucchini-nlp

@zucchini-nlp (Member) left a comment


Perfect, thanks!

@zucchini-nlp (Member) commented:

Don't think we need to wait for a core maintainer's review for this tiny change, so maybe @Rocketknight1 can take a look and we'll merge.

@HuggingFaceDocBuilderDev commented:

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@Bojun-Feng (Contributor, Author) commented:

@Rocketknight1 Can we get this merged, please?

@zucchini-nlp (Member) commented:

oops sorry, merging

@zucchini-nlp zucchini-nlp merged commit 9661896 into huggingface:main May 20, 2025
17 checks passed
faaany pushed a commit to faaany/transformers that referenced this pull request May 21, 2025
xvyv99 pushed a commit to xvyv99/transformers that referenced this pull request May 21, 2025
redmoe-moutain pushed a commit to redmoe-moutain/transformers that referenced this pull request Jun 10, 2025
Successfully merging this pull request may close these issues.

Enable Quantize KV Cache for Mistral Model