
Conversation


@kylesayrs kylesayrs commented Dec 28, 2024

Purpose

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
@github-actions
👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

@kylesayrs kylesayrs marked this pull request as draft January 1, 2025 19:25
@kylesayrs kylesayrs changed the title Add skip_missing_weights_context Remove missing weights silencers in favor of HFQuantizer solution Jan 31, 2025
@kylesayrs

This PR originally implemented a skip_missing_weights_context, which silenced warnings about missing weights when loading a quantized model. Instead, @rahul-tuli will implement a more integrated solution in the HF quantizer that silences these warnings directly, so this PR is now reduced to removing the old warning-silencing code.
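For illustration only, here is a minimal sketch of what a warning-silencing context manager like this might look like. The name skip_missing_weights_context comes from this PR, but the body below is an assumption (suppressing the transformers model-loading logger), not the actual implementation that was removed:

```python
import contextlib
import logging


@contextlib.contextmanager
def skip_missing_weights_context():
    """Hypothetical sketch: temporarily raise the level of the
    transformers model-loading logger so that 'some weights were not
    initialized' warnings emitted during from_pretrained are hidden.
    The previous level is restored on exit, even if loading fails.
    """
    logger = logging.getLogger("transformers.modeling_utils")
    previous_level = logger.level
    logger.setLevel(logging.ERROR)
    try:
        yield
    finally:
        logger.setLevel(previous_level)
```

A caller would wrap model loading, e.g. `with skip_missing_weights_context(): model = AutoModel.from_pretrained(...)`. The integrated HFQuantizer approach avoids the need for such a wrapper by telling the loader which weights are legitimately absent.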

@kylesayrs

@rahul-tuli's HFQuantizer changes must land before this PR can merge.

@kylesayrs kylesayrs self-assigned this Feb 7, 2025
@kylesayrs

huggingface/transformers#36152 has landed

@kylesayrs kylesayrs marked this pull request as ready for review February 24, 2025 19:55

@rahul-tuli rahul-tuli left a comment


Thank you for this change!

@kylesayrs kylesayrs enabled auto-merge (squash) March 11, 2025 16:15
@kylesayrs kylesayrs merged commit 2f7c620 into main Mar 12, 2025
8 checks passed
@kylesayrs kylesayrs deleted the kylesayrs/fast-load-context branch March 12, 2025 00:41