
[core] Add 4bit QLora #383

Merged
younesbelkada merged 2 commits into main on May 24, 2023
Conversation

younesbelkada (Contributor) commented May 23, 2023

What does this PR do?

Adds 4bit QLora quantization support to TRL

Related: huggingface/transformers#23479

Users just need to replace `load_in_8bit` with `load_in_4bit`.
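
For context, a minimal sketch of what this enables on the user side, assuming TRL's `peft_config` argument to `from_pretrained` and the `load_in_4bit` flag from huggingface/transformers#23479; the model name and LoRA hyperparameters are illustrative, not taken from this PR:

```python
from peft import LoraConfig
from trl import AutoModelForCausalLMWithValueHead

# Before this PR: load_in_8bit=True. With 4-bit QLoRA support, the same
# call path accepts load_in_4bit=True and forwards it to transformers.
model = AutoModelForCausalLMWithValueHead.from_pretrained(
    "facebook/opt-350m",   # illustrative base model
    load_in_4bit=True,     # was: load_in_8bit=True
    device_map="auto",
    peft_config=LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"
    ),
)
```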

HuggingFaceDocBuilderDev commented May 23, 2023

The documentation is not available anymore as the PR was closed or merged.

younesbelkada merged commit 5fb5af7 into main on May 24, 2023
younesbelkada deleted the add-4bit branch on May 24, 2023 at 11:52