Fix training of pipeline based peft's lora model #5477
Conversation
@duli2012 Hi, I'm not sure if this pull request meets the project's requirements. Any suggestions on this PR? Looking forward to your reply :)
@xuanhua Sorry for the delay. Let's merge this after the tests pass.
@tohtana, thank you for your reply. I saw some unit test failures above; do I need to look into them?
@xuanhua I wonder if this is an issue on our CI. Let us take a look and restart after it is fixed.
Hi guys,
I found an assert failure when I train huggingface's LoRA-based model in pipeline style.
Here are the steps I used to create my model:
I called `get_peft_model(...)` with my `LoraConfig(...)` on Model_A to create the LoRA model, Model_B. I then ran Model_C under two 3090 Ti GPUs. The assertion failure looks like this:
After some debugging, I found the root cause: my LoRA configuration (below) only adds extra LoRA layers to the qkv-related layers, not to the embedding layer, so all of the embedding layer's parameters are frozen.
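A minimal sketch of such a configuration, assuming huggingface peft's `LoraConfig`; the exact `target_modules` names (`q_proj` etc.) are placeholders and depend on the actual model architecture:

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    # Only the attention qkv projections get LoRA adapters. The embedding
    # layer is NOT listed here, so its parameters stay frozen
    # (requires_grad=False) and produce no gradients during training.
    target_modules=["q_proj", "k_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
# model_b = get_peft_model(model_a, lora_config)  # model_a is the base model
```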
In my implementation of the pipeline-based model, I declared the embedding layer as a tied layer. So the situation is: there are no gradients at all for the embedding layer, but as a tied layer it still needs to be synced between the two GPUs, and a gradient whose value is `None` gets passed to the `all_reduce` operation. Currently, my fix is simple: add a check for whether the `grad` is `None`.
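The idea of the guard can be sketched as follows; `sync_tied_grads` and `all_reduce_fn` are hypothetical names standing in for DeepSpeed's actual tied-layer gradient sync path, not its real API:

```python
def sync_tied_grads(tied_params, all_reduce_fn):
    """Reduce gradients of tied parameters across pipeline stages,
    skipping any parameter whose grad is None (e.g. frozen by LoRA)."""
    for p in tied_params:
        grad = getattr(p, "grad", None)
        if grad is None:
            # Frozen parameter: no gradient was ever produced, so there
            # is nothing to all-reduce. Passing None would crash.
            continue
        all_reduce_fn(grad)
```

With this check, stages simply skip the frozen embedding's missing gradient instead of handing `None` to the collective call.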