Fix RWKV backward on GPU #23774

Merged 1 commit on May 26, 2023
9 changes: 3 additions & 6 deletions src/transformers/models/rwkv/modeling_rwkv.py
@@ -159,7 +159,7 @@ def forward(ctx, time_decay, time_first, key, value, state=None, return_state=Fa

    @staticmethod
    # g stands for grad
-    def backward(ctx, g_output):
+    def backward(ctx, g_output, g_state=None):
Contributor:

Why is this variable not used later on?

Collaborator (Author) @sgugger, May 25, 2023:

Because we don't handle the gradient of the state. But autograd is not happy if this isn't here: it wants one input gradient per output of the forward.

Contributor:

Thanks so much for explaining!
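
A minimal sketch of the rule described above (not from the PR; `TwoOutputFn` and its math are invented for illustration): `torch.autograd.Function` passes one gradient argument to `backward` for every output of `forward`, so the extra parameter has to be accepted even when its gradient is never handled.

```python
import torch


class TwoOutputFn(torch.autograd.Function):
    # Toy Function with two outputs, mirroring (output, state) in the RWKV kernel.

    @staticmethod
    def forward(ctx, x):
        output = x * 2
        state = x + 1  # second output, analogous to the recurrent state
        return output, state

    @staticmethod
    def backward(ctx, g_output, g_state=None):
        # autograd passes one gradient argument per forward output, so g_state
        # must be accepted here even though its gradient is not handled.
        return 2 * g_output


x = torch.randn(3, requires_grad=True)
output, state = TwoOutputFn.apply(x)
output.sum().backward()
print(x.grad)  # tensor([2., 2., 2.])
```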

        input_dtype = ctx.input_dtype

        time_decay, time_first, key, value, output = ctx.saved_tensors
@@ -188,17 +188,14 @@ def backward(ctx, g_output):
            g_key,
            g_value,
        )
-        g_time_decay = torch.sum(g_time_decay, dim=0)
-        g_time_first = torch.sum(g_time_first, dim=0)

        return (
-            None,
-            None,
-            None,
Comment on lines -191 to -197

Contributor:

Also, just out of curiosity, why did the number of outputs change? 🤔

Collaborator (Author) @sgugger:

Because autograd wants one gradient per input of the forward :-)
It used to have three ints, then the variables, but no state and no bool; this PR adapts it to the changes I made.
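
A sketch of that other half of the contract (again hypothetical; `KernelLikeFn` and its placeholder math stand in for the real CUDA kernel): `backward` returns one entry per input of the new forward signature, with `None` in the slots of the non-differentiable `state` and `return_state` arguments.

```python
import torch


class KernelLikeFn(torch.autograd.Function):
    # Toy Function whose forward signature mirrors the new one:
    # (time_decay, time_first, key, value, state=None, return_state=False).

    @staticmethod
    def forward(ctx, time_decay, time_first, key, value, state=None, return_state=False):
        ctx.save_for_backward(key, value)
        # Placeholder math standing in for the real CUDA kernel.
        return key * value + time_decay + time_first

    @staticmethod
    def backward(ctx, g_output):
        key, value = ctx.saved_tensors
        # One entry per forward input: four tensor gradients, then None for
        # `state` and None for the `return_state` bool.
        return (
            g_output,          # d(output)/d(time_decay) is 1 for the placeholder math
            g_output,          # d(output)/d(time_first) is 1 as well
            g_output * value,  # d(output)/d(key)
            g_output * key,    # d(output)/d(value)
            None,
            None,
        )


td, tf, k, v = (torch.randn(2, 3, requires_grad=True) for _ in range(4))
out = KernelLikeFn.apply(td, tf, k, v, None, False)
out.sum().backward()  # six gradients returned, one per forward input
```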

            g_time_decay.to(input_dtype),
            g_time_first.to(input_dtype),
            g_key.to(input_dtype),
            g_value.to(input_dtype),
+            None,
+            None,
        )

