
The for loop of optimizer.step() and loss.backward(retain_graph=True) #1

@WithChameleon

Description

Traceback (most recent call last):
  File "F:\Aseq_DIP-main\denoising_main.py", line 167, in
    loss.backward(retain_graph=True)
  File "C:\Users\admin\anaconda3\envs\Hyperspectral\Lib\site-packages\torch\tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "C:\Users\admin\anaconda3\envs\Hyperspectral\Lib\site-packages\torch\autograd\__init__.py", line 132, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [3, 64, 1, 1]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

This is the error I get when I run the author's denoising code.
Is there a problem with the order of optimizer.step() and loss.backward(retain_graph=True)? When I remove the for loop around the optimizer and swap the order of the two calls, the program runs fine. Is that still consistent with the original NeurIPS paper?
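
For reference, here is a minimal sketch of why the ordering matters (the model, data, and loop structure below are illustrative assumptions, not the author's actual code). optimizer.step() updates the parameters in place; if loss.backward(retain_graph=True) is then called again on a graph that was built before that step, the saved parameter tensors are at a newer version than the graph recorded, which produces exactly the error above. Calling backward() before step() on a freshly built graph avoids it.

```python
import torch
import torch.nn as nn

# Illustrative model and data, not the author's actual setup.
net = nn.Sequential(nn.Conv2d(3, 64, 1), nn.Conv2d(64, 3, 1))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.randn(1, 3, 32, 32)
target = torch.randn(1, 3, 32, 32)

# Failing pattern: the graph is built once and reused, but step()
# modifies the weights in place, so the second backward() sees saved
# tensors at version 2 while the retained graph expects version 1.
loss = ((net(x) - target) ** 2).mean()
try:
    for _ in range(2):
        optimizer.zero_grad()
        loss.backward(retain_graph=True)
        optimizer.step()  # in-place update invalidates the retained graph
except RuntimeError as e:
    print("Second backward on the stale graph fails:", e)

# Working pattern: rebuild the graph each iteration, call backward()
# first, then step(); retain_graph is no longer needed.
for _ in range(2):
    optimizer.zero_grad()
    loss = ((net(x) - target) ** 2).mean()
    loss.backward()
    optimizer.step()
```

Whether collapsing the per-optimizer loop into this single backward/step ordering still matches the update scheme described in the paper is something only the authors can confirm.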
