Error using DynamicSoftMarginLoss #731

Open
inakierregueab opened this issue Nov 12, 2024 · 2 comments
Labels
bug Something isn't working

Comments

@inakierregueab

Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [128, 128, 1, 1]] is at version 4; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
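This error means autograd saved a tensor for the backward pass and that tensor was then modified in place before `backward()` ran, so its version counter no longer matches the saved version. A minimal sketch of the same failure mode in plain PyTorch, unrelated to the library's internals:

```python
import torch

x = torch.randn(4, requires_grad=True)
y = x * 2            # intermediate node in the autograd graph
z = y ** 2           # pow saves y: its backward needs 2 * y * grad_output
y.add_(1)            # in-place update bumps y's version counter

try:
    z.sum().backward()
except RuntimeError as e:
    print(e)  # "one of the variables needed for gradient computation
              #  has been modified by an inplace operation ..."
```

As the hint in the message suggests, running the failing step under `torch.autograd.set_detect_anomaly(True)` reports which forward op produced the tensor that was later overwritten.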

@inakierregueab (Author)

The initial error was:

Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
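This earlier error is the other classic autograd failure: the saved intermediate values of a graph are freed by the first `backward()`, so a second backward through the same graph fails unless `retain_graph=True` was passed. A minimal sketch:

```python
import torch

x = torch.randn(4, requires_grad=True)
loss = (x * x).sum()  # mul saves its inputs for the backward pass
loss.backward()       # first backward frees the saved intermediate values

try:
    loss.backward()   # second backward through the same (freed) graph
except RuntimeError as e:
    print(e)  # "Trying to backward through the graph a second time ..."

# Passing retain_graph=True on the first call keeps the saved tensors alive:
loss = (x * x).sum()
loss.backward(retain_graph=True)
loss.backward()       # fine now
```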

@KevinMusgrave added the bug label on Nov 15, 2024
@KevinMusgrave (Owner)

Thanks for the bug report. Can you provide a minimal amount of code that we can run to reproduce the error?
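For reference, a minimal reproduction for this library usually takes roughly the following shape. This is only a sketch assuming the standard pytorch-metric-learning convention of calling the loss as `loss_func(embeddings, labels)`; the encoder, batch shapes, and training loop are placeholders, not the reporter's actual setup:

```python
import torch
from pytorch_metric_learning import losses

model = torch.nn.Linear(32, 128)            # stand-in for the real encoder
loss_func = losses.DynamicSoftMarginLoss()  # default constructor arguments
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

for _ in range(3):                          # a second iteration is where
    data = torch.randn(16, 32)              # graph-reuse errors tend to appear
    labels = torch.randint(0, 4, (16,))
    embeddings = model(data)
    loss = loss_func(embeddings, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```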
