0%| | 0/15000 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 132, in <module>
    main()
  File "train.py", line 114, in main
    joint_train(reals, gens[:curr_stage], group_gan_models, lengths,
  File "/nfs7/y50021900/ganimator/models/architecture.py", line 71, in joint_train
    list(map(optimize_lambda, gan_models))
  File "/nfs7/y50021900/ganimator/models/architecture.py", line 68, in <lambda>
    optimize_lambda = lambda x: x.optimize_parameters(gen=True, disc=False, rec=False)
  File "/nfs7/y50021900/ganimator/models/gan1d.py", line 164, in optimize_parameters
    self.backward_G()
  File "/nfs7/y50021900/ganimator/models/gan1d.py", line 136, in backward_G
    loss_total.backward(retain_graph=True)
  File "/nfs7/y50021900/miniconda3/envs/ganimator/lib/python3.8/site-packages/torch/_tensor.py", line 307, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/nfs7/y50021900/miniconda3/envs/ganimator/lib/python3.8/site-packages/torch/autograd/__init__.py", line 154, in backward
    Variable._execution_engine.run_backward(
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [429, 256, 1, 5]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
How can I fix it?
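As the hint in the error message suggests, the operation that invalidates the saved tensor can usually be located by turning on anomaly detection before training starts. Below is a minimal, self-contained sketch in plain PyTorch (the toy tensors are purely illustrative and unrelated to ganimator's modules) showing what the flag does:

```python
# Minimal sketch, plain PyTorch only: anomaly detection reports the forward-pass
# operation whose saved tensor was later modified in place.
import torch

torch.autograd.set_detect_anomaly(True)  # debugging aid; it slows training, so disable it afterwards

a = torch.randn(4, requires_grad=True)
b = a.sigmoid()      # sigmoid saves its output for the backward pass
b.add_(1)            # in-place edit bumps the saved tensor's version counter
try:
    b.sum().backward()
except RuntimeError as err:
    # Besides this exception, PyTorch prints a warning with the forward-pass
    # traceback pointing at the in-place add_() above.
    print(err)
```

With the flag enabled, the reported forward traceback points at the line in the training code that modifies a tensor the backward pass still needs.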
Hi, thanks for your question. The skeleton_aware option is part of the legacy code that is no longer maintained. This error is very difficult to fix, and we will probably remove the option in the next version.
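For reference, and independent of the skeleton_aware code path, a frequent trigger for this exact message in GAN-style loops is the combination visible in the traceback: a backward(retain_graph=True) followed by an optimizer step (an in-place weight update) before the retained graph is used again. The sketch below is hypothetical, with placeholder modules G, D, and opt_G rather than ganimator's actual classes; it shows the failing pattern and the usual remedy of rebuilding the forward pass instead of reusing the retained graph:

```python
# Hypothetical sketch: G, D and opt_G are stand-ins, not ganimator code.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 8), nn.Tanh(), nn.Linear(8, 8))
D = nn.Linear(8, 1)
opt_G = torch.optim.Adam(G.parameters())

z = torch.randn(4, 8)
loss = D(G(z)).mean()
loss.backward(retain_graph=True)  # keeps the graph (and its saved weights) alive
opt_G.step()                      # in-place weight update bumps the version counters
# loss.backward()                 # reusing the retained graph here would raise the
#                                 # "modified by an inplace operation" RuntimeError

# Common remedy: recompute the forward pass after the update, so the second
# backward sees saved tensors at the versions it expects.
opt_G.zero_grad()
D(G(z)).mean().backward()
```

Whether this is what happens inside optimize_parameters here is only a guess; anomaly detection (see the sketch above) is the reliable way to confirm which in-place operation is responsible.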
Hello, thank you for your great work!
Here is a question: when I set skeleton_aware=0,
How can I fix it?