Before submitting
What does this PR do?
Fixes #306
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
I did! Glad to contribute to the best PyTorch wrapper 🙃
Notes
The quality of the generated samples remains low, but I don't have the time to tune the architecture further. Also, applying tricks that help reach a Nash equilibrium, such as label smoothing, could make the example harder to follow, so I left it as is.
This PR was motivated by the case of a heavy generator and discriminator that we don't want to call multiple times. I removed the repeated calls to the generator (which changes the optimization problem a bit, because the discriminator is now trained to detect the very samples the generator just produced), BUT I did not remove the repeated calls to the discriminator. In fact, trying
self.manual_backward(g_loss, retain_graph=True)
caused an error when backpropagating on the discriminator loss, because the generator weights had already been updated in place (RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation). If anyone knows how to remove the two calls to the discriminator, I'm curious!
Logging is fixed and on_validation_epoch_end is now called.
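For reference, here is a minimal sketch of the pattern described above, not the PR's actual code: a manual-optimization LightningModule where the generator is called once per batch while the discriminator is still called on both the fresh samples and their detached copy. The generator, discriminator, latent_dim, the BCE-with-logits losses, and the (images, labels) batch layout are all assumptions made for illustration.

```python
# Minimal sketch (assumptions, not the PR's code): manual-optimization GAN
# step with a single generator forward pass per batch. `generator`,
# `discriminator`, and `latent_dim` are hypothetical placeholders.
import torch
import torch.nn.functional as F
import pytorch_lightning as pl


class GAN(pl.LightningModule):
    def __init__(self, generator, discriminator, latent_dim=100, lr=2e-4):
        super().__init__()
        self.automatic_optimization = False  # we step the optimizers ourselves
        self.generator = generator
        self.discriminator = discriminator
        self.latent_dim = latent_dim
        self.lr = lr

    def configure_optimizers(self):
        opt_g = torch.optim.Adam(self.generator.parameters(), lr=self.lr)
        opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=self.lr)
        return opt_g, opt_d

    def training_step(self, batch, batch_idx):
        real, _ = batch  # assumes an (images, labels) batch, labels unused
        opt_g, opt_d = self.optimizers()
        ones = torch.ones(real.size(0), 1, device=self.device)
        zeros = torch.zeros(real.size(0), 1, device=self.device)

        # Single generator forward pass; its samples are reused below.
        z = torch.randn(real.size(0), self.latent_dim, device=self.device)
        fake = self.generator(z)

        # Generator update. Reusing this same discriminator output for the
        # discriminator loss via retain_graph=True fails, because opt_g.step()
        # modifies the generator weights in place before the second backward.
        g_loss = F.binary_cross_entropy_with_logits(self.discriminator(fake), ones)
        opt_g.zero_grad()
        self.manual_backward(g_loss)
        opt_g.step()

        # Discriminator update on the same (now detached) samples: the
        # discriminator is called again, but the generator is not.
        d_loss = 0.5 * (
            F.binary_cross_entropy_with_logits(self.discriminator(real), ones)
            + F.binary_cross_entropy_with_logits(self.discriminator(fake.detach()), zeros)
        )
        opt_d.zero_grad()
        self.manual_backward(d_loss)
        opt_d.step()

        self.log_dict({"g_loss": g_loss, "d_loss": d_loss}, prog_bar=True)
```

Detaching the generated samples before the discriminator step is what keeps the second backward pass independent of the (already updated) generator graph, at the cost of the extra discriminator forward pass the note above mentions.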