forked from fatchord/WaveRNN
Update from ori #1
Merged
Conversation
Python versions lower than 3.6 are not supported.
+ Made quick_start.py and train_wavernn.py intelligently select a device
+ Made tacotron.py and fatchord_version.py configurable for device
+ .gitignored macOS .DS_Store files
+ Deleted an extraneous slash in utils/paths.py
! Still need to update deepmind_version.py and vet the other files
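The "intelligently select device" change above amounts to the standard PyTorch pattern of falling back to CPU when CUDA is unavailable. A minimal sketch (the helper name `get_default_device` is hypothetical, not from the repo):

```python
import torch

def get_default_device() -> torch.device:
    """Pick CUDA when available, otherwise fall back to the CPU."""
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensors and models then follow the selected device:
device = get_default_device()
x = torch.zeros(1, 80).to(device)
```

Scripts built this way run unchanged on both CPU-only and GPU machines.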
! Device independence should now be complete for everything. There is no intention of making the notebooks device independent for now, as they use their own copies of these classes (they may have been copied in to work around a relative-import issue).
Added CPU support, other fixes too
+ Added `voc_clip_grad_norm` hparam
+ Enabled gradient clipping in the vocoder
+ Added a warning when the gradient is nan in the vocoder and synthesizer
+ Added a workaround for broken nn.DataParallel in `utils/__init__.py`
* Refactored `tacotron.py` and `fatchord_version.py` to store the step counter in a PyTorch buffer instead of a gradient-less parameter
* Refactored the training code for WaveRNN and Tacotron to automatically use data parallelism when multiple GPUs are present
! Note that your batch size must be divisible by the number of GPUs
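The gradient clipping plus nan warning described above can be sketched with `torch.nn.utils.clip_grad_norm_`, which clips and returns the total gradient norm in one call (the helper name `clip_and_check_grads` is an assumption, not the repo's actual function):

```python
import torch

def clip_and_check_grads(model: torch.nn.Module, max_norm: float) -> float:
    """Clip gradients in place to max_norm and warn when the norm is non-finite."""
    grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    if not torch.isfinite(grad_norm):
        print("WARNING: gradient norm is nan/inf; this update should be skipped")
    return float(grad_norm)

# Typical use inside a training step, after loss.backward():
model = torch.nn.Linear(3, 1)
loss = model(torch.randn(4, 3)).sum()
loss.backward()
norm = clip_and_check_grads(model, max_norm=1.0)
```

Checking the returned norm is cheaper than scanning every parameter's gradient for nans separately.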
* Made model.load() always map tensors to the device of the model's parameters instead of always to the CPU.
* Fixed a bug where checking whether a tensor was on CUDA was done by comparing it to the cuda device. That is not the correct way to check, and it is unclear how it even worked before.
+ Added type annotations for WaveRNN and Tacotron in the train and generate files for each model
* Fixed a missing numpy import in `fatchord_version.py` and `deepmind_version.py`
- Removed get_r and set_r
* Moved ownership of the self.r buffer from Tacotron to Decoder
* Tacotron now exposes self.r through a property decorator
* Made all buffers reside on the same device as the parameters, fixing an issue where nn.parallel.gather expected a tensor on a particular GPU but it was on a different device (the CPU) instead
* Updated gen_tacotron.py and train_tacotron.py to use the self.r property instead of getter and setter methods
* Made all values returned from forward() tensors to ensure that nn.parallel.gather works properly
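The buffer-plus-property pattern above can be sketched as follows: `register_buffer` keeps `r` and `step` in the state dict and moves them with `.to(device)` without making them trainable, while a property on the outer module forwards to the decoder's buffer. (The class names below are illustrative, not the repo's real classes.)

```python
import torch
from torch import nn

class DecoderSketch(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        # Buffers: saved/loaded with the state dict, moved by .to(device),
        # but excluded from model.parameters() and from the optimizer.
        self.register_buffer("r", torch.tensor(1))
        self.register_buffer("step", torch.tensor(0))

class TacotronSketch(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.decoder = DecoderSketch()

    @property
    def r(self) -> int:
        # Forward to the decoder, which owns the buffer.
        return int(self.decoder.r)

    @r.setter
    def r(self, value: int) -> None:
        # Assigning a tensor to a registered buffer name replaces the buffer.
        self.decoder.r = torch.tensor(value, device=self.decoder.r.device)

model = TacotronSketch()
model.r = 7
```

Storing `step` as a buffer instead of a gradient-less parameter keeps it out of the optimizer while still checkpointing it.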
Enabled multi-gpu training, buffers, grad clip in vocoder, saving optimizer state, and more fixes
Revert "Enabled multi-gpu training, buffers, grad clip in vocoder, saving optimizer state, and more fixes"