Conversation

@tarsbase commented on Sep 2, 2019

No description provided.

fatchord and others added 29 commits May 7, 2019 12:33
+ Made quick_start.py and train_wavernn.py select the device intelligently
  (see the sketch below)
+ Made the device configurable in tacotron.py and fatchord_version.py
+ Added macOS .DS_Store files to .gitignore
+ Deleted an extraneous slash in utils/paths.py
! Still need to update deepmind_version.py and vet the other files
! Device independence should now be complete for everything. There is no
  intention of making the notebooks device independent for now, as they
  use their own copies of these classes (they were likely copied in to
  work around a relative import issue).
Added CPU support, other fixes too
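A minimal sketch of the kind of device selection described above, assuming nothing beyond PyTorch itself; the actual logic in quick_start.py and train_wavernn.py may differ in its details:

```python
import torch

# Prefer CUDA when a GPU is visible, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f'Using device: {device}')

# The model and each batch are then moved to the selected device, e.g.:
# model = model.to(device)
# x = x.to(device)
```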
+ Added `voc_clip_grad_norm` hparam
+ Enabled gradient clipping in the vocoder (see the sketch below)
+ Added a warning when the gradient is NaN in the vocoder and synthesizer
+ Added a workaround for the broken nn.DataParallel in `utils.__init__.py`
* Refactored `tacotron.py` and `fatchord_version.py` to store the training
  step in a PyTorch buffer instead of a gradient-less parameter (sketched
  below)
* Refactored the training code for WaveRNN and Tacotron to automatically
  use data parallelism when multiple GPUs are present (sketched below).
! Note that your batch size must be divisible by the number of GPUs
* Made model.load() always map loaded tensors to the device of the model's
  parameters instead of always to the CPU (see the device-handling sketch
  below).
* Fixed a bug where a tensor's CUDA placement was checked by comparing it
  to the CUDA device; that comparison is not reliable, and it is unclear
  how it ever worked before.
+ Added type annotations for WaveRNN and Tacotron in the train and
  generate scripts for each model
* Fixed a missing numpy import in `fatchord_version.py` and
  `deepmind_version.py`
- Removed get_r and set_r
* Moved ownership of the self.r buffer from Tacotron to Decoder
* Now using a property in Tacotron for self.r (sketched below)
* Made all buffers reside on the same device as the parameters to fix an
  issue where nn.parallel.gather expected a tensor to be on a particular
  GPU but it was on a different device (the CPU) instead.
* Updated gen_tacotron.py and train_tacotron.py to use the self.r
  property instead of the getter and setter methods
* Made all values returned from forward() tensors to ensure that
  nn.parallel.gather works properly
Enabled multi-gpu training, buffers, grad clip in vocoder, saving optimizer state, and more fixes
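A self-contained sketch of the gradient clipping and NaN warning described above. The toy model, data, optimizer, and the value 4.0 stand in for the real vocoder training loop; only the `voc_clip_grad_norm` name comes from the commit:

```python
import math

import torch
import torch.nn as nn
from torch.nn.utils import clip_grad_norm_

voc_clip_grad_norm = 4.0                 # placeholder value for the hparam
model = nn.Linear(10, 1)                 # toy stand-in for the vocoder
optimizer = torch.optim.Adam(model.parameters())

x, y = torch.randn(8, 10), torch.randn(8, 1)
loss = nn.functional.mse_loss(model(x), y)

optimizer.zero_grad()
loss.backward()

# Clip the overall gradient norm and warn if it came back NaN.
grad_norm = clip_grad_norm_(model.parameters(), voc_clip_grad_norm)
if math.isnan(grad_norm):
    print('grad_norm was NaN!')
optimizer.step()
```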
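A sketch of keeping the training step in a registered buffer instead of a gradient-less parameter; the class below is illustrative, not the repository's Tacotron or WaveRNN:

```python
import torch
import torch.nn as nn


class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 1)
        # A buffer is saved/loaded with the state dict and follows the
        # module across devices via .to(device), but it is not returned
        # by parameters() and never receives a gradient.
        self.register_buffer('step', torch.zeros(1, dtype=torch.long))

    def forward(self, x):
        if self.training:
            self.step += 1               # advance the counter in-place
        return self.fc(x)


model = ToyModel()
model(torch.randn(4, 10))
print(int(model.step))                   # -> 1
```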
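A sketch of the conditional data parallelism and the related constraints mentioned above: nn.DataParallel splits each batch across the visible GPUs (hence the divisibility note), and nn.parallel.gather can only recombine tensors, which is why forward() now returns tensors. The linear model is a placeholder:

```python
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = nn.Linear(10, 1).to(device)      # placeholder for WaveRNN/Tacotron

if torch.cuda.device_count() > 1:
    # Each replica receives batch_size / num_gpus items, so the batch
    # size must divide evenly by the GPU count. Replica outputs are then
    # gathered back onto one device, which only works when forward()
    # returns tensors.
    model = nn.DataParallel(model)

x = torch.randn(8, 10, device=device)
y = model(x)
```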
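A sketch of the device-handling fixes above: derive placement from the parameters themselves rather than comparing a tensor's device with torch.device('cuda') (a CUDA tensor normally reports an indexed device such as cuda:0, so that equality check can fail), and map checkpoints onto whatever device the parameters already occupy. The checkpoint path is purely illustrative:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                 # placeholder model
param = next(model.parameters())

# Reliable placement check.
on_gpu = param.is_cuda                   # or: param.device.type == 'cuda'

# Map a checkpoint's tensors to the parameters' device, not always the CPU.
device = param.device
# state = torch.load('checkpoints/model.pyt', map_location=device)
# model.load_state_dict(state)
```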
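A sketch of the self.r change: the reduction-factor buffer is owned by the decoder, and Tacotron exposes it through a property, so callers assign model.r directly instead of using the removed get_r/set_r. The skeleton is illustrative and omits everything else the real classes contain:

```python
import torch
import torch.nn as nn


class Decoder(nn.Module):
    def __init__(self, r):
        super().__init__()
        # The decoder owns the reduction-factor buffer.
        self.register_buffer('r', torch.tensor(r, dtype=torch.long))


class Tacotron(nn.Module):
    def __init__(self, r=2):
        super().__init__()
        self.decoder = Decoder(r)

    @property
    def r(self):
        return self.decoder.r.item()

    @r.setter
    def r(self, value):
        # Assigning a tensor to a registered buffer name replaces the
        # buffer; the real code would also keep it on the parameters'
        # device.
        self.decoder.r = torch.tensor(value, dtype=torch.long)


model = Tacotron()
model.r = 7                              # replaces set_r(7)
print(model.r)                           # replaces get_r() -> 7
```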
Revert "Enabled multi-gpu training, buffers, grad clip in vocoder, saving optimizer state, and more fixes"
@tarsbase merged commit d20e0a2 into tarsbase:master on Sep 2, 2019