forked from fishaudio/Bert-VITS2
Dev refactor #93
Merged
Conversation
…from server_editor.py. The logic has not been changed; modules were only renamed, split, and moved on a per-function basis. The existing code will be left in place for the time being to avoid breaking the training code, which is not subject to refactoring this time.
…OX to style_bert_vits2/text_processing/japanese/user_dict/
… definitions and comments
…tyle_bert_vits2/models/. The code has not yet been cleaned up, only moved.
… loaded BERT models/tokenizers and replace all from_pretrained() calls with load_model()/load_tokenizer(). See the sketch below.
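A minimal sketch of what such a cached loader could look like; the function signatures and the use of transformers' Auto classes are assumptions for illustration, not the repository's actual code.

```python
# Hypothetical sketch of cached BERT loading (names and signatures assumed,
# not copied from style_bert_vits2).
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Module-level caches keyed by the pretrained model name or path.
_loaded_models: dict = {}
_loaded_tokenizers: dict = {}


def load_model(pretrained_model_name_or_path: str):
    """Load a BERT model once and reuse it on subsequent calls."""
    if pretrained_model_name_or_path not in _loaded_models:
        _loaded_models[pretrained_model_name_or_path] = (
            AutoModelForMaskedLM.from_pretrained(pretrained_model_name_or_path)
        )
    return _loaded_models[pretrained_model_name_or_path]


def load_tokenizer(pretrained_model_name_or_path: str):
    """Load a tokenizer once and reuse it on subsequent calls."""
    if pretrained_model_name_or_path not in _loaded_tokenizers:
        _loaded_tokenizers[pretrained_model_name_or_path] = (
            AutoTokenizer.from_pretrained(pretrained_model_name_or_path)
        )
    return _loaded_tokenizers[pretrained_model_name_or_path]
```

With this pattern, repeated calls from different modules return the same in-memory model instead of re-reading the weights each time.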
…each language to style_bert_vits2/text_processing/(language)/bert_feature.py
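For illustration, a caller could dispatch to the per-language module roughly like this; the module layout follows the commit message, while the function name and arguments are assumptions.

```python
# Hypothetical dispatch to the per-language bert_feature modules
# (package layout from the commit message; function name and arguments assumed).
from importlib import import_module


def extract_bert_feature(text: str, word2ph: list[int], language: str):
    # e.g. language = "japanese" ->
    #   style_bert_vits2.text_processing.japanese.bert_feature
    module = import_module(
        f"style_bert_vits2.text_processing.{language}.bert_feature"
    )
    return module.extract_bert_feature(text, word2ph)
```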
… style_bert_vits2/text_processing/__init__.py. These were usually used together as a set of three functions, and splitting them into separate files of only a few lines each felt wasteful.
…VITS2. Since app.py and server_editor.py already exist as alternative Web UIs, there is no need to revive webui.py in the future.
I have determined that this is excessive for this project at this time.
"text_processing" is clearer, but the import statement is longer. "nlp" is shorter and makes it clear that it is natural language processing.
pyopenjtalk_worker.initialize() has the side effect of starting another process and should not be executed automatically on import.
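A common way to avoid that kind of import-time side effect is an explicit, idempotent initializer that callers invoke before first use. A rough sketch, with the worker internals assumed for illustration:

```python
# Sketch of moving process start-up out of import time and behind an explicit,
# idempotent initialize() call (worker internals assumed for illustration).
import multiprocessing
from typing import Optional

_worker_process: Optional[multiprocessing.Process] = None


def _worker_loop() -> None:
    # Placeholder for the actual pyopenjtalk worker loop.
    pass


def initialize() -> None:
    """Start the worker process exactly once, only when explicitly requested."""
    global _worker_process
    if _worker_process is not None and _worker_process.is_alive():
        return
    _worker_process = multiprocessing.Process(target=_worker_loop, daemon=True)
    _worker_process.start()
```

Importing the module then has no side effects; the subprocess only starts when initialize() is called, and calling it twice is harmless.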
…consumption during training in the Web UI. Since the BERT features of the dataset are pre-extracted by bert_gen.py, there is no need to load the BERT model at training time.
Include in the sdist only the minimum files required to use style-bert-vits2 as a library.
…PU VRAM consumption during training in the Web UI" This reverts commit e8a76e5.
Minor fixes (see #92)
…consumption during training in the Web UI. Since the BERT features of the dataset are pre-extracted by bert_gen.py, there is no need to load the BERT model at training time. The idea is sketched below.
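A rough sketch of the idea: when bert_gen.py has already written the BERT features to disk, the training loop can read them instead of holding the BERT model in VRAM. The file naming and helper below are assumptions for illustration, not the repository's actual code.

```python
# Rough sketch: load pre-extracted BERT features from disk during training
# instead of keeping the BERT model on the GPU. File layout assumed.
from pathlib import Path

import torch


def get_bert_feature_for_training(wav_path: str) -> torch.Tensor:
    bert_path = Path(wav_path).with_suffix(".bert.pt")
    if bert_path.exists():
        # Pre-extracted by bert_gen.py: no BERT model needed at training time.
        return torch.load(bert_path)
    raise FileNotFoundError(
        f"{bert_path} not found; run bert_gen.py before training."
    )
```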
Since it seems to work without problems, I will merge this into dev and bump the version to 2.4 after adding other features.
Checks and fixes for #92, etc.