From 65958eaa41d8d072dccb931e9523fdf22ec842bc Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Eren=20G=C3=B6lge?= Date: Sun, 27 Jun 2021 20:55:20 +0200 Subject: [PATCH] Add preliminary sphinx documentation --- .gitignore | 2 +- Makefile | 26 ++- README.md | 203 ++---------------- docs/Makefile | 20 ++ docs/README.md | 0 docs/requirements.txt | 5 + docs/source/_static/logo.png | Bin 0 -> 38890 bytes docs/source/audio_processor.md | 25 +++ docs/source/conf.py | 102 +++++++++ docs/source/configuration.md | 59 +++++ docs/source/contributing.md | 3 + docs/source/converting_torch_to_tf.md | 21 ++ docs/source/dataset.md | 25 +++ docs/source/faq.md | 114 ++++++++++ docs/source/formatting_your_dataset.md | 82 +++++++ docs/source/implementing_a_new_model.md | 61 ++++++ docs/source/index.md | 40 ++++ docs/source/inference.md | 103 +++++++++ docs/source/installation.md | 39 ++++ docs/source/make.bat | 35 +++ docs/source/model_api.md | 24 +++ docs/source/readthedocs.yml | 17 ++ docs/source/trainer_api.md | 17 ++ docs/source/training_a_model.md | 165 ++++++++++++++ docs/source/tts_datasets.md | 16 ++ docs/source/tutorial_for_nervous_beginners.md | 175 +++++++++++++++ docs/source/what_makes_a_good_dataset.md | 19 ++ 27 files changed, 1200 insertions(+), 198 deletions(-) create mode 100644 docs/Makefile create mode 100644 docs/README.md create mode 100644 docs/requirements.txt create mode 100644 docs/source/_static/logo.png create mode 100644 docs/source/audio_processor.md create mode 100644 docs/source/conf.py create mode 100644 docs/source/configuration.md create mode 100644 docs/source/contributing.md create mode 100644 docs/source/converting_torch_to_tf.md create mode 100644 docs/source/dataset.md create mode 100644 docs/source/faq.md create mode 100644 docs/source/formatting_your_dataset.md create mode 100644 docs/source/implementing_a_new_model.md create mode 100644 docs/source/index.md create mode 100644 docs/source/inference.md create mode 100644 docs/source/installation.md create mode 100644 docs/source/make.bat create mode 100644 docs/source/model_api.md create mode 100644 docs/source/readthedocs.yml create mode 100644 docs/source/trainer_api.md create mode 100644 docs/source/training_a_model.md create mode 100644 docs/source/tts_datasets.md create mode 100644 docs/source/tutorial_for_nervous_beginners.md create mode 100644 docs/source/what_makes_a_good_dataset.md diff --git a/.gitignore b/.gitignore index c4647723b0..1b174834cd 100644 --- a/.gitignore +++ b/.gitignore @@ -140,7 +140,7 @@ events.out* old_configs/* model_importers/* model_profiling/* -docs/* +docs/source/TODO/* .noseids .dccache log.txt diff --git a/Makefile b/Makefile index 70b7e34aea..c7815f1917 100644 --- a/Makefile +++ b/Makefile @@ -6,16 +6,6 @@ help: target_dirs := tests TTS notebooks -system-deps: ## install linux system deps - sudo apt-get install -y libsndfile1-dev - -dev-deps: ## install development deps - pip install -r requirements.dev.txt - pip install -r requirements.tf.txt - -deps: ## install 🐸 requirements. - pip install -r requirements.txt - test_all: ## run tests and don't stop on an error. nosetests --with-cov -cov --cover-erase --cover-package TTS tests --nologcapture --with-id ./run_bash_tests.sh @@ -34,5 +24,21 @@ style: ## update code style. lint: ## run pylint linter. 
pylint ${target_dirs} +system-deps: ## install linux system deps + sudo apt-get install -y libsndfile1-dev + +dev-deps: ## install development deps + pip install -r requirements.dev.txt + pip install -r requirements.tf.txt + +doc-deps: ## install docs dependencies + pip install -r docs/requirements.txt + +hub-deps: ## install deps for torch hub use + pip install -r requirements.hub.txt + +deps: ## install 🐸 requirements. + pip install -r requirements.txt + install: ## install 🐸 TTS for development. pip install -e .[all] diff --git a/README.md b/README.md index 92c2ee5216..842a16d015 100644 --- a/README.md +++ b/README.md @@ -5,6 +5,7 @@ [![CircleCI](https://github.com/coqui-ai/TTS/actions/workflows/main.yml/badge.svg)]() [![License]()](https://opensource.org/licenses/MPL-2.0) +[![Docs]()](https://tts.readthedocs.io/en/latest/) [![PyPI version](https://badge.fury.io/py/TTS.svg)](https://badge.fury.io/py/TTS) [![Covenant](https://camo.githubusercontent.com/7d620efaa3eac1c5b060ece5d6aacfcc8b81a74a04d05cd0398689c01c4463bb/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436f6e7472696275746f72253230436f76656e616e742d76322e3025323061646f707465642d6666363962342e737667)](https://github.com/coqui-ai/TTS/blob/master/CODE_OF_CONDUCT.md) [![Downloads](https://pepy.tech/badge/tts)](https://pepy.tech/project/tts) @@ -16,12 +17,10 @@ 📢 [English Voice Samples](https://erogol.github.io/ddc-samples/) and [SoundCloud playlist](https://soundcloud.com/user-565970875/pocket-article-wavernn-and-tacotron2) -👩🏽‍🍳 [TTS training recipes](https://github.com/erogol/TTS_recipes) - 📄 [Text-to-Speech paper collection](https://github.com/erogol/TTS-papers) ## 💬 Where to ask questions -Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly, so that more people can benefit from it. +Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly so that more people can benefit from it. | Type | Platforms | | ------------------------------- | --------------------------------------- | @@ -40,14 +39,11 @@ Please use our dedicated channels for questions and discussion. Help is much mor ## 🔗 Links and Resources | Type | Links | | ------------------------------- | --------------------------------------- | +| 💼 **Documentation** | [ReadTheDocs](https://tts.readthedocs.io/en/latest/) | 💾 **Installation** | [TTS/README.md](https://github.com/coqui-ai/TTS/tree/dev#install-tts)| | 👩‍💻 **Contributing** | [CONTRIBUTING.md](https://github.com/coqui-ai/TTS/blob/main/CONTRIBUTING.md)| | 📌 **Road Map** | [Main Development Plans](https://github.com/coqui-ai/TTS/issues/378) -| 👩🏾‍🏫 **Tutorials and Examples** | [TTS/Wiki](https://github.com/coqui-ai/TTS/wiki/%F0%9F%90%B8-TTS-Notebooks,-Examples-and-Tutorials) | | 🚀 **Released Models** | [TTS Releases](https://github.com/coqui-ai/TTS/releases) and [Experimental Models](https://github.com/coqui-ai/TTS/wiki/Experimental-Released-Models)| -| 🖥️ **Demo Server** | [TTS/server](https://github.com/coqui-ai/TTS/tree/master/TTS/server)| -| 🤖 **Synthesize speech** | [TTS/README.md](https://github.com/coqui-ai/TTS#example-synthesizing-speech-on-terminal-using-the-released-models)| -| 🛠️ **Implementing a New Model** | [TTS/Wiki](https://github.com/coqui-ai/TTS/wiki/Implementing-a-New-Model-in-%F0%9F%90%B8TTS)| ## 🥇 TTS Performance

@@ -56,20 +52,19 @@ Underlined "TTS*" and "Judy*" are 🐸TTS models ## Features -- High performance Deep Learning models for Text2Speech tasks. +- High-performance Deep Learning models for Text2Speech tasks. - Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech). - Speaker Encoder to compute speaker embeddings efficiently. - Vocoder models (MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN) - Fast and efficient model training. -- Detailed training logs on console and Tensorboard. -- Support for multi-speaker TTS. -- Efficient Multi-GPUs training. +- Detailed training logs on the terminal and Tensorboard. +- Support for Multi-speaker TTS. +- Efficient, flexible, lightweight but feature complete `Trainer API`. - Ability to convert PyTorch models to Tensorflow 2.0 and TFLite for inference. -- Released models in PyTorch, Tensorflow and TFLite. +- Released and read-to-use models. - Tools to curate Text2Speech datasets under```dataset_analysis```. -- Demo server for model testing. -- Notebooks for extensive model benchmarking. -- Modular (but not too much) code base enabling easy testing for new ideas. +- Utilities to use and test your models. +- Modular (but not too much) code base enabling easy implementation of new ideas. ## Implemented Models ### Text-to-Spectrogram @@ -98,8 +93,9 @@ Underlined "TTS*" and "Judy*" are 🐸TTS models - WaveRNN: [origin](https://github.com/fatchord/WaveRNN/) - WaveGrad: [paper](https://arxiv.org/abs/2009.00713) - HiFiGAN: [paper](https://arxiv.org/abs/2010.05646) +- UnivNet: [paper](https://arxiv.org/abs/2106.07889) -You can also help us implement more models. Some 🐸TTS related work can be found [here](https://github.com/erogol/TTS-papers). +You can also help us implement more models. ## Install TTS 🐸TTS is tested on Ubuntu 18.04 with **python >= 3.6, < 3.9**. @@ -110,7 +106,7 @@ If you are only interested in [synthesizing speech](https://github.com/coqui-ai/ pip install TTS ``` -By default this only installs the requirements for PyTorch. To install the tensorflow dependencies as well, use the `tf` extra. +By default, this only installs the requirements for PyTorch. To install the tensorflow dependencies as well, use the `tf` extra. ```bash pip install TTS[tf] @@ -123,12 +119,6 @@ git clone https://github.com/coqui-ai/TTS pip install -e .[all,dev,notebooks,tf] # Select the relevant extras ``` -We use ```espeak-ng``` to convert graphemes to phonemes. You might need to install separately. - -```bash -sudo apt-get install espeak-ng -``` - If you are on Ubuntu (Debian), you can also run following commands for installation. ```bash @@ -137,6 +127,7 @@ $ make install ``` If you are on Windows, 👑@GuyPaddock wrote installation instructions [here](https://stackoverflow.com/questions/66726331/how-can-i-run-mozilla-tts-coqui-tts-training-with-cuda-on-a-windows-system). + ## Directory Structure ``` |- notebooks/ (Jupyter Notebooks for model evaluation, parameter selection and data analysis.) @@ -147,6 +138,7 @@ If you are on Windows, 👑@GuyPaddock wrote installation instructions [here](ht |- distribute.py (train your TTS model using Multiple GPUs.) |- compute_statistics.py (compute dataset statistics for normalization.) |- convert*.py (convert target torch model to TF.) + |- ... |- tts/ (text to speech models) |- layers/ (model layer definitions) |- models/ (model definitions) @@ -156,167 +148,4 @@ If you are on Windows, 👑@GuyPaddock wrote installation instructions [here](ht |- (same) |- vocoder/ (Vocoder models.) 
|- (same) -``` - -## Sample Model Output -Below you see Tacotron model state after 16K iterations with batch-size 32 with LJSpeech dataset. - -> "Recent research at Harvard has shown meditating for as little as 8 weeks can actually increase the grey matter in the parts of the brain responsible for emotional regulation and learning." - -Audio examples: [soundcloud](https://soundcloud.com/user-565970875/pocket-article-wavernn-and-tacotron2) - -example_output - -## Datasets and Data-Loading -🐸TTS provides a generic dataloader easy to use for your custom dataset. -You just need to write a simple function to format the dataset. Check ```datasets/preprocess.py``` to see some examples. -After that, you need to set ```dataset``` fields in ```config.json```. - -Some of the public datasets that we successfully applied 🐸TTS: - -- [LJ Speech](https://keithito.com/LJ-Speech-Dataset/) -- [Nancy](http://www.cstr.ed.ac.uk/projects/blizzard/2011/lessac_blizzard2011/) -- [TWEB](https://www.kaggle.com/bryanpark/the-world-english-bible-speech-dataset) -- [M-AI-Labs](http://www.caito.de/2019/01/the-m-ailabs-speech-dataset/) -- [LibriTTS](https://openslr.org/60/) -- [Spanish](https://drive.google.com/file/d/1Sm_zyBo67XHkiFhcRSQ4YaHPYM0slO_e/view?usp=sharing) - thx! @carlfm01 - -## Example: Synthesizing Speech on Terminal Using the Released Models. - - -After the installation, 🐸TTS provides a CLI interface for synthesizing speech using pre-trained models. You can either use your own model or the release models under 🐸TTS. - -Listing released 🐸TTS models. - -```bash -tts --list_models -``` - -Run a TTS model, from the release models list, with its default vocoder. (Simply copy and paste the full model names from the list as arguments for the command below.) - -```bash -tts --text "Text for TTS" \ - --model_name "///" \ - --out_path folder/to/save/output.wav -``` - -Run a tts and a vocoder model from the released model list. Note that not every vocoder is compatible with every TTS model. - -```bash -tts --text "Text for TTS" \ - --model_name "///" \ - --vocoder_name "///" \ - --out_path folder/to/save/output.wav -``` - -Run your own TTS model (Using Griffin-Lim Vocoder) - -```bash -tts --text "Text for TTS" \ - --model_path path/to/model.pth.tar \ - --config_path path/to/config.json \ - --out_path folder/to/save/output.wav -``` - -Run your own TTS and Vocoder models - -```bash -tts --text "Text for TTS" \ - --config_path path/to/config.json \ - --model_path path/to/model.pth.tar \ - --out_path folder/to/save/output.wav \ - --vocoder_path path/to/vocoder.pth.tar \ - --vocoder_config_path path/to/vocoder_config.json -``` - -Run a multi-speaker TTS model from the released models list. - -```bash -tts --model_name "///" --list_speaker_idxs # list the possible speaker IDs. -tts --text "Text for TTS." --out_path output/path/speech.wav --model_name "//" --speaker_idx "" -``` - -**Note:** You can use ```./TTS/bin/synthesize.py``` if you prefer running ```tts``` from the TTS project folder. - -## Example: Using the Demo Server for Synthesizing Speech - - - - -You can boot up a demo 🐸TTS server to run inference with your models. Note that the server is not optimized for performance -but gives you an easy way to interact with the models. - -The demo server provides pretty much the same interface as the CLI command. - -```bash -tts-server -h # see the help -tts-server --list_models # list the available models. -``` - -Run a TTS model, from the release models list, with its default vocoder. 
-If the model you choose is a multi-speaker TTS model, you can select different speakers on the Web interface and synthesize -speech. - -```bash -tts-server --model_name "///" -``` - -Run a TTS and a vocoder model from the released model list. Note that not every vocoder is compatible with every TTS model. - -```bash -tts-server --model_name "///" \ - --vocoder_name "///" -``` - - -## Example: Training and Fine-tuning LJ-Speech Dataset -Here you can find a [CoLab](https://gist.github.com/erogol/97516ad65b44dbddb8cd694953187c5b) notebook for a hands-on example, training LJSpeech. Or you can manually follow the guideline below. - -To start with, split ```metadata.csv``` into train and validation subsets respectively ```metadata_train.csv``` and ```metadata_val.csv```. Note that for text-to-speech, validation performance might be misleading since the loss value does not directly measure the voice quality to the human ear and it also does not measure the attention module performance. Therefore, running the model with new sentences and listening to the results is the best way to go. - -``` -shuf metadata.csv > metadata_shuf.csv -head -n 12000 metadata_shuf.csv > metadata_train.csv -tail -n 1100 metadata_shuf.csv > metadata_val.csv -``` - -To train a new model, you need to define your own ```config.json``` to define model details, trainin configuration and more (check the examples). Then call the corressponding train script. - -For instance, in order to train a tacotron or tacotron2 model on LJSpeech dataset, follow these steps. - -```bash -python TTS/bin/train_tacotron.py --config_path TTS/tts/configs/config.json -``` - -To fine-tune a model, use ```--restore_path```. - -```bash -python TTS/bin/train_tacotron.py --config_path TTS/tts/configs/config.json --restore_path /path/to/your/model.pth.tar -``` - -To continue an old training run, use ```--continue_path```. - -```bash -python TTS/bin/train_tacotron.py --continue_path /path/to/your/run_folder/ -``` - -For multi-GPU training, call ```distribute.py```. It runs any provided train script in multi-GPU setting. - -```bash -CUDA_VISIBLE_DEVICES="0,1,4" python TTS/bin/distribute.py --script train_tacotron.py --config_path TTS/tts/configs/config.json -``` - -Each run creates a new output folder accomodating used ```config.json```, model checkpoints and tensorboard logs. - -In case of any error or intercepted execution, if there is no checkpoint yet under the output folder, the whole folder is going to be removed. - -You can also enjoy Tensorboard, if you point Tensorboard argument```--logdir``` to the experiment folder. - -## [Contribution guidelines](https://github.com/coqui-ai/TTS/blob/main/CONTRIBUTING.md) -### Acknowledgement -- https://github.com/keithito/tacotron (Dataset pre-processing) -- https://github.com/r9y9/tacotron_pytorch (Initial Tacotron architecture) -- https://github.com/kan-bayashi/ParallelWaveGAN (GAN based vocoder library) -- https://github.com/jaywalnut310/glow-tts (Original Glow-TTS implementation) -- https://github.com/fatchord/WaveRNN/ (Original WaveRNN implementation) -- https://arxiv.org/abs/2010.05646 (Original HiFiGAN implementation) +``` \ No newline at end of file diff --git a/docs/Makefile b/docs/Makefile new file mode 100644 index 0000000000..92dd33a1a4 --- /dev/null +++ b/docs/Makefile @@ -0,0 +1,20 @@ +# Minimal makefile for Sphinx documentation +# + +# You can set these variables from the command line, and also +# from the environment for the first two. 
+SPHINXOPTS ?= +SPHINXBUILD ?= sphinx-build +SOURCEDIR = source +BUILDDIR = _build + +# Put it first so that "make" without argument is like "make help". +help: + @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) + +.PHONY: help Makefile + +# Catch-all target: route all unknown targets to Sphinx using the new +# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). +%: Makefile + @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) diff --git a/docs/README.md b/docs/README.md new file mode 100644 index 0000000000..e69de29bb2 diff --git a/docs/requirements.txt b/docs/requirements.txt new file mode 100644 index 0000000000..73abe83feb --- /dev/null +++ b/docs/requirements.txt @@ -0,0 +1,5 @@ +furo +myst-parser == 0.15.1 +sphinx == 4.0.2 +sphinx_inline_tabs +sphinx_copybutton \ No newline at end of file diff --git a/docs/source/_static/logo.png b/docs/source/_static/logo.png new file mode 100644 index 0000000000000000000000000000000000000000..6a1185c0966f9731a0e0f1878cc95a757d97107a GIT binary patch literal 38890 zcmeFYWmH_mnAtTs1 z!VdrdEvBEQu7|prH-(Fvvz4ubC54BtizS7nkF6B|;Imd=V4Fq5_pai_9j_a9iXwaP zTS0rcaZqqJSt@M>S4GWo1&ee9)yAYPtP29B_QliIOXtPQmCEVxN5i^BnKA?_oWwp?j6w*{pE-zTrOjeQ5BTp7q_T zAe|oLr<-7Fi+isfcl8AN$o)qmjVBh0rT?14aFNM!p9sI*c~ljlGy@ zT^zlAdSHAyycdO|U7Wro?R~hhS|Z;He*CE#)cttj_>Ejdpq{+wscLH)cW}D>=GSHS z$4>w0qYIgL4z|BOR9_nSihh&e6|K5|KTf{4H86em?C}hgyTrl+A%*F1@yXv@M7R-xB^%siRdSGJ(}If9t=wpD{#KmLj?h&Z^8wrl$Qa!=bP z;0nEq3!V)>16kSRnK(Gezr;WDn)2b!qCrmPdT>T6@FP{p9t?R zcZX{H!Uzp)Nk)gB5_Vojg3ZnML1O)k$VufAPprgnbYp}uq3r$WG|Is5iaze|g&l5l2> zO<|w@SKL#;8yGTEtOi;jUPE6Id6(lOiYPU4qc@5`bNWUt39I)CQ zHFwA8|9xFs(c4+~NSD96NQuL|;OFPz&7VFxUG(d`5=FFarx$gxZ$Hf~y-qB>vE|!h z6K&~muf4T6Z`wV#uo_sK?xJ7Xv|lqj6wzCBBF0AgANKkWz z&9D-vF@4nGV@x*)i3$+R^h+qQ;!TUA?;q6$tDLwKSPxzCUI$$dNT~gE{{hc4Ha5$wzeU~1T5>z*FF8-e3lSoJvY*U|EU5r@!I#|jSMBMJ8g=o7E0}*$CTX;OV0EN z>tp@S!j%44Vu$0IIOhk_I&hqg%9oa(V|b&S9oOIAOUTzY>zu1QwSURIiD*54M+4o~ zq^u%?rqNnXP(Xem@X?ctK{@U%gOJ1Koj2}otD&AWhg~pk>_@B4&K$MJrp355i zxQLh=RCCRpVR^WXCG)c~r?lHbOTp;HJ|127FrluNO6i`P_VkUmi-+Ye$nWhQ-ih^s zbes6#IzlsN3}TSO@iO#=FnsZ z?kfc^F_icZjetI8ERGIOq0FDfkt11f*h~qwTmxiQigVQAblSh*S{)wlsHMxiljULk zP7D8uc$TPZ+PN>l>I9vl$AQn=;IUR42T_r?GZ&z+SGtTphWvxh9IKdCuOKGYWA+{l&9UdVe9gLE)h zen_a{lC-NVCi=349AGzVNavrT;hJy4nn$f!?eE#qaG5HsvxLx|HCJb(a2q6>@6 zWIpf^L)ae$Q-Q}7ZUlGlf{CE|ZFK|r@~3=WoreT7@gea{;ceh`o!}5!S8az}y8$z! 
zFwEELxl*rmsmZH7khvrSwqhtUgGselmdD+y`hB?v(!wh@$87=BP@pq_oI1UDsD6w>|_S6 zs68UWcmeIS<~dCc?of@t^UCa(H@-QYBbDX?NllGS*Gabyihb7mHN5_{N3hx`nFQ}6 zuyyyP%Rm1z*5~km{-OFSonbLZZg5Uc19ziGC8rVW79I#sXu**Mt=fm^>7KbhkFrLi zy)aNcK)t7JetG@TfM5W17+!E6{=HebGYqeq>)jWIpCAF zoutFN5;Vbc`)^$z+`;;Hib9s1^ zNEGzAuA^*1`156z6|6NRWp-MyH;Irw@&@?K)*0c_V@79uFbyyg9-EFMPlcGa^qw;YHy;6lz7F};S1 z)6sNC=!H~8v!b*nNRjP>B+*)ABd#XEW!2GCW75GELHahStHNr>@#|izSdHI4(aA&m zQ^raw_J}KNgxmOD86g2rt zGT2JAh$3d@gcvdV70-Pso`<78G2Ft<3H_P-O%bY0YussG!`J4-u+fG0oO*TujSB?amym zb)KpWD+KSOJ=L3FB4bjM0KXI-lR9zy(732l=$!!=|*!dz2poLzvBRF_@9 zg{`7lfX2lF+kAnUIH0})SA6G#ga~F(KT&;%{=v9R0SgVsX2cQg!`i6)?p-`v=WgYI zOq>>opOu7W7J)6Zlc$&lle{_3m^71o8#P>nry;43gz};m2esH!oT;;JGO~l=N}huA zF+>N`QhKafD}m<2`nWru4WB^s@(m5p2WL|tqZ?6LlyWpd6}_CdQJ}9~w@;e~&6u3% z;@!&(teAL14@r9)mmRg@AayE1LV}Gw9cui^d29D5LB}RjB+Ro}eq{mvb@`qdrX?e~ zY+>=~SW=x8lHBm7M(i}`n_V5_!0jjz*?SCKgUlEhAqRXtxX>#ze&$aTfXAEtYk5um}DVG?}3-5 zfA~3!qU<{zyu%qm8@yD>-A;eZW08apSUtYxa*BTNUK39f&2OS9MRjF}NB*UPELtt5 zJo$#0*Q^EMDr%ny9({vB$<;g@wcP`0oIq_v*r}~!hEa?~P#}h-S8W;Ttu$0(y$04k zmBK>>iGHt{Jp)*-vb!{+R|oqpzD?4x`0OeQCp~-@uCBmX2*x}SwzFocvr~UWl&Wdw z?xpl?pSs zL^W9`zMK>N_39yB$5+I!HWWe4j6vDVh-aMaQcPlQ5ya;Sd9U6IsNPdlm}?Nkt);Ai z39v#JkF^C=Va`%6OWvBzAYd1Nm7|bmxsH2KmE5*_i6M_e!XD}+$nc^k#k~GnmOLe1 z{QQnV-lR+HqtaD~f@8+#)Zq7tZ^i>lYwTV5g^T(#CK9tH4o#1tW)~Jj0ydC5(P0QA z@=K@$F}GH<(v2@-K)e#5T+Xir^gYgInT)(WOG@uGFxq-*|ZT8G9!e1C!7yTprCl2q2NZ@uGhAH=|%XJ~D{GZ=8FOg4&l?RTNIBP3eYAWuXy=(meAAmT;w zHTC&XGUbAMs`{2=pr%BCSHC5i^OZW=28jDaW7i5?Y>w z&@WCjyi#ySn7>_;waY2;f+=4|29i%q)TyO@g)!tO52+!{iubS5&iNGe>)OKe^{hew zAvS=9sRrXQM)T_ASt6wP3ir#{M`=ufV$M#4XtRjWgMmZU`7FPY!^d{=b5$2YJQ@w! zfy!$TQ|q_V&obG|YP2fyjC@+;5B0fd8}X!yN?%@gP{olOBS44r2k0k3b?OgjW+ON_ zK}}G0yMg>q1K>)S3BC~S3|o84!SA%f8}dVV%gJsqsAQBUET-XJV-PqGk`HqiTgG>UMihkQ$}^Z%mkNWAl?N{ zbLvEnz4aU3Pt|K_raZ&JCPxx@kk7gZ$qag&yqNZYiR)a*wpicp!9*k~@ouKSiWtMp zx+QWkb$}tj%&q+JBJz4e4_Y`v$k)Ruvr;D6`?^2e@_nW3p_!Gu&i5(%B{GpAX1O6O z7)R*sQx0(~arLqLHc%L^6bUSe&NguX?3$-yyc6NxmZPx`(inz=XATQ0A??959-9G_ zhc%MJty8FfAJ?i!>w2W6n+Xsy0Yv}pmv&HXZH9uakF)7s-yaPuFg6M+Y?oAF; zjLtAgPt-D~ClI|1>(sy@D87?V1ElKChB}AxgW`2vDXqrw-N&{NbHvrYuFHS&aq&-Y z%I>wyW|2SDgc6j3an-WJ2#B17e;{u7ymuhnsnow>1B#>|>TH1}E#v88Qn%ls91E?X zBgCIRT7EhjKobw-szK$>1t3WgFueVl->=n2mJPk(%c7LSPfgXPbRrk`S(n^z${O{< zha1&UbWjS46!Gc%KmfL^s|p5Xc&WC}zDP|mK40X;93{6WqBACSWEc%p6QTxTc*V5h zRATI-pmIGod*)0dC6+z7@fVbrsNj7J! 
zA+g8TxRxo+mY#cqhIZG_6>7UZL4XNbUNbPi{34g9Tosp0yqpiNl#J#LoL8#BAm5Hs z=!MS6Er<%SFO{Nn4(+hEI(X4h6P%4g-=|;~vJ=-gA#XnaA5$ zoV+tOOSOg6x$0MXmfY27FmVlm242~p{og3R%CoDsXCd9#jCF zrPKXJ!1Y%ljO$vwxCUk`X2g;7UM!NS43G#Ii$5jv8Qb_>d$Hm5lMV1i|dabgV@WE4gjEwXdGK zZuwhtPxz#how(|A?uRHGMX-tP`{Obtx{wF+q6j@3`3MG`uwhzdEZub-F2}IJRPz=3 z3#1QxU^4X!#XOu{*N3o>5NL0z@W!y__c}+?K1POLvKscW*^YshrwvpmPNfzx^c5le z-z+qO&dO!?!pn}n{xE^wdOkFis@_9cES~#<0#vP~q_k4a#SD5PnRxynd-7cAt3Y>{ zSY0$6acG|zpXgM9@IEB4#%>s=QdV5qKg5dpL||OF+V!!Tn`Bj19UCD*3C?&fwNqX( z&@TawM*z$L#>K+gBXvIW5Lb#{pz=(B662}9{1naONhJ;rV>Yb8e$lieMkIJ-)rAT> z^IY8E6DVOMVbN4`XRSWk)T#tE_PDJqaB!Z;0Xg!o5tu^^glMp1^4(*6pfz`7jVe=Z zYALzGue77pGg0?jBzAk3>`iOb5YW;q@wSPM=OtW;%y;i#>%Gt#^bPcBX6h`eVi;5% zw=>a|Bk*`=*nE=a_Idp2VQl62%eNxa%u>Z`)KvMPy=lkj7{X}`$g+dbEgwjEkBeSU z3igB|F}gmd{Kx~52n^1}o1YMA&YTtpZfzp*sL!hQ3uIhIDauQ!bWw^na(Ra<(Wen!tG2pt)Ki63l|cdBF=?>3+~LYnLInP+1bsgBfm95;v}~iEuQ1XzM5Q ze%r_iwk=pRjJephVTg%8u`aZY( z>OEvS3=C~Y*sTnLLGV$HUeY3kYRY>glJKN(9`R_g=+mqv_&`VQUc!zbG<&;l4<{-?L2+h<6>- zi6y)@iPLn}fX7(k@h%M87>OP7@GKTsla~xyiXOZGizqJikl#rEK{#!PiDAC&JX`7U(_TeCYQDRK=_* zPz|&8H2b$?Qz`jasW3XeJ}d$ezISzxk?RDi^wetOzN)n^1uiJ6^Q~{}Qp%;PuhDrT zB-ACBYmMF+qthhlR9Bs7j>8iws>v_@K!dd!6*0^uU!6-V&QY{R;NB(JjQC=M232Cq zssKAi*)~eHn~C7BwE2lQdvOq(GM3$jkE8HKt>E`bAV-iHi}l5`r>l zZ$$V9->EjkSbPqXM$jPoC?1Mm`8bN=N$N{q5GCah7(D0Tv|e^Z+IO;5h&4$)S&@G9 zxRIBsPk1V8orO!EEK;v$wqQRgp-g0~X|GMKyFUgK9}Tj@@0? z@h1GGZ~x9@!*cv=+$Y3QZnOk`U(7dOnF#YlbZR`M3$%;yT7Fb*Q6dCwaCG1Ns4VGH z$(rh*W#_HLT`21;n0+hUH1UH)@}Bo}prC4TyUIK%r?7SP-J*MY^e1Vw+mBsIo#eNl zA10~|?mr!hpzmR8Fwi?pFw`C^C6h+nv0!Jvks$urH%Z606#(t;QG1&Fv1hAVlE^>n zF#Ga3=raB~Ia1tu21~aA53#>Bj5i`eBUcYTM{;FHIhLk_M4>WEbjqrTkHzo)RLNfC zXX@L1Z9nEv(UAJOnY8aGrp9PeLQHojQIQz$@42TD&Qs`lqi>{oTXOCCT8Y=fs2L4M zXFQA}JepAOZLYYc5qQc@C_Z*o%RMy@qzjq1LLr)>iVd1J>x1f4u6MsmXpQ&SrbkNE1*zuqkSHI0$ zu2DgC#e~=GqnQuai5m$!D$q$BJG}wiDGQ}=R6H=3=C-s$x3}61-7EK_YGSp|!JHg7 zk!0nPmX$n(ijgV^&3nq|m`3$My(Nx@@7bY$!ww3k(8ezFaC;3Wloq8ODCMP;D~kh6 z&67p*qruu45-~Pe_%TOGGHyr8=SwY(PGKIhUAV=Qmb6ylz$|jhyYW5K|2t6;*+ben z2sDg=iIO$GM2uQW?>l38aK{)3<0)3Mi|N5l7`{Rq`;f>)VoB=R(}Q&uvdWYERw9Sy zvx;C|7VV49u>GW{_N|QzZ+q~lgdF zB60N^54#BZxj;!HsXp*K&VqLy$jy2I7Rz=A#e0-@_?|gA6Ze`rqeLt^+%l3zitdEf zhp7^uyF{HX53Pwb`;T){YJEeygO1?P1G;#!Agg~oAO~;BdPWfD;G{6kUq#YB$Uvnr z)yaz!EvDY2`$okXX1dh>5?3ssTf-Xn{*%NrSt=dvy(*fjJ9)s7FOrI>v;L$_ z@)$=p_}txQpN>a4(ROSDlv_BQ-?xwHA9KJ5mLHO}DjiFcGU{-c%Cud^lQs6&h6-f| z4$P}3`gLYd$-o)KFr}?r@~KDVy^aHsr6$a}LBo4#NqL08Ur8>0g+_%1dV4cY~n@BmjVNAFFm_|Rf|E!ojkWut1SqbI~s0ng1N`y3}2!j5W23~wS!Pk@pmJTqRZLf9q) z>KIa+rM{FI{(!v)WPErIVxzK_(+Ja7D|wMdb6txIee&i6SqVn9*o4qQ>jCpSEm0Ne zQT1sN8eWS03YLJCV~hO>2`kr@;mdn)DtgPvKo2w4)pst(g+3hUvY=X~dv<{eN;Ii5 zj?#*oCJzZDmJ;O~rt7TF=XR9ar%7vf)~Z>iqHYVJhMLf8np}}UGo1)vi?gu32B)Pd zTXTMEtsO_dLJi&5Wmyfrl2^2%;Ti@AMuJJoaqcXyx|I4ZhFFA`r_sX>B50oxaKyJEny986EUj3%7Vj9lg?7O|P+?t1wnPoULs3 zmPScwEvvHT<|g#+6UM@03jDsDl6PFvhcpsuqQhzH3=D(XA<#Q5pnjZk-RNGK?|pG4 zW`3C7*kB0{+5yxLZ_;iOr7E0+==5f~lxAUqLSVmQhO#7@Jy_XVIp@tgS;GLgmEDMW z=U}VhX{aNnqNCI#M5SpdmRW~fF`QO-WZZ<9Yjl9Cz*nSj8N5NeOcV;*mKTpeko_#~ z&;q8IsbG{|bXZoX6CC@a)dgdkaU654Mj$B2ve!AXf8@bW3MmWb2R^x`=T%uah6*N6 zVm*`|%g!7{qm}4@Yo%<(_2QoT6-6t*4BVGSmKzE(##uCEYcOUdzFU7ujX;@dF$_>K zYDbJ%fR};F1VC;GYcExGU-Jp(iZxWR|L%B%6Xyc@=f|O(CLXjmw8E#d9|(kLPuBcZ zB`g#!Q-uf#&`h<2>nRJr_Hp$pm;&$Irztx%SN(+R`>hk&-)obsx`ZmkT1*OmMH-H5 z&3qQ{fx9*Gkq z_hd(n-2KEg$v-bav@^)FuX%Cu6s;XkgF4ucInVBaP#vtymGmX=$fj&Nnj-TgZbls8 zsq)CMX|k5=f!OA!qoTd?=80O*BMl$OZ+X^x9Gw{&*=97yx6;^ zvxQgdUzenQ`nMwJ+T`R{STplPzL1^w*W@kmL+7^B8aN||vA7cXAol|ql}XTBmiDsXIy`jfRs zT<%QpNr0%^Cm(t5SZeOHsgm!GYiON<{2dygCW2foNn%k|YdI%_Y@)8?jE{)UDPNR~ 
z>!)0Rn_{#^=P{C=FlwFl!0Nk*X-TUSw-b4|o|2EMj+A*xK@lW*PJNZxBOk&+wSFL6 zc|IJVh5Y;>QS3LrA&y^{Nff}(jDjOpr?xng@IT~d^!V4_nq7$HeOO5PMm5y~FQJZ2 zUN6;MExIq*G`%&A$BRI@pWaK(az+dHd?T!@Yw@`@h>T_3&rn*Wt}AIoaaT|@dioZP z$6KdV#vs+^@~>)6myA#C!1|oAIL9D7;2) z1tK)#V@gdRuPv|A%Ms#w*(bV%GoG)$(p6@~G7HpCuk!-o$IJAaJH5^v5}Ly*8HI4J zY@Cei<6@+h4GeAzN7TP8X$6)Rsqzmv$)laVAJ7<3lngHVG0O3bErX8X;UZe^lcRs* zLiAx}Odk2JX5-Tw&RB?GxEN#3{paqH%MdG4fAY)+q-Q z1HmdD1%miFv?NV@%&n5O$#GLP!zz%p2w6So2i53<&%Jf&x$WA0tL*L%%i>u% zIz;@*?9_6}d2ypM#8j5)WvrjGCeG5o0&c|)E_%u68@6}nLWYlQ-TEqR%H>tk8#Xl_ zgp|;~e%~vX%quT|)j?|bo^n2?lVObKeR_sQTWsYM@NP;@_SrhmMuYvMy~5tppvDdP zVDM$s;`$3r7|dym5U1q^WQ&y5f4Voy*2Tl(_ueSsIC1n4F+`Zlt^s-gZLNs)px;V9 z!VCGwAzcZ|`SGrzjCx!pmxxbJ9MsySZ3C#k{z;lYn!PjB<+nZ&HZ zhO3VrdzyQrF@`V3nveC(jfiW67ERI)}wM>E`oj3_4{)8~g zJxA>;rf;aOnrb}b1lMfL1?~K`ZSbe->Af793);zmU}tknHXlb9NNE57LZUt{W)}9A9u(%5*0xT8h*-}7Jl{?0$?gp5o94B5QMZQ#KT^g zN>@phLekmIk^;yEWaD6!_ObQiq7p%-5OM=sfz+jB{y_mb5~i~8@NfaKvwM4cvw3r~ zIlEc2a|#Fuuyb&+b8)dk5UlRLP9A1HtWNIKzbXFWkg{~QaD&vdt+NxwZ%#9FXHO4d zDk{i%ihq#*u6Ai3$lyO5{679e@9qI+mxmkxA@hL%*ts}3_*pr)Sh)n)|85UCtEBW# zYbW=As0h)M-N(#@os*4&-O=&iEZjY$z5eCzKecezgltk|SGRO`_H?tbl=iZ8@}T~^ zQx^wM_rLq}bhrFH^e1iyuoXKbs6U?nZX+wNr20>r-!fX;I=cL^_>KNM5^V8LoQtQM z!ygRTg5A==(h(AfJA|3@-|!x`R{u)Szs=|Ong1;ih`WFC{~P*0^!gLaA6+4zfLM6` zb}BC=O!Yfn5ZKwm77Y6HmYb8GkIM?o%gO;3;9><@Sn;q5aB*_5@^e{mahmY~!5sYD zf1{Fja`!NEvatM31tDj%h465gb8!lobMUfSK(fmU4=ZOkM@W0wI)RA_nb{8k>KPJBu1AFn$vO!Yfjirt9EIElCcxf25+I__GXv%q;$r-`&j1 z68t9x5FP(IWnp9HWNisqZ~xJ3|1oa+pA@p0r2v?Z*W8>H%*O|01#(#Mu$qG{%via( zxy&trKr?QB4uOAXcXzh(@HTU^6t{-d03^?l=K7Oo3Wk5wALGB{y=^Ri7Z(Q?Co7~u zIDnd*ydVw^5El>CU)iP*V*kCu{>N;Eey>4FN}#{%A@qBV0?Gd_L=8_D7YAEQw}02^ zKkDZH1@|}mf7Iyz$^7rIzpN#lU3?*3Y~!Kg?et%||4)E_F}$|5uyk^F{;xv+JLE4} z{&pdP#QfJ7r|mWv{#)SxsjmOuC7>74kl$8R!{QfWQu1bfD zAi2otxdQ;`gTG%;pPb)bJZP#NrdVI!kWn7Da#Ln!BVnP9c)f8l%Ke<^UOhA5Fy z)|Le27D_@vJPOMd;z5BX_pLWAM2dne=B9W3IUC0X0ACD1(+xp-03^RG?gY2N)Qh3a z><9AzZpBDlklsSogfdlVW|J^d=BZ&rQzC=YU|s{_o*J6OC^_Iq5t?QCrr*Lbg}4=? 
zb^r=)dZ(BwY7XR3sEEa11(fiI9QOfDnK)KonC`6UXO&@Gi1-hJ-vE+eK4qv^wqlBd zsS$VLX^C@CK0{eOns6WYWYQYOV(>$g-1VYEA!MraU=#huT*lI(>!Pwi<3E_B({L@n zMrxdXSwlHfQ}C42g-uQ5(BDr(U_r=wG*SE-bj60sM%PHZau9s3si39s`#dJA+Ze1i z4D+vsN9@cPGtsN4mo_CvPk9Nd7>9^dnjEb@{cS(fOeyeCBk2mXgSLWWwhJN^G3DHT z8r(W!9fA|!nudwGP+{A1o4v1Ap;{j)mZBLeN|Ix^Z?X@5n=V#v;Rd-8%SVsVQ-6RG z!r71e8F!m^Tl+)NAYxg%V_$q9ib8cdHacW3s1c?k(B6Rd`ClY5n10QKKIlH$jGp3u z%v@k`FxJu54+s(|A_iLeibAV0)zAM}t;k!UT|sS=eL&GhmI!_e^$aUik+H_KgDJ%e zt$%8`J=hl=Qh|+S7UGNY0CVMEp_v&W38qe%VmU19ll{t8%Z?Z>&wM4g-JW(13GXXG z0!jzQ!>bJ(HaIi>73e4Uhc@+b<{ivc#98WG*S=4oKSN`&u_z_@VV;m5L@O42k)lUs zUUjDFAqs5=)5=LjAXcC~&|WE4EPTO6hv@G{(}Mp1>(U@4`N~WP=}CgIO;3Wd8H-AX zs~u1zbvS}ZU1S|v#9U#VrV4RCn)>-?Xm82KgaisH9CqlJvJ5}t-|#5DbYD3d>ICd# zA-3Ekh(rCcIv5dYZ+SxTS4u)?PnwKv^1pHLDqQiQD)Yh#dYCfYF9=WB>UPNy5)f$% zz;w_UKTVWlV<-UIFhS5yQjDH50}zWJ?733gEzj9?L5zr@)v!URPr8hY5cO6OcVZ8N zw%z-nW1}b{hD5eGD;64PaH)t(oX6^5d&KVbxxy$Sri((!nJP9@e#c%*n~w5S|8kHP zgE+vm1NRK`fT6c02Bz+!&qD}dhi;)E3gajyfqyn)yj4Vs9JK-}T0vdWept@Hf;GE( zAiZ){s1C$NPMx$#OMvb<3m&_k2;%@Q07L;n#Tj!DEtOj1A(=*WX_1n|jEc+-y_|?E zp~|<(Odx)UdQ!iq4ffO{aj60XRQh0H7#f3^<32iYOT22nz)3G3mVx?Mg*3hp`S z8^o&@sAFt2@ff+2&|ocvKof{`gt2M4N7IJ0hqwV{T?ml zM|+A_KOum`s6?Yk_lZ4sS`i3qCUryI>rq1s0n6b`>TjR+fgplO!R3bpLz*QEA`F!@ zu;^Pnok=MrqMCH13I7XY3Fazl6}UOnd8QjTgaUDP!lKVR1Jclt$*aa&30yI7dCOqR zpb&=!t1FymKq#UVfcHP7xwRk^WpOw`J3@ab-rMw`2jIkH+8_?REysC~tymCMWRT(w zr_2rL1(T)OAcg^iF0z#)f6rRAR1Yx#r9%nAAc0{Fcot{8eGeITuP^PKEShEc4aN-V zenvflgQTXixAZYWw2Y8`788~d@#q&}Qbs4%Y6NRu10Wb-2r_AL#89v8%S`pHbVzU# zsjs_4^B-#|C6;cwX!1!EA_n4e-}IB7as|ZSya6!%SLx&1-&m;a z_Mn}D-(}AhZ`ET0yv&9K!4V>KNiY?ZniyjRcjdSZFTf5f{tDI@p{G7$4U(kpQoW;T z8}0E&3d8}5>nPE|RhnkPz|z%if=O%v3t|$(8kpNp%01r%h*JIwC5$8D>;}bp zL5uj6XY;K7)&;?&knMdK1u6`}(}1rs7-ysOk(Wbf?IQ)^;HdZq1hfRAomXXYz-;F? zu{ID!r$gxAqswfXmRPD;2f#CyuM9P!1h@&~;n3z45c9VA8;0{UI@if-%!pAi-&LXw zIX>xP^EZ{1Cju~8A_ZdVByrzzsqe1!!uW_u*x^SxOMp10415EON#fERuNot9u-fD$ zb=wrglb#TU5O(6QV3`w8KiLvm3K&L_0Nhl$XJ``|bD%N+e(UR{KAnYV)ci}+_1bE4oYKA2=hnuQt& z$R#U*U;FWnH`vGLx>w^WwgGwwIe2Fq7ySoXlBx9fvZK znwmxzN~{;4#ds^6sSLRq8f}}GVJM_xWR3E1itNBtgBHD=xy!;qkH#K> z8JUio$C~A_T5rGHI;n==*v_H2t0zyR?VwA~D@^7kiU#buF{B3`qE#yp z#c!WrO$55xB8b#x{D3SG57NK0xL0l@gPvtMJG1n8Gp+3D@k~SrJyHf^ zWlL7gO9aVv*7wcq=Bm7OAX@L2kFn~tc%AFJwJ)><~+=>i&7ROR+n{A;OmUv5M8=WCiI}6lJ17^9o9UKBT)wt-ea9 zV+f(I)17m0@w+~wHe{Sx1iRBJpassmJ+NHlb%YRGvC%D&a#LA+{pPpg0mhgvSXIfk z&{bxMv(V@aJi)e7$ze(aHTyvM$T4a`s$YNg3D-w%ZR=`&VQs)(*865)xGX#fjtto~ zYu7#A$?7EM8}%XbMl;LQqTxQhbGOu+JEK_(i`p6)B>hNAvYgG##_eJ$(h*0 zQxAe=+^Bngm*z7T(Y5weuR8mO;tS4)DD_t%)#A*F6`n5#6=tsLBEjyz&O6?`i%BY* z{{C%fiC|^baOs7?jsusW-i4v@Yfpu((ViYze1Da7gT-A$^dlf(CGD5M2}eD6e0v!?N2y};SMn7L=~VgsIFoMM z!BCyKLJ6&&tvA6Rjm^9@d`B-`e**aui-w05genu?h`kmqtcHA1=Ki6wKytC-teX9k zOT>~b2pC{@sE@ByNT}v2Wat<7BJ@B2l2~Ya&r&&L-Pq;AG^_`Qd$=F{i%-~?oql4< zt>h8A;Z0yP`~liy_dE`lugz+GISuWEJjYW3;8~xsO#{q*RXPSAgyfLo#Tx)@Z@XT( zvt~GUKs?A%!_6K~jBL>b1)`%5a;iFpX5H}?u6>LROvG!obw)%pa@~p;uFJX5zOf|E_hgT)vs2@Wqter@FP|7-rJ4)5MA9XJn zHKo1^QLm8!b2m0WVqFQ?nheTn*mG}vc?KRB1#Vr-^Q;FrePf0E6nfVSZIe4aD~Y%8 z%g4+~PwStB;iWq;k?tLyvnOVWds*IhR}*RNKUnY&#F{u5ll`+dq!c7}iv!6Ure(GE zXoFh!eC_I|MDC~EhQEzo3(9VkuOS{l?kbL?d12`DrU9ael!-hv zHM59tnPDxfn8-PNo10J`MB@_|kIb_~DgDuY)W}kzQA5={o1&qPEXT{jzV|EW=$j4mS7+UPfz;&s6%L1%$J!IS$FX+=45yOo8oTq4b88sb)CW#c`N-QM z6$@6WN-u)RB-`?)Z(6I&1mT|!6SIB0FJ905vZA%RInHdyAt&ZL+qxfp#13kG(drt1 zexElVcG-IwG#CAKRZ_zq#U$`N+XJ?9@n|MaMcH6%j=Fi16fc@W3BXXxEXbMa5s znCYFCQ&EP&(&!?kzHWu;6S-eusUmyl_EdR02M>|l4%LFh9z{I4I6gj$q=RfkfC}7} zIJ%p!aUWfB?npmQhmiMvBN3VBqk(=ZB4vliOKfH#)AQh-sG3ZSiG9n-#ups8QNOOu 
zLH#ud;fwu>_m8&)41?34;9$ZgsXB+J%mPt%N1F#58{6Hj%^=3F;rCjE((AneFio7P zJm=L6RUxuE>u@6TUPUd1bjEAJ#kAxygS%teF5qQ@&1Ylu3LycdX_Th-nwT+KKtSk@aq{& z^~bf4*uU#{`BKnxX@gocyq>`Mc@(mHC@gkah<@bU?PfKz{U&o#r4(ArNj0uY@s7*= zX&=kLRWR=F z>w9NwZ$;_eh8~UA;k|l3)alnFcLG&a-7Vp=yw73U(4vEkN%9A?X!mGe-_`{CAGCMP zyE1OS*LA$=zP-N}P=RQtu5UE;a^V|!!l5wn-O9=Vo?O2pi<3}b83-`@H2dr6evA*= z1k+|xhhTI3dlSw94N=fZ=*JB8FH*cZ-eOMMdgUaOjplmzTS1R6{Fi8kv=oC_6W&f; zMIAxth&JaoM@L@@kKQD$LFsv_+m5=8TwkRbyvlz=u8HqlBr=|i;oQ9Oya*|7E-&b; zil!%bv&}&Ck`FJRPH#%v?$|5pjvsN@wsyHQ`X6iJpMDiuJH9M+wW9X`>>5$NeTb8@pS?u~MGeUnUPo$&rc(Hon3 zSH^Iu=y%zhBH&eGb477oRndZP5m;!B-M-a=}g#t zfp;&5q)1s!`C$?2^~7$Qr1Q5y$p*9}n`!~CMVS+&tzmsfTzP>6%Hy%P0s+HYy>Sk$ zOUOR3a%o0XgWEkxw=`y?*8pp`?=jv4G{==lJ$!L!CxS8q8Qp$ zP*CT)x-t&p?}4l0cMAskN{>(Wo6e^j%XS8~x8_viM6*eE?>`z3TIcO}5-Bv<75s8T zYtSZv)XW$X9w7}>^25(nwW8Gk!VbN)p)2?D?=)+85BM4L_RS6>PmQ8CZG#_uFLu>h zi9QItA`oJI`uScIa*yn~$lE?K+5Gt%#PR9er*uq$;Kks3pYEsSJ8OX@FUZwx`ZQbl z=KJQPuC|Zj_GB)9CcpExg&qH!Gfk}!)tyuxVm$!z)*s=fZrN8AZ}^ha;?tYM`anB-mmi}U)fd63XVT9>A4 z0_i7=h+jR)pF5_!zC11|x=Bdmo7~;Em+`hdVyCd4n?B6yyKnCv_r0Ns+fBHNbK+}j zyBeBxlyp1&oK){w;UL&`U-NR|_CYs~$7{5Ax_pQQ^23SbT|u8ies*nA^7s6V6%R^d zPBjB80^FP*0aeKBkr_bEvHI3gsxMnxNg}H@_2d8%Do4oX8~1WXmww$U^@Is@72cQb zbAdbrdk(QzLR!tyN$;2mV7W8Q)j#~YbS;(h9!|qamwSgLo0{4uMV(rm1U~%0oqIY3 z&ZCZ$m!aBA@NrAjZMMK3bs644OdG+Ytu(p)JYoIAXLlycELo%K?%P8?d&F(w=L}$4 zL1;DsgQI<4>uZ4~s(46FH_Xg*nGkL2-9y>Oo03V04Dww&V3JeDTMeO#xL4Hlwa7dU zpH$~cUq3hA@Q`&pzh{2-7k7&~_eUm03UK_@Jrp5;vkAHXDbop_#&X`?5)t@yf@L^; zxgk>)mw;Nl;DA<}CQ&CCcU?UHg6D7k)!O1<00Zi>%%kQnx5WUj5U|!S5EH z$=NgU+}TVH|BI%t42!Fa)rHbxMGAZrhvE*!26uM~#jUuzyB2qMiWi5$-Q9{826uND zoV$C@J^y&vGnuTc$eXNW?WPG@J1e%1tg?1TE{7;y_*Ci(#0sX8 zUKA(ig|_r06-W`k`g{U?B~!^D1mjyR&w`$$s_wy*ttpOtW0!v^Vw~tRAmMHL z1P)9pA{E&D%5_;U-mIwb=QB62%v$)cDuJKV!4c-Sma}00Cpyt@v$40DM;Z9m2x7)9 zRj=AzYb5FA_qKds`W>G-Wwt-2KUfw$mUsIjUHkhd$~*$CjJ_#LYpR5IJN^2akrjC@;XJ$1!y38=mV?`< zm02mucV=3t1|#9vEi1d{u?32NSj0A*`r@XgOtECT2}g?qg4e_VmuD_&(*HU*lM$)@ z?B%Evc-pDEc439%8OXqrJ!duS?fcDQ8De;d%bXUVcZy!vu`8W%L=aJ?6lff#d$m|IA1C@kvFbN#1~oD8qG zPa#56Uf#LY#O~6#G}eRnS^*Q<_v&nUtMedNQOFudl0!dGdNre@Xv^nn2-*cLBLWr3 zS8hwz%+P>#(ou8gI4g#h1+W#O>&pjxqWn6m%p>wg&V?p@P1M$wSzihbKQ+?mkE}Ya zEytr>s*q*e>Dn?2loyR}keEC*(^0{tKk)q*FO5*j;wu)LHz-?&M{qa1?oE zQ(GXh-glUamK4KAWup$mu3AIfRg!SqF|>ZY6s!y}^EASo+3Cl}<9d0y8BUH~@Fcpp z>`}<^eC}$DkNJau1xB`RB4XernlUSUkGG1AL~*Mz+S*wP8%l2`L)p43aCe|;(CueVP8?ya!%T{}r}BDvI?yLJZLK24u14^J zZ%m3p@AmX`ijlg+$NpYZo(2w9jEnSFlA*7{M!BYnRawTd=wlR1ZG2m+TZ-T3E)&gm z7gGCpsUmF6%NKnR4>>7b$%KVVe$~}#`ojpzB^PvQI$y7de@U zLE1zfNwS~?JM@3i`y9cR=-+yjD9T&lm*kMaLd%V_p^wifC4D{PC&TTE)$)f?bi5ru6j4zm8nV4_U04(@>e|y1Hbr9*Gd1G_u_XUZ>nzvnay5=U z?Voc9$KxU-%fuM2e0}xmHwO8!5x$tfOo!R`{^A<`5&*}jT3_FEOIuM|`hf8OQ7gjP zbaaVLdAfeJX(ay*c1Dp#da2&Pz#)*Wim3beN%8DduWWhKzJ}GvTWrwM*AycJd5y!_ z)#b`G=*6kpiYi`*h0XGVJRcjT15S-v|;1xGqy0ew&ZbT!8yMo*at2@wobR>NsDD#`yt&;m=UVS+)^Z_(@y7<*f;o)g}rR ztc0}WEUa&}t>x~*xka7~V_O4k2-Z#O+%EB1QslX?=R*kJ0V{9lCKAQ{#M={Q{y4JJ z?m&SbKyB7&Niub^G_!9uPRBr{DLz;zUde$zt{XXfSXf(>UNX7lmY|a8C>7Oq_rT0l z5=q%|*XNtt<(`xg!h_Pck)ugJt!6y;$w{*&-`zF$$z}ygb<1O~YK{=0>A||gjojR~ zo-1i>dL5p3ASg0yAMBfulGI!O7X@;fX(dZjjoRt$p)5j%&b$JHQNw<7RDpE9HnZfu z!{|+S;Wm>p5c*k1@P{9?I9;~Z8u=TY=;ck`DS#83SKgfwa=QK9E>z-UHzTqb>5<&x zsAs>N86aUC;>4(!x zK}j0!@^D8q6+Y>ivqa=AZ4|n>(ZkWCvO`=v5}C4RwYVxSV~ELkYV&+_;0mG4Y*?ru z6m;rNto=%bbTh_649f7~uD7uQWFVn$(a~6|; z+v;f{w_(W56R*B_C#FnzyS#xtZ?P627DA|<^|P%a=8(#KD+uC4tU zeM)O63rPjm-4Jq}^mrmWTgbDz!t>TD{3O6Rp{-z0;p3`a&i8D5oePu3V@sx=(baRO zIVFTujvZUo9NIpem4djcLP|n`(Nf zH=HdE(@5X43oFvj=0P{NcSJ^hA98$V;wp0Y%XeJ^tFz@sVGl=9l6HW^wR1F6-bA>+ 
z(BsE4UBVWE9o`+T8wzw%BOHw^PV}_l+56ZD(9#=+f#=y#^aN|vvy-RrGvn9YkSCgh zZ3ACj;OfH8J9@2)r8BmL^x>!H>$KqCJHCE`eiKVQ4Wq!Z!Q1F%n1UA1^8I6ZSH7l~ z4YanM%CGB+m}&?6?%{4_MC|{10XzV(Ix1@-dGH<1?Y64HR9cG^%B-oV>`f*xJH*^5 zFcq_R_3ONi#SNK6ua(#cF$KKlZjaE#yo%$Tdw9m)S@SB|-@M}N*?K>rU412CLyHRs zeL<0Ra_T`j(*##dnSIB0;oxgMY)WH#vYOFAdA)p$jbog$%+KyplX)?0HmSpF){7Cp zacs~5EtfUspOEiVdT5qlO;T@7%G3#r_{0-R*%L!ZBFrAR`?G~mzJ;jN{uyP_K{EGNjBzcU9xE!Z>QR_(!+QS-ZVOw zfm-^ePMp?7{@S)m8t{S~-JiM9rlS`!nE9`{Tt*@Lo#kwU*F8V2q@&KCF!aQAQHSVc z9$sp7-BwWnQcj2ofvfy#y3C z6hzx~K3YML;!}ZQ@CvWXhsY9Vbfy}H$eANY*BpV~@?>gj9G{PABkuBNOnSpvy@tqj zSC1=Qb>)#}cQ(88Wr>aAAm+s2#JA>CJA3ha%XwR zu2<1*<;#@1?cAc)t!=+QX0DFoC0>Xd_ojMv89_6ax$yH?bG6{=jCUK|-Xin?s~ zJCk8mS<}{AVwaD}rr-2dVPrJL!IpQacXrvRl`|w#NHE&MOU#fCbY!9RlGETAm7$xw zzR!u)IT8)wZ)q|oJ>F+bO*D4C5R@rJ2>qAA==^#wm1Oj(!`a#83R=xCUDU+b9#VwO zO&6lb*!DAUJWF8TUMIdq-248umotwU_GsLRK$pV60wb!p^u<(zBS(!uwvRc|3Q{hmQh7*uwq@6`wlSiM5YqQL7}+S!;kA9C^?Z#AO~}l_#&D>#vfRk57L?A!SQZL* zNBLZc4R*ca8{Eg%yMF(DmVL?N1g5P_hHF{JNj~V)8AI9tQRHszDb1!`va%qd<sSKJ91AMI38alATc!d@ z{>%|d`EHCu|Fy}RC^W29?!NwLmh<;gWHQ+1&$i^`fkY z4K{~OG+}jT_g*CRx3d5|LW2jnSEoK|X$852M5;#*S=t8cVPkSQI0=ORtHw=e2(Z0) zzIuByt9IV`vh)4S>)vI-!%38p@WwA<;=XWj~T%}5B^0n zDuaC2R$5q4+aUAC+hn^5Z|!x#Hjd-Aj)yCo%)z2iz^mS}x6l^R%k$nJHHbx*l@oQX z`;j$p1lXI%t8Zeq{ZJFKXUSd+KWkz?V;Qnwnz>N=i-gEHaC4_;$ts=S*HJ80>6?rL zG{I%OlxA<}NiT2YCivDjT7v$!Q^uSJe)sGjAF~c>ruk~*736`rzw81XM`#IHn5E~I zy8{tMuuz}ZabMfC2CG1a@2eLiGD1;KWAunSa(5@(*V6w|rnJV^;)c4eMOzGORYR+;`Z4PC z?n8KsdKmnVt28oSrHfD6u_q>84@i@)AhZ_asxi8)kS&iONnk9m%Vo*w)wf%T?oGX zf72(}P);qgdOyuh&ySet-QyK+AMF{ytn0W^V>zCNqGzTIysAig>8r!Bdv74*@ z%qaHkY&~4mfZaN&*+4iB^#xaaW#B7MYIXVpKsw@&S*cu-oXn?7gCxwv$jyv>_SYTS zf_Td5IAh@7Ru;zRhXs62u7bq--8U&(R)fE}&a|E3T4?bwI7h92Sys-1Ew{JA9aoK{7izmtXS@4?5&qr4TW_Nh2`A3#wy3>ytHC666TsTLi*?DkFl3KrN zfFB4i$ip3O{#Ci%ACTy*sT^MC`->{lSG%6Bup8!eepeT_R>t=w7iC#lb#H$;xJH`Y zCttN^W8uy`d=NJtq+p@-vzFSm&u2Y8IoX*TaGG;zu&=n{Gt032ed_|xXi(5Fc-TNA zBV{6Ci}4_R_+Ozh3y8Bzf8F3Zpy|HL$?KhMA47BzPD>PPNJo1YokLeT5H$71*lh?d2ndEfE%rZbmRgyI)^j_ktV zLg>P?8rb>UFI*AmM&Rj~JJrfiD+yKgGuz6cc2T}aJl4A@U-$ubT~T3ZBunZxXve8$|yPZR&c|v`ubRijg(_mt&?$S^)VW5@iXRBrtV!8@c={fh z{h&vW*SMFNu7U6+@JY45as<-I9h|a0_hsXIiD~0s7+qZ?|ixG?a$zP(;MD9eSfbJF?xN^#>>50S0eU_{Ml9L!Gs_5m= ztKRuS3aW2H8)j~!n;Y3|)Ra;MLG!+XC>o|tb^Yv*rPifQtBu9(DetA40|`nPYF1r~ z^OWDQdm@gY=Z%^M^|Q53YpE<*GG~r+X;zChC*)V~LBKAld^Q>>l*0H#!8@S>KqYMQ zoo#}FDj}>x-yz#bHdHecTAhledUJ<gax_RsyIESt`GI~d`NMh;w~NTn$&@lD`|fT zbRSQ$-vBBuhdH#6fW^M$XoM8+MJK`_!KNn}>AyVn7SSR7IUt=nkMuPIDd(6bDa|e$ zx7k=LbiS8DTmgYktHtTHjwGDSZ)lz;tg3;tEDrJ2p8s;9z77~RCL7gzXF!N+FOZkEiA2Lz8{_S(cgk6eB7|c8k3%- z^2UzyS}K$4l1(TE2UL=HLo}KQLN{@@K4}9z6FR@Ee$_UlJG6W#q%K1F8|d6O@8wzh zwV5YnISKh4h%|Dqyy*d;S@hjR3JjYfn~7@!u6>x2`L@0+CgFG#_N-}_&DB3!;;OQJ z?9M4VTRae0|4x9%?=-IR)AHY$@$)+yLuk>pdFWfj8$}^KXOp5%cZO!9gt2$|u@wtp zK#@(Dl|;FH3Y?<;Jh8Q_9@VLVb6%h^^AwT0(N6Y+*7Z?9_Vdj>$Z~v=-u44sXwU1O zOy~8W${YygRVQvR#q`zXn8c$z?}UH)zI2f|hpQv9#I*xNW)AU1Gs{(z_DO$<53V*!K9)c}0<7U~Z0Nt*t4#lNj{RI7 zjSGe0+$t)nbP?-s2F`eg&~4CK6%`yavNZBcxTmQILjiK5rt$Rw(?j=Jfg7o23PIFs zuVcT~9Pw$qi=|5rKYF|^?}Z3IF{NgRyjfQ-ID7ENuZvnv^d8?^Dri513b*+>IW1Gbtx@r9 zi+-TvRk&)CxU<|kS8Hr;>PbJ2NLLz3E*ZFX%Jzgc(>!64qD#@5Ewg2YN`g8>d3=(D zLLfn zx=e5k`g-N1wG1J`AwUwXyutI&#(!qU>y-o-oVjbZM_~F3Yy1q^wfS zJn~xroE1Mw2q7tNc*VBQtq-4;)^zz0d-kmI_k4`q(EZ2V?Kr+5x1pc5nK}+CqkMCg z0;|ew=xy5^C&$d;AsGit`h9bAShS|}19uKrO?YJI1v6D=?S>jvZ1xvwRr~b!S~~K# z2dR>+m41~;t#4OdH=Pxklr{!8%4vzQvsRl=+-;Mk!J31&#DxIl557D zfM~A_!Q@+Qg(YIH*)2d40U}u%Bbg$pse~s+ncc>kem`aF9Ou`?kY_ISvHBvsPEu_f z##BZdcU32qBd0QYzfUkIX={EN)3w7jCstl_@OcYQ)G<;9mrHJT2WW(Oe!lpOKVIb< 
z4*$brPy@--y!TL+om`iPtziMJC~1keE*(wk{OhPDhmC-?Th1{QA2g8?`o`0kEM0VN zzgS-+$DGg;O&*YN<xcG48=j>S zyIkzlhK84Jx0P)PD~;4hLU%~+>hCXjlTxNyUzX&Ox0ws6C=;#vr$v4&^>1$Em6iiX zolMIWouT}>Czd;2PkK}jHVEjGek54e$l;OU$wKJl**Q#Q_)M4jZF>0Onh3pQQ-o4l1$H}FSAXwj2m;`z3V2LO}%^xs(!!X+Flp) zG-$hVem!(~uAP$~SS28+SYV+n(Ao56a%G5_9-r1n^l>rnzuYfOTZjHg%Pbgy}HhpovR*`d|} z^IrJZ*z7>wOd?@rNB3S?f}8q0iYohZ{r&MFy{3Q#h$%G^%fzhorXJqgFMrec2p8|C zD{4Zx#HQTYgDA89!yKG4mJZAYdYk_CZp1ch6XoMZZ5zqprXT#j-pgF5or%|}W`%Fm zj~Oktf#lxRYqiAWBpHba_CBjTpuXJNNcHOI&S}S?AN)#)HD(BuuSn9`LQZa?++t@_ zb3L+TM+q+YKPFy3oV0rFz}&Dspj=E%e_byYrcHi5i~Xau52NbX{&P&HqIBz+_tm`Z zrDtTZ)^PBdK6v`iOut{}17DG@eA-Kk&}`g>eS%GYErT57(0UPZ+ESN*gjeY-4uhyd2vR!bFmj$rnLz)qe%HAMO%(@pAQP z5O$VLuxOs}Ms|Q<73MKhyc(65ray_f?XUlNX}@Du)!9+lI|r@!%_Fi^a^P6fJ-l?v z6CPKbLq8pjsV{0ya+#W7E90h4K8LGF%PJ(p!Lc`hoHoWCm|WQ%!Bv4XF@tV)@wcf+ zF%)Oo-f-LCk82ZJURSHpijEa*7v27cmw4VLFOs|YKkS3qW`bIEXd8~f*1!NV?U2@i z^vJ-qx7pNnqdFK}p(O@;b|2FDHutlxUDY*kktwI&9#!s}M9Gk`f);G#kRmxHRZdj@ zf8bfRf=8}7(A>AhWib_X-lxgzAAY)cdQB`}eByT5G_%>_DUFK1TJ`=1c|pq(7(_?! zvTb|W1e&YZHTQzi6YM2gtp&=G9z>kied&-K z0zSGc5c38||J-gVIre;Q_hJjxqWgVUuRl9qJ$3p+UmN===6ha!uLW*%!1u9{`g-p@ z#WWdVQ>GB*W{R69je&(cP^?3k%-L}F`EVP8gN~nTl9${?;;6rU zKAqvLPFRQO)NFRyDj@m6vt2KT-UH(5m0(h1KZC+;03TvbNTKxpSN#I~h02C7_oRs3 zsAc*6;GN*#6ebV9Gv_lsS5c|l`L(5XY8-419;S{+QQNfhbM297TTTSoj8+mJ2nQ zvGuSxFfW;&6%GYG=~5Ll6AE=kY;Q*{Bl&$ z=dWJnq0Ya^QJU|5fPYR81)6&Yit1cvya$U-5>xT?SY}%H;Bz|eV@~fF%oqk*ls__# z9_Z5CXwkN3Ej9c5NDY|vgg7PddRy)1DuG1Oh*Nt~7yWQUr@J7$y2R+(=;uZCry0hJZ(+cgbUpW|4xBBH4#&zUG1XEdnHm&W&Ttj`U zShU&2YJ>Iy*rFa2TncEM}qdWVg-eZS+)C1mTCXHVXzdN1r1^3 zOd~(JR3=RZDA`m(YwqixWInl1-;Y%$G3uetUEk5sN@OB70oD|wOb~2((WaY#J5Bo_} zT;ZJJCM^Nq<~M8weLiN`NcQQu?^iFnQ#0ls)=-xEJ(aq}c%WAc7VFX;6-hb*mg2)b zj4YUd^gk+`nEg#|&L8hBNKcoC+@S5J-vj!l8aG~Coyao+vju0&mI5>baWOFD^bAGdAYg}=ZkP?N z|Jd#!x|dT41ofL&E;$-A3;u1wP^_8NvN|K?4LvqvNce^1iWh^%)+YU3>lm{Yr>WMn z;}SfA@+Ld|ru?Co=5Sn(zdmw3XBD#lyGT6T!`pDTvVUk$dLzW2sXBDM{gM@u$#mm^ zVmFc7!ymaQ6Cj^gqH(YpHpcu6yoFJ}N8u4m#>xkF_>dI8Jk*-WFAk`iXmUYfIw!D7 zTyK|@XY4N<=Am)q=(z~VfEVj*yQM9&QPVkHDq7)i1vO3RQ|4p5L!yqe{?5GgH~V%! 
zd7%|q>b9$8zXP>k=ooAdj7sL!H0>Y?dYH|6-D+FgTMp=M?|}{0kPkQD18hQ}S3`AP z77gA$u9PIX^-0HOObIRQf!Ws1=!!ZD3QS{zOQF`5Nehpdt5Y@=dL%z?Wa7m@A{y;J zM~H&_U_5<9M$>PO0wh}Y{F@1LV56wH*S6*4Ctgmge_6nNV}nS`)+*%{KA-F(0R+ zEHVgryb^MWeTfti%`Edbw*VDmae6jJ%J~#w?HoHkxHUS&*pmaPr>}IW^9#*snvNoF zJfbDQi2hSuesXT}N{+22DfOfnqquM6RAr`P8}ObPrRedSDy`6l>;RZwLvM-%${qoZFGm744Fc9 zt`XM30j6(UF`^%B=DRF)j3)egDt zGQ5w*U;Mw~f@=vVeld>?zV~kD%z3;b&?O@<_+YKDc% z#&#Cc$7sffNm&GVaQKg}+`06%d&(uDHBCii3@il(YRVvzjE;!jEN=_zUZLS)s5yJH zV@6cAwOmm{)wSJ!ONf1eCYmSw9%<{TC&g zM3xLt%-*U;b@d{Tuc%o#+TybM*#~z&i^)k7R&tokV}A!NIu%zXobC=C+TXq1faMZz zJybL~Jp>*!xo5(H^Wvf#N37pX|E^y!5i(?68is*2_-kQnwOiWMR8#WGm#zL^6C%ma zUIU9kZY)g!Tm0Ff&)cqC;SMd*VbKKaNEVeA$L1xo$30dLTrQV%x^-UPFU;ivN z2ZqYhe=dyPgu7N7^QJK1GM=rg*Ze9uM^kyL`6attvVa>{2*sa;rJjHKW%Z)DEO+iN z{Y3(gg2zZIJe+*tyD`@`US@{4p6BO>bnew_&+qHeuV5xE2&7p(; z_7A%UulcR#!Q7rd>QW3_7K1G9P8B848W5ez&DN_F2LDdo=d1%o{>vnN<*maPk|*SM z{3bz&lDJG-%UZOhOxtdujo*UIE~c1d3m8Av4_3Bjc4@n;FI-!E{1`n1?pC7d6T%Df zvn?wJ{lk;nKH*^ya$xfqh z)w8}4R?SJyvIC%)C0l>jagz@=kSRngNI^>8AxOU^EwS@e|`Ek(c_ zV;=z-YHD$%2kyvSCC<{0(kBH`i8-nj)OP)pAhEr#oY8tiYfdbeUASSYaeW|F(kG!q zjCX4^2bwo#clX^(PEZ=w)G{WRESEDaZk)FUkgNZbqw$-aX;vz1HXM#oq#D4M4Iz(7 z|A3XwYD*TP>}Vy28xHQFszVx7%of5@U-LS)smifP`aF$DjVopw$=5mW(^Ru|Q|M$Xv-i;{_(nIeXIW%Hu$JSCd1ZIY$5l@1m`VtwLeyfIPt z#)GCiUs?PZXpy98P|^AVAiMpCar2Vozps;g8E!N_@D(ARE2sxynV*p2ce~zY5q=>p zv*#h?@|r`HjZyp}NUr^kycgr08HgY2lel@)6qAY_j7z)QdL>j&$s|$4BE0JhBKf_H zu9%aJrhM}p_Y3&Yf;I?BhxERUERG&d_*3xoPw>NAXYcs%YwjVU*?Wk{39S@l5E#8@c{Dh#RuFi4HrGd}D zNI5X9EykwgVQ4-YA z6*b-j5dJhB%s?PYTNJ>C@htuU;46M?iJl^_p$cJ!f(ane3p4#INhTDUf(^sP=z-# z88)r%mj-^R8NTXjST!=oqH^2+b2sTFCP1jS|9wJ|GLbMF!5dl|r5^Bk5y8K!+U&3< z+h(c#vcR_v5MW<`J1l_D!s~+Q{`k#l?ZaYYvi;j$xvOO5WKt8*D-l$7{2y0e|B^_t zH-ay(L;B@`uOh-}+l5*Gdmx+|4Tk&zCh#pOUH|&>1D#JkPj|Hr+Y*h|OT7z~+66~YN=nLF(+T`FXeZR+dV-V9VM{Dr z9I+Oc##?|Sl}R_hxHu#d=k4w8jE}GEQEB@gn%ZCUeUvJaR5c1T^L++oc^SxyQIL&OJJOBovjmuO(HoB#3Gwrvq-Uf#HfhgfN)+}2RWD66TN zMw1Axy$dweY;`ULH%os`R#j5^;d9Z0uyB95UHNq0wQd9G{bZ?@SqVS#1b>3Qpb4Yu8`>$1~qTovs@YKX1n}O~KV7IWa(0+;^ z`qlg1KKD#iSQz%@@s^yGHDOWTFT1+yRg2mF&2={Cx71RFTFj*V-O19gwYTg2Df-A` z<8Sfk&+A_2{KTFIq`9f%wgV^DDyVbC$cb(~z$!>uOERG$J-QveJKldsU zhOt)3TtDu;GZGwiT5*fzu z^L4T}kLJtOY;CJJTu(I0Z6J-X+@-Fk%kwg_vUO<0e7_m!(tzTcnzVFya#|Yh*C%2u zEF&TTZw+S8eSGCgK4t~6Q-^)n-8pqvHWI+K&#_09eyQa}zSk3V?mY~8y zida7NTmc{R*<#dh-+ zkfzv^6Km6{oYVMI_q!!C55Oi%zZHb&WlheGazd>);NeQu_au_q?xs`Yb$Esl$i?-?TTlM@jMgiIwTCH2=DwW@UVHP0bbrA#32 z(r@wQ`n_@k`{e4;0cf0~qob6xG$|DoyiwFp6C*o&Qlkw7?6@matdM=Q;Pf93PMzgE z9cq0YK!7PZnq}{tju2yiWPIG&BO5R{u=NtFR)ghkvQ5Re0{9>Vbk{a^Wo2a=TH5I! 
zyoKf`0>8m*1pX$Yn*H1g**pVLcx*aa)TE>V_4Vwr?Yetgf*Oa*4dvQUA|CgPFQI5e zGr)EAWBeNmOAH?r=&Ufz{x(>pjvdRTv;KU%J$|yzB=Ok6So1t)OkL#lzOR8_fhsKX zD>axHMbW~h(AZ&^-CvC{>v}!|ZmQ+H^U1q&wd;A8`Eb5GRrWYbz())2%uDu!`1cjS zlUq_#7@IYuaiVC*?R;$&1L8Br*!0zS_*3cv5-RGv|5J|kb(c0kc?rh)+a6h`%kC&P zD=RC&J{{dC3uL|B!%5HkasbdZGBS~%Gi~jYhjm^PqbL1crY&t8Wl}B(SF(Xk=^Zm0BE@^JU@=4h}yTN2aDqgoEH?AgO?@&ej@9 z^0=P-4sgkdU3oF|FPXH4`D{rj=1opP5e(==YyIT&!*z@2ZCNi?_2hL9P3DJg*yx{O zcJ=1dy+uPMB_(x0(EC7*?b4mI@1qSGOp_HV9u)281X$D2+i%&=F`6P?{rHWJVitJr zYYO+3gsYkb<5IxQ*GpMRNocW3cVuEh)!&hP|Ma0_hEExw&?gi!DBk~s&u*@iw_f^O z4rt;G*hH3#>IiuB&VDegO9-(WNi) zs{w>;y0!5MFwU*k+PS;lUd~tb1nKD`o13`+Z!rLJ+6g=g_%sfWD_Ouux-mBT%*CdPC;#x1LEntv^{y~Y7`%bYb(JZF?tv2;xivPUN zhmW7#cUly3_@o^jS)r?LB!G*h;NVCG{LN$*cfqfqK4dQeH`ArRYja%T_Yxu^@7T;H zZw{P%F9&c?i22e0mX791MU|B?0nH(2W>(Z^K-3}59pcvm76J8F#iz$OXB)i8Tz?K3 zNu*X^T2Es!f>-3f{f~u(h23&4Y_l&kF*$kQ_2o&ob2or%hR%x^`n!g1`LFY@%t5*S zr{3OPTt4@&9Kf|F8{Gk_k4+JgkgdT*lMYH`vG5}%tp^-DO&9N5QGNE0<1TVi$R zqYT^*p=|k99ll-G5=?v5CDtXs5sJ-ny}^8$Y6XB9_BqL)Ze|sw<>V;n=pq1~>ueyP zp;cjaMAbuYBIN;udmcV#3{H&Fc9NF^N#~=va7x*fxXjEkS>}$NYa1DP`6AVFwf2kN zkJjs*x{wy_z^EA$hC8F*%g>&a0kW$UlGHs_2mw7xKK#|EN>g|ZS7BIZK|)yhXipxvE&O9U1{lmgByKWICB8{p}lu-xFj#O~8kz zwzgJpHRk!*8vs~CK%i@}-wcE=J|cuJjQFdM0v-n@Ezbe_lEmeBfs%;q^9#}`$j5Pm@fi6!qg6@S~0G+Uv70)<6HT`@LT0($dk9QB#Kks0R}VXaBM*H6>-R*%&n_cIV?zr{X`Eze~$^z(Haf#MNrF9=X~b!{c?WfK0L3t?|8V%<`+8 zJaBT24hkHw`Zsahk@f5k3BI!Y!+VE^XR9U0(VS}=fICN?;jn{TS1AVHyYK76DIDQ` za?m$0AO&6?;e9nieX`oNXeTg*`0-*jz)*sz4sf#bb%gCKS zeP1#5>O`FcKY-qERP%fmqdNn3!1>@$z;uD6{`DK9erFcovj7mh&1uHf7lP{K(E%9G z0WlZ$wL0~4!P-6$YHR(a{h53zlwon0bbfUE!*cR@(K0caj{f)dfTl28EdQ`g@^&!8 z^WnpXf59A~KN=e@@x%kvJT}Obh1y-t!z5)L$xBS(I1K$Zt%TI%x%88R}^^YXTBTD&MYXkuBs&o?d07V26oCDUhav9tI>^D6<&BX6} zyii%8QQLd@w0jh*L;z%LKqBBQaH85#$@bP7U#7bAI zq`5h5XU8}qGTS3BW+Icb&ilbJbpeQt(P$x#L;ja{-lgDDvCWF6kCI~6fOMTavXkl6 zSuN*ihJ1WgzJKrk{jESpj~N6~Ll9}XyyH7GCw+}8WHF_RD>ITQ)DaU4*5)M11liyh zNXG5~xk}96*qIsS!6>}#tMdRda`G0yQM!B)D&rUA(F-H_`H0+_6+P?*#PZz*Paq!9 z1;L|k0g0Tz>T5y*=3oqwtYz8pTMQxKRXHr@5bs1ce7y8-J_4a>FfFm0>jyfS2%v@O zEJh*P$CJ5&xqwTnHRywH@|Dxnl#C@V@_T!g%Q{$76yX7qh?Bl91YEPJNV-fqnyd-e z%_NI5i~t~(VPaygXmJV(=H8t`60@?#JAIzo!5#aP*+z#mMg4<=;2%BjnsSV``ojU7 zK#>;OG0v1R@mfDk`kDe}*<;*L%rF%j^_#n2wn}b;*jZZA0lNwW%RuJN`+ON&dR;(j zKz92tJOe3RJtXx{cvMs&VDCKwCw}04;V#UrSZM;ILAW}O`7j|MplCK&suty%^#&sW z_$&|ri?#NfB3D;ebhi@?SkwwNwyQi6F@)0p%^URKj*ucrj3;L!QHiQHC%0e$H-oL7PLTONyzOO48$dn&GOBvllv&J{qE0~LqQ&Cbr;)iF3*iX zh!_I0#*ggqS{ALBhZ;2RNz4rX2VLv5I)?)g)#Jn?GBR@Yu`#XW_Fy_l@Ocxx$9urP zvRlI_YW=mFzkf>qFXW_Hu)<=}NvET%hL^KL`Ew5RH2K5K>@1um3b1qIKo#KKBUf8^ zUy`3f-9)Y%S^v6Z4{qXzs1GuqV$GMpMY6GOx4m(r`IaB|f8yv}+-qxe&NNu5?@eaQ z7Dz+~-tKA`=1v=1m^vbkfkQO6Fo47c?2pWt&Fj_(WVOjZ;p&FR#_Bw84!&)kx-BPU zWm*5fimpAL3H6O{(}>MwY0R}Y=hw0E!=$yOs3y0}+$u$>HJ6$qLM6NAELXj4Pc%v>h9$+B7;<82302K2(6P%=^wDhc-VfLOP_=^DO!(Jb1N#;b5{VXX#pW8vW(Sw?Oaj1?K9igpF~qIXD_|3v&7IIdt2ja1nKSB)Uf0!{vWip+Cq_WPRx(ol&k7?dymf^zB;lxMn^m^YKm~`ju?spTtsUMzUK{e#5 zOt!|s;if!emZ`XhUE;k!$LQ4Y-ftV)zo;IJ9t~Fvzg*VAyr}h{Ww z&-U~+`}f9lWEO+NCeMFMEG#T^BoJy^T1vo$&_J9H{TqwS8u6hK60%{XirYNpMaXq_e~#s=VZhk=(3P~P9OgkZjB2wOjXxxXmE@m zc}I}fu|mZ;cw&XZJNT%WAi ze7)=rFn+O+nT!4TT)A2CCxwv?k7+dGH%(`ax3v73Tt&0SeGO?+uPsk1+{$=+`^oux zegAmuOZ(=XTl-wnnFn?v7Y z+}>c;hKJTNE5gWj0Onc`3hap|z;kK(b0ZN-@@HNL?Ds*c#%*THhN@(|vNQ!==S)5{MDO)XVvXsA z5R6f+&U!*CItdSec-4_sbyGJ4HzgPs8$E@GafoHW4~GV{?5=o9{XYGFxZ^Q>EGsdR z{>O&i8%?NxgC>v07Io(1Q7DLuUZPJqaUxem8(~idPGVzM7uABkoC5}NR`labuGBT8 zdBXH^MI~pKl?lY3z#Cgy`;Aj&gHI0sy1WR|YQ#&rcrpZBQcs59xm{+czyZ3){iB4J 
zfdXA*T-}Xn4o(4U<^)YrrCl!HZIfG9pw5vYhhQ@t;yHl+c9-ajoBcC>j~MY!zDA|$1SB+#hWqL|r02v!OL$#QCZQwjp9?4VDGjBr7iQQi4ENaJyFi0*VqBxj zjLV5&Ouy3eGD(BlhvT4Addf4(MhWkG;Wlvm3#vp&n!YFWa!q@oe%@Q~efzf9`^1S7 z9E!^!-_5cY>Siiu#od8D$Fg>4O((ojn9}pTdOMRLYU? z8qw!#nLp8_lj+Ja7HL*)y*$!A@{3WCv2I`=$%cN($s%;b|6WOmu@cKiYxteIh^G-R zH?JPv_W-s*!VBxg5p0U55Vcs%F&_lKt@D*ZKECouw0=5t<7b9}rj0mps|NVR{sdoX z)C2L>D~Arw9yg_}Uyhs8SD3yV?YCcTIwxo ze!5L>&d|(|0x{ObuBSr#a$(Blnha%8r>%!%<>6bt3aN-Lcxch!b)DS}Wp~$0;20#pmr7~a8k_eYH-C1| literal 0 HcmV?d00001 diff --git a/docs/source/audio_processor.md b/docs/source/audio_processor.md new file mode 100644 index 0000000000..1a7bf8aebc --- /dev/null +++ b/docs/source/audio_processor.md @@ -0,0 +1,25 @@ +# AudioProcessor + +`TTS.utils.audio.AudioProcessor` is the core class for all the audio processing routines. It provides an API for + +- Feature extraction. +- Sound normalization. +- Reading and writing audio files. +- Sampling audio signals. +- Normalizing and denormalizing audio signals. +- Griffin-Lim vocoder. + +The `AudioProcessor` needs to be initialized with `TTS.config.shared_configs.BaseAudioConfig`. Any model config +also must inherit or initiate `BaseAudioConfig`. + +## AudioProcessor +```{eval-rst} +.. autoclass:: TTS.utils.audio.AudioProcessor + :members: +``` + +## BaseAudioConfig +```{eval-rst} +.. autoclass:: TTS.config.shared_configs.BaseAudioConfig + :members: +``` \ No newline at end of file diff --git a/docs/source/conf.py b/docs/source/conf.py new file mode 100644 index 0000000000..87c91d96f3 --- /dev/null +++ b/docs/source/conf.py @@ -0,0 +1,102 @@ +# Configuration file for the Sphinx documentation builder. +# +# This file only contains a selection of the most common options. For a full +# list see the documentation: +# https://www.sphinx-doc.org/en/master/usage/configuration.html + +# -- Path setup -------------------------------------------------------------- + +# If extensions (or modules to document with autodoc) are in another directory, +# add these directories to sys.path here. If the directory is relative to the +# documentation root, use os.path.abspath to make it absolute, like shown here. +# +import os +import sys + +sys.path.insert(0, os.path.abspath('../../TTS')) +autodoc_mock_imports = ["tts"] + +# -- Project information ----------------------------------------------------- +project = 'TTS' +copyright = "2021 Coqui GmbH, 2020 TTS authors" +author = 'Coqui GmbH' + +with open("../../TTS/VERSION", "r") as ver: + version = ver.read().strip() + +# The version info for the project you're documenting, acts as replacement for +# |version| and |release|, also used in various other places throughout the +# built documents. +release = version + +# The main toctree document. +master_doc = "index" + +# -- General configuration --------------------------------------------------- + +# Add any Sphinx extension module names here, as strings. They can be +# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom +# ones. +extensions = [ +] + +# Add any paths that contain templates here, relative to this directory. +templates_path = ['_templates'] + +# List of patterns, relative to source directory, that match files and +# directories to ignore when looking for source files. +# This pattern also affects html_static_path and html_extra_path. 
+exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'TODO/*']
+
+source_suffix = [".rst", ".md"]
+
+
+# -- Options for HTML output -------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages.  See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'furo'
+html_title = "TTS"
+html_theme_options = {
+    "light_logo": "logo.png",
+    "dark_logo": "logo.png",
+    "sidebar_hide_name": True,
+}
+
+html_sidebars = {
+    '**': [
+        "sidebar/scroll-start.html",
+        "sidebar/brand.html",
+        "sidebar/search.html",
+        "sidebar/navigation.html",
+        "sidebar/ethical-ads.html",
+        "sidebar/scroll-end.html",
+    ]
+}
+
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['_static']
+
+
+# using markdown
+extensions = [
+    'sphinx.ext.autodoc',
+    'sphinx.ext.autosummary',
+    'sphinx.ext.doctest',
+    'sphinx.ext.intersphinx',
+    'sphinx.ext.todo',
+    'sphinx.ext.coverage',
+    'sphinx.ext.napoleon',
+    'sphinx.ext.viewcode',
+    'sphinx.ext.autosectionlabel',
+    'myst_parser',
+    "sphinx_copybutton",
+    "sphinx_inline_tabs",
+]
+
+# 'sphinxcontrib.katex',
+# 'sphinx.ext.autosectionlabel',
diff --git a/docs/source/configuration.md b/docs/source/configuration.md
new file mode 100644
index 0000000000..cde7e073e9
--- /dev/null
+++ b/docs/source/configuration.md
@@ -0,0 +1,59 @@
+# Configuration
+
+We use 👩‍✈️[Coqpit](https://github.com/coqui-ai/coqpit) for configuration management. It provides basic static type checking and serialization capabilities on top of native Python `dataclasses`. Here is how a simple configuration looks with Coqpit.
+
+```python
+from dataclasses import asdict, dataclass, field
+from typing import List, Union
+from coqpit.coqpit import MISSING, Coqpit, check_argument
+
+
+@dataclass
+class SimpleConfig(Coqpit):
+    val_a: int = 10
+    val_b: int = None
+    val_d: float = 10.21
+    val_c: str = "Coqpit is great!"
+    vol_e: bool = True
+    # mandatory field
+    # raise an error when accessing the value if it is not changed. It is a way to define mandatory fields.
+    val_k: int = MISSING
+    # optional field
+    val_dict: dict = field(default_factory=lambda: {"val_aa": 10, "val_ss": "This is in a dict."})
+    # list of list
+    val_listoflist: List[List] = field(default_factory=lambda: [[1, 2], [3, 4]])
+    val_listofunion: List[List[Union[str, int, bool]]] = field(
+        default_factory=lambda: [[1, 3], [1, "Hi!"], [True, False]]
+    )
+
+    def check_values(
+        self,
+    ):  # you can define explicit constraints manually or by `check_argument()`
+        """Check config fields"""
+        c = asdict(self)  # avoid unexpected changes on `self`
+        check_argument("val_a", c, restricted=True, min_val=10, max_val=2056)
+        check_argument("val_b", c, restricted=True, min_val=128, max_val=4058, allow_none=True)
+        check_argument("val_c", c, restricted=True)
+```
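+
+Below is a minimal usage sketch, continuing from the snippet above (so `SimpleConfig` and `asdict` are already in scope). It only relies on the fields and `check_values()` defined in `SimpleConfig`; the values are made up for illustration, and on top of this Coqpit adds the JSON serialization mentioned earlier.
+
+```python
+config = SimpleConfig(val_k=1024)  # `val_k` defaults to MISSING, so it must be provided
+config.check_values()              # runs the `check_argument()` constraints defined above
+print(config.val_c)                # -> "Coqpit is great!"
+print(asdict(config))              # Coqpit configs are plain dataclasses underneath
+```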
+
+In TTS, each model must have a configuration class that exposes all the values necessary for its lifetime.
+
+It defines model architecture, hyper-parameters, training, and inference settings. For our models, we merge all the fields into a single configuration class for ease. It may not look like a wise practice, but it enables easier bookkeeping and reproducible experiments.
+
+The general configuration hierarchy looks like below:
+
+```
+ModelConfig()
+     |
+     | -> ...                  # model specific configurations
+     | -> ModelArgs()          # model class arguments
+     | -> BaseDatasetConfig()  # only for tts models
+     | -> BaseXModelConfig()   # Generic fields for `tts` and `vocoder` models.
+                |
+                | -> BaseTrainingConfig()  # trainer fields
+                | -> BaseAudioConfig()     # audio processing fields
+```
+
+In the example above, ```ModelConfig()``` is the final configuration that the model receives, and it has all the fields necessary for the model.
+
+We host pre-defined model configurations under ```TTS//configs/```. Although we recommend a unified config class, you can decompose it as you like for your custom models, as long as all the fields for the trainer, model, and inference APIs are provided.
\ No newline at end of file
diff --git a/docs/source/contributing.md b/docs/source/contributing.md
new file mode 100644
index 0000000000..5b2725094f
--- /dev/null
+++ b/docs/source/contributing.md
@@ -0,0 +1,3 @@
+```{include} ../../CONTRIBUTING.md
+:relative-images:
+```
diff --git a/docs/source/converting_torch_to_tf.md b/docs/source/converting_torch_to_tf.md
new file mode 100644
index 0000000000..6b992eb0d6
--- /dev/null
+++ b/docs/source/converting_torch_to_tf.md
@@ -0,0 +1,21 @@
+# Converting Torch Tacotron to TF 2
+
+Currently, 🐸TTS supports the vanilla Tacotron2 and MelGAN models in TF 2. It does not support advanced attention methods and other small tricks used by the Torch models. You can convert any Torch model trained after v0.0.2.
+
+You can also export TF 2 models to TFLite for even faster inference (a generic sketch is shown at the end of this page).
+
+## How to convert from Torch to TF 2.0
+Make sure you have TensorFlow v2.2 installed. It is not installed by default by 🐸TTS.
+
+All the TF related code stays under the ```tf``` folder.
+
+To convert a **compatible** Torch model, run the following command with the right arguments:
+
+```bash
+python TTS/bin/convert_tacotron2_torch_to_tf.py\
+    --torch_model_path /path/to/torch/model.pth.tar \
+    --config_path /path/to/model/config.json\
+    --output_path /path/to/output/tf/model
+```
+
+This will create a TF model file. Notice that our model format is not compatible with the official TF checkpoints. We created our custom format to match the Torch checkpoints we use. Therefore, use the ```load_checkpoint``` and ```save_checkpoint``` functions provided under ```TTS.tf.generic_utils```.
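+
+## Exporting to TFLite
+
+As a rough illustration of the TFLite export mentioned above, the generic TF 2 → TFLite flow is sketched below. This is not the repository's own conversion tooling (if a dedicated script exists under `TTS/bin`, prefer it); it only assumes that `tf_model` holds the converted `tf.keras` model in memory.
+
+```python
+import tensorflow as tf
+
+# `tf_model` is assumed to be the converted tf.keras model produced by the script above.
+converter = tf.lite.TFLiteConverter.from_keras_model(tf_model)
+# Tacotron-style models rely on dynamic shapes and ops that are not TFLite builtins,
+# so the TF-op fallback usually needs to be enabled.
+converter.target_spec.supported_ops = [
+    tf.lite.OpsSet.TFLITE_BUILTINS,
+    tf.lite.OpsSet.SELECT_TF_OPS,
+]
+tflite_model = converter.convert()
+
+with open("tts_model.tflite", "wb") as f:
+    f.write(tflite_model)
+```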
diff --git a/docs/source/dataset.md b/docs/source/dataset.md
new file mode 100644
index 0000000000..92d381aca5
--- /dev/null
+++ b/docs/source/dataset.md
@@ -0,0 +1,25 @@
+# Datasets
+
+## TTS Dataset
+
+```{eval-rst}
+.. autoclass:: TTS.tts.datasets.TTSDataset
+    :members:
+```
+
+## Vocoder Dataset
+
+```{eval-rst}
+.. autoclass:: TTS.vocoder.datasets.gan_dataset.GANDataset
+    :members:
+```
+
+```{eval-rst}
+.. autoclass:: TTS.vocoder.datasets.wavegrad_dataset.WaveGradDataset
+    :members:
+```
+
+```{eval-rst}
+.. autoclass:: TTS.vocoder.datasets.wavernn_dataset.WaveRNNDataset
+    :members:
+```
\ No newline at end of file
diff --git a/docs/source/faq.md b/docs/source/faq.md
new file mode 100644
index 0000000000..6f5de6d83c
--- /dev/null
+++ b/docs/source/faq.md
@@ -0,0 +1,114 @@
+# Humble FAQ
+We tried to collect common issues and questions we receive about 🐸TTS. It is worth checking before going deeper.
+
+## Errors with a pre-trained model. How can I resolve this?
+- Make sure you use the right commit version of 🐸TTS. Each pre-trained model has its corresponding version that needs to be used. It is defined on the model table.
+- If it is still problematic, post your problem on [Discussions](https://github.com/coqui-ai/TTS/discussions). Please give as many details as possible (error message, your TTS version, your TTS model and config.json, etc.).
+- If you feel like it's a bug to be fixed, then prefer GitHub issues with the same level of detail.
+
+## What are the requirements of a good 🐸TTS dataset?
+* https://github.com/coqui-ai/TTS/wiki/What-makes-a-good-TTS-dataset
+
+## How should I choose the right model?
+- First, train Tacotron. It is smaller and faster to experiment with. If it performs poorly, try Tacotron2.
+- Tacotron models produce the most natural voice if your dataset is not too noisy.
+- If both models do not perform well and especially the attention does not align, then try AlignTTS or GlowTTS.
+- If you need faster models, consider SpeedySpeech, GlowTTS or AlignTTS. Keep in mind that SpeedySpeech requires a pre-trained Tacotron or Tacotron2 model to compute text-to-speech alignments.
+
+## How can I train my own `tts` model?
+0. Check your dataset with the notebooks in the [dataset_analysis](https://github.com/coqui-ai/TTS/tree/master/notebooks/dataset_analysis) folder. Use [this notebook](https://github.com/coqui-ai/TTS/blob/master/notebooks/dataset_analysis/CheckSpectrograms.ipynb) to find the right audio processing parameters. A better set of parameters results in better audio synthesis.
+
+1. Write your own dataset `formatter` in `datasets/formatters.py` or format your dataset as one of the supported datasets, like LJSpeech.
+    A `formatter` parses the metadata file and converts it to a list of training samples.
+
+2. If you have a dataset with a different alphabet than English, you need to set your own character list in the ```config.json```.
+    - If you use phonemes for training and your language is supported [here](https://github.com/rhasspy/gruut#supported-languages), you don't need to set your character list.
+    - You can use `TTS/bin/find_unique_chars.py` to get the characters used in your dataset.
+
+3. Write your own text cleaner in ```utils.text.cleaners```. It is not always necessary, except when you have a different alphabet or language-specific requirements.
+    - A `cleaner` performs number and abbreviation expansion and text normalization. Basically, it converts the written text to its spoken format.
+    - If you want to keep it simple, you can try using ```basic_cleaners```.
+
+4. Fill in a ```config.json```. Go over each parameter one by one and consider it in light of its attached explanation.
+    - Check the `Coqpit` class created for your target model. Coqpit classes for `tts` models are under `TTS/tts/configs/`.
+    - You just need to define the fields you need/want to change in your `config.json`. For the rest, their default values are used.
+    - 'sample_rate', 'phoneme_language' (if phonemes are enabled), 'output_path', 'datasets', 'text_cleaner' are the fields you need to edit in most cases.
+    - Here is a sample `config.json` for training a `GlowTTS` network.
+ ```json + { + "model": "glow_tts", + "batch_size": 32, + "eval_batch_size": 16, + "num_loader_workers": 4, + "num_eval_loader_workers": 4, + "run_eval": true, + "test_delay_epochs": -1, + "epochs": 1000, + "text_cleaner": "english_cleaners", + "use_phonemes": false, + "phoneme_language": "en-us", + "phoneme_cache_path": "phoneme_cache", + "print_step": 25, + "print_eval": true, + "mixed_precision": false, + "output_path": "recipes/ljspeech/glow_tts/", + "test_sentences": ["Test this sentence.", "This test sentence.", "Sentence this test."], + "datasets":[{"name": "ljspeech", "meta_file_train":"metadata.csv", "path": "recipes/ljspeech/LJSpeech-1.1/"}] + } + ``` + +6. Train your model. + - SingleGPU training: ```CUDA_VISIBLE_DEVICES="0" python train_tts.py --config_path config.json``` + - MultiGPU training: ```CUDA_VISIBLE_DEVICES="0,1,2" python distribute.py --script train_tts.py --config_path config.json``` + - This command uses all the GPUs given in ```CUDA_VISIBLE_DEVICES```. If you don't specify, it uses all the GPUs available. + +**Note:** You can also train your model using pure 🐍 python. Check ```{eval-rst} :ref: 'tutorial_for_nervous_beginners'```. + +## How can I train in a different language? +- Check steps 2, 3, 4, 5 above. + +## How can I train multi-GPUs? +- Check step 5 above. + +## How can I check model performance? +- You can inspect model training and performance using ```tensorboard```. It will show you loss, attention alignment, model output. Go with the order below to measure the model performance. +1. Check ground truth spectrograms. If they do not look as they are supposed to, then check audio processing parameters in ```config.json```. +2. Check train and eval losses and make sure that they all decrease smoothly in time. +3. Check model spectrograms. Especially, training outputs should look similar to ground truth spectrograms after ~10K iterations. +4. Your model would not work well at test time until the attention has a near diagonal alignment. This is the sublime art of TTS training. + - Attention should converge diagonally after ~50K iterations. + - If attention does not converge, the probabilities are; + - Your dataset is too noisy or small. + - Samples are too long. + - Batch size is too small (batch_size < 32 would be having a hard time converging) + - You can also try other attention algorithms like 'graves', 'bidirectional_decoder', 'forward_attn'. + - 'bidirectional_decoder' is your ultimate savior, but it trains 2x slower and demands 1.5x more GPU memory. + - You can also try the other models like AlignTTS or GlowTTS. + +## How do I know when to stop training? +There is no single objective metric to decide the end of a training since the voice quality is a subjective matter. + +In our model trainings, we follow these steps; + +- Check test time audio outputs, if it does not improve more. +- Check test time attention maps, if they look clear and diagonal. +- Check validation loss, if it converged and smoothly went down or started to overfit going up. +- If the answer is YES for all of the above, then test the model with a set of complex sentences. For English, you can use the `TestAttention` notebook. + +Keep in mind that the approach above only validates the model robustness. It is hard to estimate the voice quality without asking the actual people. +The best approach is to pick a set of promising models and run a Mean-Opinion-Score study asking actual people to score the models. + +## My model does not learn. How can I debug? 
+- Go over the steps under "How can I check model performance?" + +## Attention does not align. How can I make it work? +- Check the 4th step under "How can I check model performance?" + +## How can I test a trained model? +- The best way is to use `tts` or `tts-server` commands. For details check {ref}`here `. +- If you need to code your own ```TTS.utils.synthesizer.Synthesizer``` class. + +## My Tacotron model does not stop - I see "Decoder stopped with 'max_decoder_steps" - Stopnet does not work. +- In general, all of the above relates to the `stopnet`. It is the part of the model telling the `decoder` when to stop. +- In general, a poor `stopnet` relates to something else that is broken in your model or dataset. Especially the attention module. +- One common reason is the silent parts in the audio clips at the beginning and the ending. Check ```trim_db``` value in the config. You can find a better value for your dataset by using ```CheckSpectrogram``` notebook. If this value is too small, too much of the audio will be trimmed. If too big, then too much silence will remain. Both will curtail the `stopnet` performance. \ No newline at end of file diff --git a/docs/source/formatting_your_dataset.md b/docs/source/formatting_your_dataset.md new file mode 100644 index 0000000000..cc0e456a38 --- /dev/null +++ b/docs/source/formatting_your_dataset.md @@ -0,0 +1,82 @@ +# Formatting Your Dataset + +For training a TTS model, you need a dataset with speech recordings and transcriptions. The speech must be divided into audio clips and each clip needs transcription. + +If you have a single audio file and you need to split it into clips, there are different open-source tools for you. We recommend Audacity. It is an open-source and free audio editing software. + +It is also important to use a lossless audio file format to prevent compression artifacts. We recommend using `wav` file format. + +Let's assume you created the audio clips and their transcription. You can collect all your clips under a folder. Let's call this folder `wavs`. + +``` +/wavs + | - audio1.wav + | - audio2.wav + | - audio3.wav + ... +``` + +You can either create separate transcription files for each clip or create a text file that maps each audio clip to its transcription. In this file, each line must be delimitered by a special character separating the audio file name from the transcription. And make sure that the delimiter is not used in the transcription text. + +We recommend the following format delimited by `|`. + +``` +# metadata.txt + +audio1.wav | This is my sentence. +audio2.wav | This is maybe my sentence. +audio3.wav | This is certainly my sentence. +audio4.wav | Let this be your sentence. +... +``` + +In the end, we have the following folder structure +``` +/MyTTSDataset + | + | -> metadata.txt + | -> /wavs + | -> audio1.wav + | -> audio2.wav + | ... +``` + +The format above is taken from widely-used the [LJSpeech](https://keithito.com/LJ-Speech-Dataset/) dataset. You can also download and see the dataset. 🐸TTS already provides tooling for the LJSpeech. if you use the same format, you can start training your models right away. + +## Dataset Quality + +Your dataset should have good coverage of the target language. It should cover the phonemic variety, exceptional sounds and syllables. This is extremely important for especially non-phonemic languages like English. + +For more info about dataset qualities and properties check our [post](https://github.com/coqui-ai/TTS/wiki/What-makes-a-good-TTS-dataset). 
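Before moving on to using the dataset in 🐸TTS, it can be worth sanity-checking the `wav_file | transcription` metadata format described above. The snippet below is only a minimal, illustrative sketch (it is not part of 🐸TTS): it assumes the `metadata.txt` / `wavs` layout shown earlier and flags missing audio files, empty transcriptions, and lines where the delimiter does not split the line into exactly two fields.

```python
import os


def check_metadata(metadata_path, wavs_dir, delimiter="|"):
    """Minimal sanity check for a `wav_file | transcription` metadata file.

    Illustrative only; adapt the paths, delimiter and checks to your own dataset.
    """
    with open(metadata_path, encoding="utf-8") as metafile:
        for line_no, line in enumerate(metafile, start=1):
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments such as `# metadata.txt`
            parts = [part.strip() for part in line.split(delimiter)]
            if len(parts) != 2:
                # Either the delimiter is missing or it also occurs inside the transcription.
                print(f"line {line_no}: expected exactly one '{delimiter}': {line}")
                continue
            wav_name, text = parts
            if not os.path.isfile(os.path.join(wavs_dir, wav_name)):
                print(f"line {line_no}: missing audio file '{wav_name}'")
            if not text:
                print(f"line {line_no}: empty transcription for '{wav_name}'")


# e.g. check_metadata("MyTTSDataset/metadata.txt", "MyTTSDataset/wavs")
```

If this check passes cleanly, the `formatter` described in the next section mostly boils down to returning the same fields as a list per clip.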
+ +## Using Your Dataset in 🐸TTS + +After you collect and format your dataset, you need to check two things. Whether you need a `formatter` and a `text_cleaner`. The `formatter` loads the text file (created above) as a list and the `text_cleaner` performs a sequence of text normalization operations that converts the raw text into the spoken representation (e.g. converting numbers to text, acronyms, and symbols to the spoken format). + +If you use a different dataset format then the LJSpeech or the other public datasets that 🐸TTS supports, then you need to write your own `formatter`. + +If your dataset is in a new language or it needs special normalization steps, then you need a new `text_cleaner`. + +What you get out of a `formatter` is a `List[List[]]` in the following format. + +``` +>>> formatter(metafile_path) +[["audio1.wav", "This is my sentence.", "MyDataset"], +["audio1.wav", "This is maybe a sentence.", "MyDataset"], +... +] +``` + +Each sub-list is parsed as ```["", "", "]```. +`````` is the dataset name for single speaker datasets and it is mainly used +in the multi-speaker models to map the speaker of the each sample. But for now, we only focus on single speaker datasets. + +The purpose of a `formatter` is to parse your metafile and load the audio file paths and transcriptions. Then, its output passes to a `Dataset` object. It computes features from the audio signals, calls text normalization routines, and converts raw text to +phonemes if needed. + +See `TTS.tts.datasets.TTSDataset`, a generic `Dataset` implementation for the `tts` models. + +See `TTS.vocoder.datasets.*`, for different `Dataset` implementations for the `vocoder` models. + +See `TTS.utils.audio.AudioProcessor` that includes all the audio processing and feature extraction functions used in a +`Dataset` implementation. Feel free to add things as you need.passed \ No newline at end of file diff --git a/docs/source/implementing_a_new_model.md b/docs/source/implementing_a_new_model.md new file mode 100644 index 0000000000..5a9aaae7e1 --- /dev/null +++ b/docs/source/implementing_a_new_model.md @@ -0,0 +1,61 @@ +# Implementing a Model + +1. Implement layers. + + You can either implement the layers under `TTS/tts/layers/new_model.py` or in the model file `TTS/tts/model/new_model.py`. + You can also reuse layers already implemented. + +2. Test layers. + + We keep tests under `tests` folder. You can add `tts` layers tests under `tts_tests` folder. + Basic tests are checking input-output tensor shapes and output values for a given input. Consider testing extreme cases that are more likely to cause problems like `zero` tensors. + +3. Implement loss function. + + We keep loss functions under `TTS/tts/layers/losses.py`. You can also mix-and-match implemented loss functions as you like. + + A loss function returns a dictionary in a format ```{’loss’: loss, ‘loss1’:loss1 ...}``` and the dictionary must at least define the `loss` key which is the actual value used by the optimizer. All the items in the dictionary are automatically logged on the terminal and the Tensorboard. + +4. Test the loss function. + + As we do for the layers, you need to test the loss functions too. You need to check input/output tensor shapes, + expected output values for a given input tensor. For instance, certain loss functions have upper and lower limits and + it is a wise practice to test with the inputs that should produce these limits. + +5. Implement `MyModel`. 
+ + In 🐸TTS, a model class is a self-sufficient implementation of a model directing all the interactions with the other + components. It is enough to implement the API provided by the `BaseModel` class to comply. + + A model interacts with the `Trainer API` for training, `Synthesizer API` for inference and testing. + + A 🐸TTS model must return a dictionary by the `forward()` and `inference()` functions. This dictionary must also include the `model_outputs` key that is considered as the main model output by the `Trainer` and `Synthesizer`. + + You can place your `tts` model implementation under `TTS/tts/models/new_model.py` then inherit and implement the `BaseTTS`. + + There is also the `callback` interface by which you can manipulate both the model and the `Trainer` states. Callbacks give you + the infinite flexibility to add custom behaviours for your model and training routines. + + For more details, see {ref}`BaseTTS ` and `TTS/utils/callbacks.py`. + +6. Optionally, define `MyModelArgs`. + + `MyModelArgs` is a 👨‍✈️Coqpit class that sets all the class arguments of the `MyModel`. It should be enough to pass + an `MyModelArgs` instance to initiate the `MyModel`. + +7. Test `MyModel`. + + As the layers and the loss functions, it is recommended to test your model. One smart way for testing is that you + create two models with the exact same weights. Then we run a training loop with one of these models and + compare the weights with the other model. All the weights need to be different in a passing test. Otherwise, it + is likely that a part of the model is malfunctioning or not even attached to the model's computational graph. + +8. Define `MyModelConfig`. + + Place `MyModelConfig` file under `TTS/models/configs`. It is enough to inherit the `BaseTTSConfig` to make your + config compatible with the `Trainer`. You should also include `MyModelArgs` as a field if defined. The rest of the fields should define the model + specific values and parameters. + +9. Write Docstrings. + + We love you more when you document your code. ❤️ diff --git a/docs/source/index.md b/docs/source/index.md new file mode 100644 index 0000000000..82792feef3 --- /dev/null +++ b/docs/source/index.md @@ -0,0 +1,40 @@ + +```{include} ../../README.md +:relative-images: +``` + +---- + +# Documentation Content +```{eval-rst} +.. toctree:: + :maxdepth: 2 + :caption: Get started + + tutorial_for_nervous_beginners + installation + faq + contributing + +.. toctree:: + :maxdepth: 2 + :caption: Using 🐸TTS + + inference + implementing_a_new_model + training_a_model + configuration + formatting_your_dataset + what_makes_a_good_dataset + tts_datasets + +.. toctree:: + :maxdepth: 2 + :caption: Main Classes + + trainer_api + audio_processor + model_api + configuration + dataset +``` \ No newline at end of file diff --git a/docs/source/inference.md b/docs/source/inference.md new file mode 100644 index 0000000000..544473bf76 --- /dev/null +++ b/docs/source/inference.md @@ -0,0 +1,103 @@ +(synthesizing_speech)= +# Synthesizing Speech + +First, you need to install TTS. We recommend using PyPi. You need to call the command below: + +```bash +$ pip install TTS +``` + +After the installation, 2 terminal commands are available. + +1. TTS Command Line Interface (CLI). - `tts` +2. Local Demo Server. - `tts-server` + +## On the Commandline - `tts` +![cli.gif](https://github.com/coqui-ai/TTS/raw/main/images/tts_cli.gif) + +After the installation, 🐸TTS provides a CLI interface for synthesizing speech using pre-trained models. 
You can either use your own model or the release models under 🐸TTS. + +Listing released 🐸TTS models. + +```bash +tts --list_models +``` + +Run a TTS model, from the release models list, with its default vocoder. (Simply copy and paste the full model names from the list as arguments for the command below.) + +```bash +tts --text "Text for TTS" \ + --model_name "///" \ + --out_path folder/to/save/output.wav +``` + +Run a tts and a vocoder model from the released model list. Note that not every vocoder is compatible with every TTS model. + +```bash +tts --text "Text for TTS" \ + --model_name "///" \ + --vocoder_name "///" \ + --out_path folder/to/save/output.wav +``` + +Run your own TTS model (Using Griffin-Lim Vocoder) + +```bash +tts --text "Text for TTS" \ + --model_path path/to/model.pth.tar \ + --config_path path/to/config.json \ + --out_path folder/to/save/output.wav +``` + +Run your own TTS and Vocoder models + +```bash +tts --text "Text for TTS" \ + --config_path path/to/config.json \ + --model_path path/to/model.pth.tar \ + --out_path folder/to/save/output.wav \ + --vocoder_path path/to/vocoder.pth.tar \ + --vocoder_config_path path/to/vocoder_config.json +``` + +Run a multi-speaker TTS model from the released models list. + +```bash +tts --model_name "///" --list_speaker_idxs # list the possible speaker IDs. +tts --text "Text for TTS." --out_path output/path/speech.wav --model_name "//" --speaker_idx "" +``` + +**Note:** You can use ```./TTS/bin/synthesize.py``` if you prefer running ```tts``` from the TTS project folder. + +## On the Demo Server - `tts-server` + + +![server.gif](https://github.com/coqui-ai/TTS/raw/main/images/demo_server.gif) + +You can boot up a demo 🐸TTS server to run an inference with your models. Note that the server is not optimized for performance +but gives you an easy way to interact with the models. + +The demo server provides pretty much the same interface as the CLI command. + +```bash +tts-server -h # see the help +tts-server --list_models # list the available models. +``` + +Run a TTS model, from the release models list, with its default vocoder. +If the model you choose is a multi-speaker TTS model, you can select different speakers on the Web interface and synthesize +speech. + +```bash +tts-server --model_name "///" +``` + +Run a TTS and a vocoder model from the released model list. Note that not every vocoder is compatible with every TTS model. + +```bash +tts-server --model_name "///" \ + --vocoder_name "///" +``` + +## TorchHub +You can also use [this simple colab notebook](https://colab.research.google.com/drive/1iAe7ZdxjUIuN6V4ooaCt0fACEGKEn7HW?usp=sharing) using TorchHub to synthesize speech. \ No newline at end of file diff --git a/docs/source/installation.md b/docs/source/installation.md new file mode 100644 index 0000000000..6532ee8e6c --- /dev/null +++ b/docs/source/installation.md @@ -0,0 +1,39 @@ +# Installation + +🐸TTS supports python >=3.6 <=3.9 and tested on Ubuntu 18.10, 19.10, 20.10. + +## Using `pip` + +`pip` is recommended if you want to use 🐸TTS only for inference. + +You can install from PyPI as follows: + +```bash +pip install TTS # from PyPI +``` + +By default, this only installs the requirements for PyTorch. To install the tensorflow dependencies as well, use the `tf` extra. + +```bash +pip install TTS[tf] +``` + +Or install from Github: + +```bash +pip install git+https://github.com/coqui-ai/TTS # from Github +``` + +## Installing From Source + +This is recommended for development and more control over 🐸TTS. 
+ +```bash +git clone https://github.com/coqui-ai/TTS/ +cd TTS +make system-deps # only on Linux systems. +make install +``` + +## On Windows +If you are on Windows, 👑@GuyPaddock wrote installation instructions [here](https://stackoverflow.com/questions/66726331/ \ No newline at end of file diff --git a/docs/source/make.bat b/docs/source/make.bat new file mode 100644 index 0000000000..922152e96a --- /dev/null +++ b/docs/source/make.bat @@ -0,0 +1,35 @@ +@ECHO OFF + +pushd %~dp0 + +REM Command file for Sphinx documentation + +if "%SPHINXBUILD%" == "" ( + set SPHINXBUILD=sphinx-build +) +set SOURCEDIR=. +set BUILDDIR=_build + +if "%1" == "" goto help + +%SPHINXBUILD% >NUL 2>NUL +if errorlevel 9009 ( + echo. + echo.The 'sphinx-build' command was not found. Make sure you have Sphinx + echo.installed, then set the SPHINXBUILD environment variable to point + echo.to the full path of the 'sphinx-build' executable. Alternatively you + echo.may add the Sphinx directory to PATH. + echo. + echo.If you don't have Sphinx installed, grab it from + echo.http://sphinx-doc.org/ + exit /b 1 +) + +%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% +goto end + +:help +%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% + +:end +popd diff --git a/docs/source/model_api.md b/docs/source/model_api.md new file mode 100644 index 0000000000..438901b7bd --- /dev/null +++ b/docs/source/model_api.md @@ -0,0 +1,24 @@ +# Model API +Model API provides you a set of functions that easily make your model compatible with the `Trainer`, +`Synthesizer` and `ModelZoo`. + +## Base TTS Model + +```{eval-rst} +.. autoclass:: TTS.model.BaseModel + :members: +``` + +## Base `tts` Model + +```{eval-rst} +.. autoclass:: TTS.tts.models.base_tts.BaseTTS + :members: +``` + +## Base `vocoder` Model + +```{eval-rst} +.. autoclass:: TTS.tts.models.base_vocoder.BaseVocoder` + :members: +``` \ No newline at end of file diff --git a/docs/source/readthedocs.yml b/docs/source/readthedocs.yml new file mode 100644 index 0000000000..59eed1f74e --- /dev/null +++ b/docs/source/readthedocs.yml @@ -0,0 +1,17 @@ +# .readthedocs.yml +# Read the Docs configuration file +# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details + +# Required +version: 2 + +# Build documentation in the docs/ directory with Sphinx +sphinx: + builder: html + configuration: docs/conf.py + +# Optionally set the version of Python and requirements required to build your docs +python: + version: 3.8 + install: + - requirements: doc/requirements.txt \ No newline at end of file diff --git a/docs/source/trainer_api.md b/docs/source/trainer_api.md new file mode 100644 index 0000000000..a5c3cfb750 --- /dev/null +++ b/docs/source/trainer_api.md @@ -0,0 +1,17 @@ +# Trainer API + +The {class}`TTS.trainer.Trainer` provides a lightweight, extensible, and feature-complete training run-time. We optimized it for 🐸 but +can also be used for any DL training in different domains. It supports distributed multi-gpu, mixed-precision (apex or torch.amp) training. + + +## Trainer +```{eval-rst} +.. autoclass:: TTS.trainer.Trainer + :members: +``` + +## TrainingArgs +```{eval-rst} +.. autoclass:: TTS.trainer.TrainingArgs + :members: +``` \ No newline at end of file diff --git a/docs/source/training_a_model.md b/docs/source/training_a_model.md new file mode 100644 index 0000000000..a7e81f2871 --- /dev/null +++ b/docs/source/training_a_model.md @@ -0,0 +1,165 @@ +# Training a Model + +1. Decide what model you want to use. 
+ + Each model has a different set of pros and cons that define the run-time efficiency and the voice quality. It is up to you to decide what model servers your needs. Other than referring to the papers, one easy way is to test the 🐸TTS + community models and see how fast and good each of the models. Or you can start a discussion on our communication channels. + +2. Understand the configuration class, its fields and values of your model. + + For instance, if you want to train a `Tacotron` model then see the `TacotronConfig` class and make sure you understand it. + +3. Go to the recipes and check the recipe of your target model. + + Recipes do not promise perfect models but they provide a good start point for `Nervous Beginners`. A recipe script training + a `GlowTTS` model on `LJSpeech` dataset looks like below. Let's be creative and call this script `train_glowtts.py`. + + ```python + # train_glowtts.py + + import os + + from TTS.tts.configs import GlowTTSConfig + from TTS.tts.configs import BaseDatasetConfig + from TTS.trainer import init_training, Trainer, TrainingArgs + + + output_path = os.path.dirname(os.path.abspath(__file__)) + dataset_config = BaseDatasetConfig(name="ljspeech", meta_file_train="metadata.csv", path=os.path.join(output_path, "../LJSpeech-1.1/")) + config = GlowTTSConfig( + batch_size=32, + eval_batch_size=16, + num_loader_workers=4, + num_eval_loader_workers=4, + run_eval=True, + test_delay_epochs=-1, + epochs=1000, + text_cleaner="english_cleaners", + use_phonemes=False, + phoneme_language="en-us", + phoneme_cache_path=os.path.join(output_path, "phoneme_cache"), + print_step=25, + print_eval=True, + mixed_precision=False, + output_path=output_path, + datasets=[dataset_config] + ) + args, config, output_path, _, c_logger, tb_logger = init_training(TrainingArgs(), config) + trainer = Trainer(args, config, output_path, c_logger, tb_logger) + trainer.fit() + ``` + + You need to change fields of the `BaseDatasetConfig` to match your own dataset and then update `GlowTTSConfig` + fields as you need. + + 4. Run the training. + + You need to call the python training script. + + ```bash + $ CUDA_VISIBLE_DEVICES="0" python train_glowtts.py + ``` + + Notice that you set the GPU you want to use on your system by setting `CUDA_VISIBLE_DEVICES` environment variable. + To see available GPUs on your system, you can use `nvidia-smi` command on the terminal. + + If you like to run a multi-gpu training + + ```bash + $ CUDA_VISIBLE_DEVICES="0, 1, 2" python TTS/bin/distribute.py --script /train_glowtts.py + ``` + + The example above runs a multi-gpu training using GPUs `0, 1, 2`. + + The beginning of a training run looks like below. + + ```console + > Experiment folder: /your/output_path/-Juni-23-2021_02+52-78899209 + > Using CUDA: True + > Number of GPUs: 1 + > Setting up Audio Processor... 
+ | > sample_rate:22050 + | > resample:False + | > num_mels:80 + | > min_level_db:-100 + | > frame_shift_ms:None + | > frame_length_ms:None + | > ref_level_db:20 + | > fft_size:1024 + | > power:1.5 + | > preemphasis:0.0 + | > griffin_lim_iters:60 + | > signal_norm:True + | > symmetric_norm:True + | > mel_fmin:0 + | > mel_fmax:None + | > spec_gain:20.0 + | > stft_pad_mode:reflect + | > max_norm:4.0 + | > clip_norm:True + | > do_trim_silence:True + | > trim_db:45 + | > do_sound_norm:False + | > stats_path:None + | > base:10 + | > hop_length:256 + | > win_length:1024 + | > Found 13100 files in /your/dataset/path/ljspeech/LJSpeech-1.1 + > Using model: glow_tts + + > Model has 28356129 parameters + + > EPOCH: 0/1000 + + > DataLoader initialization + | > Use phonemes: False + | > Number of instances : 12969 + | > Max length sequence: 187 + | > Min length sequence: 5 + | > Avg length sequence: 98.3403500655409 + | > Num. instances discarded by max-min (max=500, min=3) seq limits: 0 + | > Batch group size: 0. + + > TRAINING (2021-06-23 14:52:54) + + --> STEP: 0/405 -- GLOBAL_STEP: 0 + | > loss: 2.34670 + | > log_mle: 1.61872 + | > loss_dur: 0.72798 + | > align_error: 0.52744 + | > current_lr: 2.5e-07 + | > grad_norm: 5.036039352416992 + | > step_time: 5.8815 + | > loader_time: 0.0065 + ... + ``` + +5. Run the Tensorboard. + + ```bash + $ tensorboard --logdir= + ``` + +6. Check the logs and the Tensorboard and monitor the training. + + On the terminal and Tensorboard, you can monitor the losses and their changes over time. Also Tensorboard provides certain figures and sample outputs. + + Note that different models have different metrics, visuals and outputs to be displayed. + + You should also check the [FAQ page](https://github.com/coqui-ai/TTS/wiki/FAQ) for common problems and solutions + that occur in a training. + +7. Use your best model for inference. + + Use `tts` or `tts-server` commands for testing your models. + + ```bash + $ tts --text "Text for TTS" \ + --model_path path/to/checkpoint_x.pth.tar \ + --config_path path/to/config.json \ + --out_path folder/to/save/output.wav + ``` + +8. Return to the step 1 and reiterate for training a `vocoder` model. + + In the example above, we trained a `GlowTTS` model, but the same workflow applies to all the other 🐸TTS models. diff --git a/docs/source/tts_datasets.md b/docs/source/tts_datasets.md new file mode 100644 index 0000000000..6075bc95e7 --- /dev/null +++ b/docs/source/tts_datasets.md @@ -0,0 +1,16 @@ +# TTS Datasets + +Some of the known public datasets that we successfully applied 🐸TTS: + +- [English - LJ Speech](https://keithito.com/LJ-Speech-Dataset/) +- [English - Nancy](http://www.cstr.ed.ac.uk/projects/blizzard/2011/lessac_blizzard2011/) +- [English - TWEB](https://www.kaggle.com/bryanpark/the-world-english-bible-speech-dataset) +- [English - LibriTTS](https://openslr.org/60/) +- [English - VCTK](https://datashare.ed.ac.uk/handle/10283/2950) +- [Multilingual - M-AI-Labs](http://www.caito.de/2019/01/the-m-ailabs-speech-dataset/) +- [Spanish](https://drive.google.com/file/d/1Sm_zyBo67XHkiFhcRSQ4YaHPYM0slO_e/view?usp=sharing) - thx! @carlfm01 +- [German - Thorsten OGVD](https://github.com/thorstenMueller/deep-learning-german-tts) +- [Japanese - Kokoro](https://www.kaggle.com/kaiida/kokoro-speech-dataset-v11-small/version/1) +- [Chinese](https://www.data-baker.com/open_source.html) + +Let us know if you use 🐸TTS on a different dataset. 
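If you want to try 🐸TTS on one of the datasets above, LJSpeech for example, the sketch below shows how such a dataset is typically declared for training, reusing the `BaseDatasetConfig` fields from the recipes in the training pages. The local path is a placeholder; point it at wherever you extracted the data.

```python
import os

from TTS.tts.configs import BaseDatasetConfig  # dataset name, metadata file and root path

# Placeholder path; adjust it to your local copy of LJSpeech-1.1.
dataset_path = os.path.expanduser("~/datasets/LJSpeech-1.1/")

# "ljspeech" selects the built-in LJSpeech formatter, so no custom formatter is needed.
dataset_config = BaseDatasetConfig(
    name="ljspeech",
    meta_file_train="metadata.csv",
    path=dataset_path,
)

# `dataset_config` then goes into the `datasets` field of your model config,
# e.g. GlowTTSConfig(..., datasets=[dataset_config]).
```

The same pattern applies to the other datasets in the list, as long as a matching `formatter` exists or you write your own.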
\ No newline at end of file diff --git a/docs/source/tutorial_for_nervous_beginners.md b/docs/source/tutorial_for_nervous_beginners.md new file mode 100644 index 0000000000..015e178db6 --- /dev/null +++ b/docs/source/tutorial_for_nervous_beginners.md @@ -0,0 +1,175 @@ +# Tutorial For Nervous Beginners + +## Installation + +User friendly installation. Recommended only for synthesizing voice. + +```bash +$ pip install TTS +``` + +Developer friendly installation. + +```bash +$ git clone https://github.com/coqui-ai/TTS +$ cd TTS +$ pip install -e . +``` + +## Training a `tts` Model + +A breakdown of a simple script training a GlowTTS model on LJspeech dataset. See the comments for the explanation of +each line. + +### Pure Python Way + +```python +import os + +# GlowTTSConfig: all model related values for training, validating and testing. +from TTS.tts.configs import GlowTTSConfig + +# BaseDatasetConfig: defines name, formatter and path of the dataset. +from TTS.tts.configs import BaseDatasetConfig + +# init_training: Initialize and setup the training environment. +# Trainer: Where the ✨️ happens. +# TrainingArgs: Defines the set of arguments of the Trainer. +from TTS.trainer import init_training, Trainer, TrainingArgs + +# we use the same path as this script as our training folder. +output_path = os.path.dirname(os.path.abspath(__file__)) + +# set LJSpeech as our target dataset and define its path so that the Trainer knows what data formatter it needs. +dataset_config = BaseDatasetConfig(name="ljspeech", meta_file_train="metadata.csv", path=os.path.join(output_path, "../LJSpeech-1.1/")) + +# Configure the model. Every config class inherits the BaseTTSConfig to have all the fields defined for the Trainer. +config = GlowTTSConfig( + batch_size=32, + eval_batch_size=16, + num_loader_workers=4, + num_eval_loader_workers=4, + run_eval=True, + test_delay_epochs=-1, + epochs=1000, + text_cleaner="english_cleaners", + use_phonemes=False, + phoneme_language="en-us", + phoneme_cache_path=os.path.join(output_path, "phoneme_cache"), + print_step=25, + print_eval=True, + mixed_precision=False, + output_path=output_path, + datasets=[dataset_config] +) + +# Take the config and the default Trainer arguments, setup the training environment and override the existing +# config values from the terminal. So you can do the following. +# >>> python train.py --coqpit.batch_size 128 +args, config, output_path, _, _, _= init_training(TrainingArgs(), config) + +# Initiate the Trainer. +# Trainer provides a generic API to train all the 🐸TTS models with all its perks like mixed-precision training, +# distributed training etc. +trainer = Trainer(args, config, output_path) + +# And kick it 🚀 +trainer.fit() +``` + +### CLI Way + +We still support running training from CLI like in the old days. The same training can be started as follows. + +1. Define your `config.json` + + ```json + { + "model": "glow_tts", + "batch_size": 32, + "eval_batch_size": 16, + "num_loader_workers": 4, + "num_eval_loader_workers": 4, + "run_eval": true, + "test_delay_epochs": -1, + "epochs": 1000, + "text_cleaner": "english_cleaners", + "use_phonemes": false, + "phoneme_language": "en-us", + "phoneme_cache_path": "phoneme_cache", + "print_step": 25, + "print_eval": true, + "mixed_precision": false, + "output_path": "recipes/ljspeech/glow_tts/", + "datasets":[{"name": "ljspeech", "meta_file_train":"metadata.csv", "path": "recipes/ljspeech/LJSpeech-1.1/"}] + } + ``` + +2. Start training. 
+ ```bash + $ CUDA_VISIBLE_DEVICES="0" python TTS/bin/train_tts.py --config_path config.json + ``` + + + +## Training a `vocoder` Model + +```python +import os + +from TTS.vocoder.configs import HifiganConfig +from TTS.trainer import init_training, Trainer, TrainingArgs + + +output_path = os.path.dirname(os.path.abspath(__file__)) +config = HifiganConfig( + batch_size=32, + eval_batch_size=16, + num_loader_workers=4, + num_eval_loader_workers=4, + run_eval=True, + test_delay_epochs=-1, + epochs=1000, + seq_len=8192, + pad_short=2000, + use_noise_augment=True, + eval_split_size=10, + print_step=25, + print_eval=True, + mixed_precision=False, + lr_gen=1e-4, + lr_disc=1e-4, + # `vocoder` only needs a data path and they read recursively all the `.wav` files underneath. + data_path=os.path.join(output_path, "../LJSpeech-1.1/wavs/"), + output_path=output_path, +) +args, config, output_path, _, c_logger, tb_logger = init_training(TrainingArgs(), config) +trainer = Trainer(args, config, output_path, c_logger, tb_logger) +trainer.fit() +``` + +❗️ Note that you can also start the training run from CLI as the `tts` model above. + +## Synthesizing Speech + +You can run `tts` and synthesize speech directly on the terminal. + +```bash +$ tts -h # see the help +$ tts --list_models # list the available models. +``` + +![cli.gif](https://github.com/coqui-ai/TTS/raw/main/images/tts_cli.gif) + + +You can call `tts-server` to start a local demo server that you can open it on +your favorite web browser and 🗣️. + +```bash +$ tts-server -h # see the help +$ tts-server --list_models # list the available models. +``` +![server.gif](https://github.com/coqui-ai/TTS/raw/main/images/demo_server.gif) + + + diff --git a/docs/source/what_makes_a_good_dataset.md b/docs/source/what_makes_a_good_dataset.md new file mode 100644 index 0000000000..49a2943bad --- /dev/null +++ b/docs/source/what_makes_a_good_dataset.md @@ -0,0 +1,19 @@ +# What makes a good TTS dataset + +## What Makes a Good Dataset +* **Gaussian like distribution on clip and text lengths**. So plot the distribution of clip lengths and check if it covers enough short and long voice clips. +* **Mistake free**. Remove any wrong or broken files. Check annotations, compare transcript and audio length. +* **Noise free**. Background noise might lead your model to struggle, especially for a good alignment. Even if it learns the alignment, the final result is likely to be suboptimial. +* **Compatible tone and pitch among voice clips**. For instance, if you are using audiobook recordings for your project, it might have impersonations for different characters in the book. These differences between samples downgrade the model performance. +* **Good phoneme coverage**. Make sure that your dataset covers a good portion of the phonemes, di-phonemes, and in some languages tri-phonemes. +* **Naturalness of recordings**. For your model WISIAIL (What it sees is all it learns). Therefore, your dataset should accommodate all the attributes you want to hear from your model. + +## Preprocessing Dataset +If you like to use a bespoken dataset, you might like to perform a couple of quality checks before training. 🐸TTS provides a couple of notebooks (CheckSpectrograms, AnalyzeDataset) to expedite this part for you. + +* **AnalyzeDataset** is for checking dataset distribution in terms of the clip and transcript lengths. It is good to find outlier instances (too long, short text but long voice clip, etc.)and remove them before training. 
Keep in mind that we like to have a good balance between long and short clips to prevent any bias in training. If you have only short clips (1-3 secs), your model might struggle with long sentences, and if your instances are all long, it might not learn the alignment or might take too long to train.
+
+* **CheckSpectrograms** measures the noise level of the clips and helps you find good audio processing parameters. The noise level can be observed by checking the spectrograms. If the spectrograms look cluttered, especially in the silent parts, the dataset might not be a good candidate for a TTS project. If your voice clips are too noisy in the background, it is harder for your model to learn the alignment, and the final result might sound different from the voice you trained on.
+If the spectrograms look good, the next step is to find a good set of audio processing parameters, defined in ```config.json```. In the notebook, you can compare different sets of parameters and see the resynthesis results against the ground truth. Pick the parameters that give the best possible synthesis performance.
+
+Another practical detail is the quantization level of the clips. If your dataset has a very high bit rate, it might cause slow data loading and consequently slow training. It is better to reduce the sample rate of your dataset to around 16000-22050 Hz.
\ No newline at end of file
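As a quick illustration of that last point, a resampling pass over the clips can be scripted in a few lines. The sketch below is not part of 🐸TTS; it assumes `librosa` and `soundfile` are installed and writes the resampled copies to a separate folder so the originals are kept.

```python
import glob
import os

import librosa          # assumed installed; resamples the audio while loading
import soundfile as sf  # assumed installed; writes the resampled clips


def resample_wavs(wavs_dir, out_dir, target_sr=22050):
    """Illustrative sketch: write a copy of every clip at `target_sr` Hz."""
    os.makedirs(out_dir, exist_ok=True)
    for wav_path in glob.glob(os.path.join(wavs_dir, "*.wav")):
        audio, _ = librosa.load(wav_path, sr=target_sr)  # load and resample in one step
        sf.write(os.path.join(out_dir, os.path.basename(wav_path)), audio, target_sr)


# e.g. resample_wavs("MyTTSDataset/wavs", "MyTTSDataset/wavs_22050")
```

After resampling, remember to set `sample_rate` in your ```config.json``` to the same value.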