Google COLAB
erew123 edited this page Nov 25, 2024
This guide will help you set up and run AllTalk TTS on Google Colab without needing to install anything on your local machine. Please note that the Google Colab setup only downloads Piper as a base model; you will need to download other TTS engines' model files in the Gradio interface as needed.
Access the AllTalk TTS Colab Notebook
- Click on this link to open the notebook directly in Google Colab: AllTalk TTS Google Colab Notebook
Sign in to Google
- If you're not already signed in, you'll be prompted to sign in to your Google account.
- If you don't have a Google account, you'll need to create one to use Google Colab.
Save a Copy to Your Google Drive
- Once the notebook is open in Colab, go to "File" > "Save a copy in Drive".
- This creates a copy of the notebook in your Google Drive, allowing you to run and modify it.
Pick your "runtime" type (server type)
- On the free tier, pick a Python 3 runtime with a T4 GPU. CPU also works, but will be very slow with some TTS engines.
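Once the runtime is assigned, you can confirm a GPU was actually allocated by running a quick check in a notebook cell. This is a generic sketch, not part of the AllTalk notebook:

```python
import shutil
import subprocess

def gpu_available() -> bool:
    """Return True if NVIDIA driver tools are present and a GPU is attached.

    On a Colab CPU-only runtime, nvidia-smi is absent and this returns False.
    """
    if shutil.which("nvidia-smi") is None:
        return False
    # nvidia-smi exits non-zero when no GPU is actually attached.
    return subprocess.run(["nvidia-smi"], capture_output=True).returncode == 0

if __name__ == "__main__":
    print("GPU runtime detected" if gpu_available() else "CPU-only runtime")
```

If this reports a CPU-only runtime, change it via "Runtime" > "Change runtime type" before installing.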
Install Server Requirements
- In the Colab notebook, find the cell titled "Install Server Requirements".
- Read the instructions and options carefully.
- Run this cell by clicking the play button on the left or pressing Shift+Enter.
- This process will take 5-10 minutes to complete.
Start AllTalk TTS Server
- After the requirements are installed, find the cell titled "Start AllTalk TTS Server".
- Run this cell to start the AllTalk API and Gradio Web interface.
- The cell will output URLs for accessing the API and interface.
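If you are scripting against the server rather than clicking through the interface, it can help to wait until the printed API URL starts responding before sending requests. A minimal standard-library sketch; the URL below is a placeholder for whatever the cell actually prints:

```python
import time
import urllib.error
import urllib.request

def wait_for_server(url: str, timeout: float = 60.0, interval: float = 2.0) -> bool:
    """Poll `url` until it answers with any HTTP status, or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            urllib.request.urlopen(url, timeout=5)
            return True
        except urllib.error.HTTPError:
            # The server answered, even if with an error status code.
            return True
        except (urllib.error.URLError, OSError):
            time.sleep(interval)
    return False

# wait_for_server("https://<the-url-the-cell-printed>/")  # hypothetical placeholder
```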
Using the AllTalk Interface
- Use the provided URL to access the AllTalk Gradio interface.
- From here, you can download models, generate TTS, and configure settings.
API Usage
- The AllTalk API address provided can be used with external applications like Kobold, SillyTavern, or TGWUI's Remote extension for TTS generation.
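As a sketch of what an external application sends, here is a minimal request builder for AllTalk's `/api/tts-generate` endpoint using only the standard library. The field names follow the AllTalk API documentation at the time of writing, and the voice filename is an assumed example; check the repository wiki for the current schema before relying on them:

```python
import urllib.parse
import urllib.request

def build_tts_request(base_url: str, text: str, voice: str = "female_01.wav"):
    """Build a POST request for AllTalk's /api/tts-generate endpoint.

    `voice` is an assumed example filename; use a voice actually present
    in your AllTalk installation.
    """
    data = urllib.parse.urlencode({
        "text_input": text,
        "character_voice_gen": voice,
        "language": "en",
        "output_file_name": "colab_test",
    }).encode()
    return urllib.request.Request(
        f"{base_url}/api/tts-generate", data=data, method="POST"
    )

# req = build_tts_request("https://<api-url-from-the-cell>", "Hello from Colab")
# with urllib.request.urlopen(req) as resp:
#     print(resp.read())  # JSON describing the generated audio
```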
Optional: XTTS Model Finetuning
- If you want to finetune XTTS models, find and run the "Start XTTS model Finetuning" cell.
- This will provide a separate URL for the finetuning interface.
- The Colab runtime may disconnect after periods of inactivity. You may need to rerun cells if this happens.
- For long-running tasks, consider using Colab Pro or a local installation for more stability.
- Always check the AllTalk TTS GitHub repository for the most up-to-date version of the Colab notebook and any additional instructions.
- This notebook requires a GPU runtime. If you're not allocated a GPU, you may need to try again later or consider using Colab Pro for more consistent GPU access.
- For some reason, the first TTS generation in the Gradio interface will stutter during the first second of playback.
- There will be some warning messages as the server loads (about cuFFT/cuDNN/cuBLAS factory registration and TensorRT). These are ignorable and look like the following:
2024-11-25 01:49:46.199892: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-11-25 01:49:46.220859: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-11-25 01:49:46.226850: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-11-25 01:49:46.242405: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-11-25 01:49:47.930435: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT