🐸 Coqui TTS is a library for advanced Text-to-Speech generation.
🚀 Pretrained models in 1100+ languages.
🛠️ Tools for training new models and fine-tuning existing models in any language.
📚 Utilities for dataset analysis and curation.
- Fork of the original, unmaintained repository. New PyPI package: coqui-tts
- 0.25.0: OpenVoice models now available for voice conversion.
- 0.24.2: Prebuilt wheels are now also published for Mac and Windows (in addition to Linux as before) for easier installation across platforms.
- 0.20.0: XTTSv2 is here with 17 languages and better performance across the board. XTTS can stream with <200ms latency.
- 0.19.0: XTTS fine-tuning code is out. Check the example recipes.
- 0.14.1: You can use Fairseq models in ~1100 languages with 🐸TTS.
Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly so that more people can benefit from it.
Type | Platforms |
---|---|
🚨 Bug Reports, Feature Requests & Ideas | GitHub Issue Tracker |
👩💻 Usage Questions | GitHub Discussions |
🗯 General Discussion | GitHub Discussions or Discord |
The issues and discussions in the original repository are also still a useful source of information.
Type | Links |
---|---|
💼 Documentation | ReadTheDocs |
💾 Installation | TTS/README.md |
👩💻 Contributing | CONTRIBUTING.md |
🚀 Released Models | Standard models and Fairseq models in ~1100 languages |
- High-performance text-to-speech and voice conversion models; see the list below.
- Fast and efficient model training with detailed training logs on the terminal and TensorBoard.
- Support for multi-speaker and multilingual TTS.
- Released and ready-to-use models.
- Tools to curate TTS datasets under `dataset_analysis/`.
- Command line and Python APIs to use and test your models.
- Modular (but not too much) code base enabling easy implementation of new ideas.
- Tacotron, Tacotron2
- Glow-TTS, SC-GlowTTS
- Speedy-Speech
- Align-TTS
- FastPitch
- FastSpeech, FastSpeech2
- Capacitron
- OverFlow
- Neural HMM TTS
- Delightful TTS
- Attention methods: Guided Attention, Forward Backward Decoding, Graves Attention, Double Decoder Consistency, Dynamic Convolutional Attention, Alignment Network
- Speaker encoders: GE2E, Angular Loss
You can also help us implement more models.
🐸TTS is tested on Ubuntu 24.04 with Python >= 3.9, < 3.13, but should also work on Mac and Windows.
If you are only interested in synthesizing speech with the pretrained 🐸TTS models, installing from PyPI is the easiest option.
pip install coqui-tts
If you plan to code or train models, clone 🐸TTS and install it locally.
git clone https://github.com/idiap/coqui-ai-TTS
cd coqui-ai-TTS
pip install -e .
The following extras allow the installation of optional dependencies:
Name | Description |
---|---|
`all` | All optional dependencies |
`notebooks` | Dependencies only used in notebooks |
`server` | Dependencies to run the TTS server |
`bn` | Bangla G2P |
`ja` | Japanese G2P |
`ko` | Korean G2P |
`zh` | Chinese G2P |
`languages` | All language-specific dependencies |
You can install extras with one of the following commands:
pip install coqui-tts[server,ja]
pip install -e .[server,ja]
If you are on Ubuntu (Debian), you can also run the following commands for installation.
make system-deps
make install
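After installing through any of these routes, a quick sanity check is to import the Python API and list the available models; this is the same TTS().list_models() call used in the Python usage examples below.
# Minimal post-install check: import the API and print the model registry.
from TTS.api import TTS
print(TTS().list_models())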
You can also try out Coqui TTS without installation with the docker image. Simply run the following command and you will be able to run TTS:
docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/coqui-ai/tts-cpu
python3 TTS/server/server.py --list_models  # To get the list of available models
python3 TTS/server/server.py --model_name tts_models/en/vctk/vits # To start a server
You can then enjoy the TTS server here. More details about the Docker images (like GPU support) can be found here.
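Once the container is running, you can also request synthesis from the server programmatically. A minimal sketch, assuming the demo server exposes an /api/tts endpoint that takes a text query parameter and returns WAV audio (check TTS/server/server.py for the exact parameters of your version):
# Query the demo server started above and save the returned WAV audio.
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({"text": "Hello from the TTS server!"})
with urllib.request.urlopen(f"http://localhost:5002/api/tts?{params}") as response:
    audio = response.read()
with open("server_output.wav", "wb") as f:
    f.write(audio)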
import torch
from TTS.api import TTS
# Get device
device = "cuda" if torch.cuda.is_available() else "cpu"
# List available 🐸TTS models
print(TTS().list_models())
# Initialize TTS
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)
# List speakers
print(tts.speakers)
# Run TTS
# ❗ XTTS supports both `speaker` and `speaker_wav`, but many other models
# accept only one of these arguments.
# TTS with a list of amplitude values as output; clone the voice from `speaker_wav`
wav = tts.tts(
text="Hello world!",
speaker_wav="my/cloning/audio.wav",
language="en"
)
# TTS to a file, use a preset speaker
tts.tts_to_file(
text="Hello world!",
speaker="Craig Gutsy",
language="en",
file_path="output.wav"
)
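# Optionally, save the raw amplitude list returned by tts.tts() yourself.
# This is a sketch assuming the third-party `soundfile` package is installed
# and that the loaded model's synthesizer exposes its output sample rate.
import numpy as np
import soundfile as sf
sf.write(
    "cloned_output.wav",
    np.asarray(wav, dtype=np.float32),
    tts.synthesizer.output_sample_rate,
)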
# Initialize TTS with the target model name
tts = TTS("tts_models/de/thorsten/tacotron2-DDC").to(device)
# Run TTS
tts.tts_to_file(text="Ich bin eine Testnachricht.", file_path="output.wav")
Converting the voice in `source_wav` to the voice of `target_wav`:
tts = TTS("voice_conversion_models/multilingual/vctk/freevc24").to("cuda")
tts.voice_conversion_to_file(
source_wav="my/source.wav",
target_wav="my/target.wav",
file_path="output.wav"
)
Other available voice conversion models:
voice_conversion_models/multilingual/multi-dataset/openvoice_v1
voice_conversion_models/multilingual/multi-dataset/openvoice_v2
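For example, OpenVoice v2 can be swapped into the same voice_conversion_to_file call shown above (a minimal sketch; the audio paths are placeholders):
from TTS.api import TTS

tts = TTS("voice_conversion_models/multilingual/multi-dataset/openvoice_v2").to("cuda")
tts.voice_conversion_to_file(
    source_wav="my/source.wav",
    target_wav="my/target.wav",
    file_path="openvoice_output.wav",
)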
This way, you can clone voices with any model in 🐸TTS: speech is synthesized with the chosen TTS model first, and the default FreeVC model then converts the output to the target voice.
tts = TTS("tts_models/de/thorsten/tacotron2-DDC")
tts.tts_with_vc_to_file(
"Wie sage ich auf Italienisch, dass ich dich liebe?",
speaker_wav="target/speaker.wav",
file_path="output.wav"
)
For Fairseq models, use the following name format: `tts_models/<lang-iso_code>/fairseq/vits`. You can find the language ISO codes here and learn about the Fairseq models here.
# TTS with fairseq models
api = TTS("tts_models/deu/fairseq/vits")
api.tts_to_file(
"Wie sage ich auf Italienisch, dass ich dich liebe?",
file_path="output.wav"
)
Synthesize speech on the command line.
You can either use your trained model or choose a model from the provided list.
- List provided models:
tts --list_models
- Get model information. Use the names obtained from --list_models:
tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
For example:
tts --model_info_by_name tts_models/tr/common-voice/glow-tts
tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2
- Run TTS with the default model (tts_models/en/ljspeech/tacotron2-DDC):
tts --text "Text for TTS" --out_path output/path/speech.wav
- Run TTS and pipe out the generated TTS wav file data:
tts --text "Text for TTS" --pipe_out --out_path output/path/speech.wav | aplay
- Run a TTS model with its default vocoder model:
tts --text "Text for TTS" \
    --model_name "<model_type>/<language>/<dataset>/<model_name>" \
    --out_path output/path/speech.wav
For example:
tts --text "Text for TTS" \
    --model_name "tts_models/en/ljspeech/glow-tts" \
    --out_path output/path/speech.wav
- Run with specific TTS and vocoder models from the list. Note that not every vocoder is compatible with every TTS model.
tts --text "Text for TTS" \
    --model_name "<model_type>/<language>/<dataset>/<model_name>" \
    --vocoder_name "<model_type>/<language>/<dataset>/<model_name>" \
    --out_path output/path/speech.wav
For example:
tts --text "Text for TTS" \
    --model_name "tts_models/en/ljspeech/glow-tts" \
    --vocoder_name "vocoder_models/en/ljspeech/univnet" \
    --out_path output/path/speech.wav
- Run your own TTS model (using Griffin-Lim Vocoder):
tts --text "Text for TTS" \
    --model_path path/to/model.pth \
    --config_path path/to/config.json \
    --out_path output/path/speech.wav
- Run your own TTS and Vocoder models:
tts --text "Text for TTS" \
    --model_path path/to/model.pth \
    --config_path path/to/config.json \
    --out_path output/path/speech.wav \
    --vocoder_path path/to/vocoder.pth \
    --vocoder_config_path path/to/vocoder_config.json
- List the available speakers and choose a <speaker_id> among them:
tts --model_name "<language>/<dataset>/<model_name>" --list_speaker_idxs
- Run the multi-speaker TTS model with the target speaker ID:
tts --text "Text for TTS." --out_path output/path/speech.wav \
    --model_name "<language>/<dataset>/<model_name>" --speaker_idx <speaker_id>
- Run your own multi-speaker TTS model:
tts --text "Text for TTS" --out_path output/path/speech.wav \
    --model_path path/to/model.pth --config_path path/to/config.json \
    --speakers_file_path path/to/speaker.json --speaker_idx <speaker_id>
- Run a released voice conversion model:
tts --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" \
    --source_wav <path/to/speaker/wav> --target_wav <path/to/reference/wav>