docs: streamline readme and reuse content in other docs pages
[ci skip]
eginhard committed Dec 12, 2024
1 parent ae2f8d2 commit e38dcbe
Showing 7 changed files with 235 additions and 397 deletions.
232 changes: 121 additions & 111 deletions README.md
@@ -1,39 +1,34 @@
# <img src="https://raw.githubusercontent.com/idiap/coqui-ai-TTS/main/images/coqui-log-green-TTS.png" height="56"/>

**🐸 Coqui TTS is a library for advanced Text-to-Speech generation.**

🚀 Pretrained models in +1100 languages.

🛠️ Tools for training new models and fine-tuning existing models in any language.

📚 Utilities for dataset analysis and curation.
______________________________________________________________________

[![Discord](https://img.shields.io/discord/1037326658807533628?color=%239B59B6&label=chat%20on%20discord)](https://discord.gg/5eXr5seRrv)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/coqui-tts)](https://pypi.org/project/coqui-tts/)
[![License](<https://img.shields.io/badge/License-MPL%202.0-brightgreen.svg>)](https://opensource.org/licenses/MPL-2.0)
[![PyPI version](https://badge.fury.io/py/coqui-tts.svg)](https://pypi.org/project/coqui-tts/)
[![Downloads](https://pepy.tech/badge/coqui-tts)](https://pepy.tech/project/coqui-tts)
[![DOI](https://zenodo.org/badge/265612440.svg)](https://zenodo.org/badge/latestdoi/265612440)

[![GithubActions](https://github.com/idiap/coqui-ai-TTS/actions/workflows/tests.yml/badge.svg)](https://github.com/idiap/coqui-ai-TTS/actions/workflows/tests.yml)
[![GithubActions](https://github.com/idiap/coqui-ai-TTS/actions/workflows/docker.yaml/badge.svg)](https://github.com/idiap/coqui-ai-TTS/actions/workflows/docker.yaml)
[![GithubActions](https://github.com/idiap/coqui-ai-TTS/actions/workflows/style_check.yml/badge.svg)](https://github.com/idiap/coqui-ai-TTS/actions/workflows/style_check.yml)
[![Docs](<https://readthedocs.org/projects/coqui-tts/badge/?version=latest&style=plastic>)](https://coqui-tts.readthedocs.io/en/latest/)

</div>

______________________________________________________________________
## 📣 News
- **Fork of the [original, unmaintained repository](https://github.com/coqui-ai/TTS). New PyPI package: [coqui-tts](https://pypi.org/project/coqui-tts)**
- 0.25.0: [OpenVoice](https://github.com/myshell-ai/OpenVoice) models now available for voice conversion.
- 0.24.2: Prebuilt wheels are now also published for Mac and Windows (in addition to Linux as before) for easier installation across platforms.
- 0.20.0: XTTSv2 is here with 17 languages and better performance across the board. XTTS can stream with <200ms latency.
- 0.19.0: XTTS fine-tuning code is out. Check the [example recipes](https://github.com/idiap/coqui-ai-TTS/tree/dev/recipes/ljspeech).
- 0.14.1: You can use [Fairseq models in ~1100 languages](https://github.com/facebookresearch/fairseq/tree/main/examples/mms) with 🐸TTS.

## 💬 Where to ask questions
Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly so that more people can benefit from it.
@@ -117,8 +112,10 @@ repository are also still a useful source of information.

You can also help us implement more models.

<!-- start installation -->
## Installation

🐸TTS is tested on Ubuntu 24.04 with **python >= 3.9, < 3.13**, but should also
work on Mac and Windows.

If you are only interested in [synthesizing speech](https://coqui-tts.readthedocs.io/en/latest/inference.html) with the pretrained 🐸TTS models, installing from PyPI is the easiest option.
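With the package name given above, this should be as simple as `pip install coqui-tts`.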
@@ -159,13 +156,15 @@ pip install -e .[server,ja]

### Platforms

If you are on Ubuntu (Debian), you can also run the following commands for installation.

```bash
make system-deps
make install
```

<!-- end installation -->

## Docker Image
You can also try out Coqui TTS without installation by using the Docker image.
Simply run the following command to be able to run TTS:
@@ -182,10 +181,10 @@ More details about the docker images (like GPU support) can be found


## Synthesizing speech with 🐸TTS

<!-- start inference -->
### 🐍 Python API

#### Multi-speaker and multi-lingual model

```python
import torch
@@ -197,47 +196,60 @@
from TTS.api import TTS  # needed below; the import line is elided in the diff view

device = "cuda" if torch.cuda.is_available() else "cpu"

# List available 🐸TTS models
print(TTS().list_models())

# Initialize TTS
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)

# List speakers
print(tts.speakers)

# ❗ XTTS supports both, but many models allow only one of the `speaker` and
# `speaker_wav` arguments

# TTS with list of amplitude values as output, clone the voice from `speaker_wav`
wav = tts.tts(
    text="Hello world!",
    speaker_wav="my/cloning/audio.wav",
    language="en"
)

# TTS to a file, use a preset speaker
tts.tts_to_file(
    text="Hello world!",
    speaker="Craig Gutsy",
    language="en",
    file_path="output.wav"
)
```

#### Single speaker model

```python
# Initialize TTS with the target model name
tts = TTS("tts_models/de/thorsten/tacotron2-DDC").to(device)

# Run TTS
tts.tts_to_file(text="Ich bin eine Testnachricht.", file_path=OUTPUT_PATH)

```

#### Voice conversion (VC)

Converting the voice in `source_wav` to the voice of `target_wav`:

```python
tts = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=False).to("cuda")
tts.voice_conversion_to_file(source_wav="my/source.wav", target_wav="my/target.wav", file_path="output.wav")
tts = TTS("voice_conversion_models/multilingual/vctk/freevc24").to("cuda")
tts.voice_conversion_to_file(
    source_wav="my/source.wav",
    target_wav="my/target.wav",
    file_path="output.wav"
)
```

Other available voice conversion models:
- `voice_conversion_models/multilingual/multi-dataset/openvoice_v1`
- `voice_conversion_models/multilingual/multi-dataset/openvoice_v2`
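
These models should plug into the same API; a minimal sketch, assuming the OpenVoice v2 model accepts the same `voice_conversion_to_file` arguments as the FreeVC example above:

```python
from TTS.api import TTS

# A sketch using one of the OpenVoice models listed above with the same
# VC API as the FreeVC example; the wav paths are placeholders.
tts = TTS("voice_conversion_models/multilingual/multi-dataset/openvoice_v2").to("cuda")
tts.voice_conversion_to_file(
    source_wav="my/source.wav",
    target_wav="my/target.wav",
    file_path="output.wav"
)
```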

#### Voice cloning by combining single speaker TTS model with the default VC model

This way, you can clone voices by using any model in 🐸TTS. The FreeVC model is
used for voice conversion after synthesizing speech.
@@ -252,7 +264,7 @@ tts.tts_with_vc_to_file(
)
```
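
The diff elides most of this example; a minimal sketch of the full call, assuming the same API as the earlier examples (model name, text, and wav paths are placeholders, not the original values):

```python
from TTS.api import TTS

# A sketch only: the original call is elided in the diff above, so the
# model name and arguments here are assumptions.
tts = TTS("tts_models/de/thorsten/tacotron2-DDC").to("cuda")
tts.tts_with_vc_to_file(
    "Ich bin eine Testnachricht.",
    speaker_wav="target/speaker.wav",  # voice to clone with the default VC model
    file_path="output.wav"
)
```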

#### TTS using Fairseq models in ~1100 languages 🤯
For Fairseq models, use the following name format: `tts_models/<lang-iso_code>/fairseq/vits`.
You can find the language ISO codes [here](https://dl.fbaipublicfiles.com/mms/tts/all-tts-languages.html)
and learn about the Fairseq models [here](https://github.com/facebookresearch/fairseq/tree/main/examples/mms).
@@ -266,128 +278,126 @@ api.tts_to_file(
)
```
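
The start of this example is likewise elided; a minimal sketch following the naming format above, where `deu` (German) is just an example ISO code and the text is a placeholder:

```python
from TTS.api import TTS

# A sketch: `deu` (German) is an example ISO 639-3 code; text and
# output path are placeholders.
api = TTS("tts_models/deu/fairseq/vits")
api.tts_to_file(
    "Dies ist ein Test.",
    file_path="output.wav"
)
```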

### Command-line interface `tts`

<!-- begin-tts-readme -->

Synthesize speech on the command line.

You can either use your trained model or choose a model from the provided list.



- List provided models:

```sh
tts --list_models
```

- Get model information. Use the names obtained from `--list_models`.
```sh
tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
```
For example:
```sh
tts --model_info_by_name tts_models/tr/common-voice/glow-tts
tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2
```

#### Single speaker models

- Run TTS with the default model (`tts_models/en/ljspeech/tacotron2-DDC`):

```sh
tts --text "Text for TTS" --out_path output/path/speech.wav
```

- Run TTS and pipe out the generated TTS wav file data:

```sh
tts --text "Text for TTS" --pipe_out --out_path output/path/speech.wav | aplay
```

- Run a TTS model with its default vocoder model:

```sh
tts --text "Text for TTS" \
--model_name "<model_type>/<language>/<dataset>/<model_name>" \
--out_path output/path/speech.wav
```

For example:

```sh
tts --text "Text for TTS" \
--model_name "tts_models/en/ljspeech/glow-tts" \
--out_path output/path/speech.wav
```

- Run with specific TTS and vocoder models from the list. Note that not every vocoder is compatible with every TTS model.

```sh
tts --text "Text for TTS" \
--model_name "<model_type>/<language>/<dataset>/<model_name>" \
--vocoder_name "<model_type>/<language>/<dataset>/<model_name>" \
--out_path output/path/speech.wav
```

For example:

```sh
tts --text "Text for TTS" \
--model_name "tts_models/en/ljspeech/glow-tts" \
--vocoder_name "vocoder_models/en/ljspeech/univnet" \
--out_path output/path/speech.wav
```

- Run your own TTS model (using Griffin-Lim Vocoder):

```sh
tts --text "Text for TTS" \
--model_path path/to/model.pth \
--config_path path/to/config.json \
--out_path output/path/speech.wav
```

- Run your own TTS and Vocoder models:

```sh
tts --text "Text for TTS" \
--model_path path/to/model.pth \
--config_path path/to/config.json \
--out_path output/path/speech.wav \
--vocoder_path path/to/vocoder.pth \
--vocoder_config_path path/to/vocoder_config.json
```

#### Multi-speaker models

- List the available speakers and choose a `<speaker_id>` among them:

```sh
tts --model_name "<language>/<dataset>/<model_name>" --list_speaker_idxs
```

- Run the multi-speaker TTS model with the target speaker ID:

```sh
tts --text "Text for TTS." --out_path output/path/speech.wav \
--model_name "<language>/<dataset>/<model_name>" --speaker_idx <speaker_id>
```

- Run your own multi-speaker TTS model:

```sh
tts --text "Text for TTS" --out_path output/path/speech.wav \
--model_path path/to/model.pth --config_path path/to/config.json \
--speakers_file_path path/to/speaker.json --speaker_idx <speaker_id>
```

#### Voice conversion models

```sh
tts --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" \
--source_wav <path/to/speaker/wav> --target_wav <path/to/reference/wav>
```

<!-- end-tts-readme -->