
Commit 0859e9f

Remove Coqui Studio references
1 parent 934b87b commit 0859e9f

File tree

1 file changed: +0 -34 lines changed

README.md

@@ -7,11 +7,6 @@
 - 📣 [🐶Bark](https://github.com/suno-ai/bark) is now available for inference with unconstrained voice cloning. [Docs](https://tts.readthedocs.io/en/dev/models/bark.html)
 - 📣 You can use [~1100 Fairseq models](https://github.com/facebookresearch/fairseq/tree/main/examples/mms) with 🐸TTS.
 - 📣 🐸TTS now supports 🐢Tortoise with faster inference. [Docs](https://tts.readthedocs.io/en/dev/models/tortoise.html)
-- 📣 **Coqui Studio API** is landed on 🐸TTS. - [Example](https://github.com/coqui-ai/TTS/blob/dev/README.md#-python-api)
-- 📣 [**Coqui Studio API**](https://docs.coqui.ai/docs) is live.
-- 📣 Voice generation with prompts - **Prompt to Voice** - is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin)!! - [Blog Post](https://coqui.ai/blog/tts/prompt-to-voice)
-- 📣 Voice generation with fusion - **Voice fusion** - is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin).
-- 📣 Voice cloning is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin).
 
 <div align="center">
 <img src="https://static.scarf.sh/a.png?x-pxid=cf317fe7-2188-4721-bc01-124bb5d5dbb2" />
@@ -253,29 +248,6 @@ tts.tts_with_vc_to_file(
 )
 ```
 
-#### Example using [🐸Coqui Studio](https://coqui.ai) voices.
-You access all of your cloned voices and built-in speakers in [🐸Coqui Studio](https://coqui.ai).
-To do this, you'll need an API token, which you can obtain from the [account page](https://coqui.ai/account).
-After obtaining the API token, you'll need to configure the COQUI_STUDIO_TOKEN environment variable.
-
-Once you have a valid API token in place, the studio speakers will be displayed as distinct models within the list.
-These models will follow the naming convention `coqui_studio/en/<studio_speaker_name>/coqui_studio`
-
-```python
-# XTTS model
-models = TTS(cs_api_model="XTTS").list_models()
-# Init TTS with the target studio speaker
-tts = TTS(model_name="coqui_studio/en/Torcull Diarmuid/coqui_studio", progress_bar=False)
-# Run TTS
-tts.tts_to_file(text="This is a test.", language="en", file_path=OUTPUT_PATH)
-
-# V1 model
-models = TTS(cs_api_model="V1").list_models()
-# Run TTS with emotion and speed control
-# Emotion control only works with V1 model
-tts.tts_to_file(text="This is a test.", file_path=OUTPUT_PATH, emotion="Happy", speed=1.5)
-```
-
 #### Example text to speech using **Fairseq models in ~1100 languages** 🤯.
 For Fairseq models, use the following name format: `tts_models/<lang-iso_code>/fairseq/vits`.
 You can find the language ISO codes [here](https://dl.fbaipublicfiles.com/mms/tts/all-tts-languages.html)
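The Fairseq name format kept in the context lines of this hunk, `tts_models/<lang-iso_code>/fairseq/vits`, can be sketched as a small helper. This helper and the `ewe` language code are illustrative examples, not part of the commit or the TTS package:

```python
# Illustrative helper (not from this repo): build a model name following
# the `tts_models/<lang-iso_code>/fairseq/vits` convention the README keeps.
def fairseq_model_name(lang_iso: str) -> str:
    return f"tts_models/{lang_iso}/fairseq/vits"

# "ewe" is one of the ISO codes on the linked MMS language list.
print(fairseq_model_name("ewe"))  # tts_models/ewe/fairseq/vits
```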
@@ -351,12 +323,6 @@ If you don't specify any models, then it uses LJSpeech based English model.
 $ tts --text "Text for TTS" --pipe_out --out_path output/path/speech.wav | aplay
 ```
 
-- Run TTS and define speed factor to use for 🐸Coqui Studio models, between 0.0 and 2.0:
-
-```
-$ tts --text "Text for TTS" --model_name "coqui_studio/<language>/<dataset>/<model_name>" --speed 1.2 --out_path output/path/speech.wav
-```
-
 - Run a TTS model with its default vocoder model:
 
 ```

0 commit comments
