Commit

update: README.md
sid committed Feb 25, 2024
1 parent 9feeb7d commit 9b6446e
Showing 1 changed file with 3 additions and 2 deletions.

README.md
````diff
@@ -32,7 +32,7 @@ docker-compose up -d server && docker-compose ps && docker-compose logs -f
 ## Installation
 
 **Pre-requisites:**
-- GPU VRAM >=16GB
+- GPU VRAM >=12GB
 - Python >=3.10,<3.12
 
 **Environment setup**
@@ -61,7 +61,8 @@ python -i fam/llm/fast_inference.py
 # Run e.g. of API usage within the interactive python session
 tts.synthesise(text="This is a demo of text to speech by MetaVoice-1B, an open-source foundational audio model.", spk_ref_path="assets/bria.mp3")
 ```
-> Note: The script takes 30-90s to startup (depending on hardware). This is because we torch.compile the model for fast inference. Once compiled, the synthesise() API runs faster than real-time, with Real-Time Factor (RTF) < 1.0
+> Note: The script takes 30-90s to startup (depending on hardware). This is because we torch.compile the model for fast inference.
+> On Ampere, Ada-Lovelace, and Hopper architecture GPUs, once compiled, the synthesise() API runs faster than real-time, with a Real-Time Factor (RTF) < 1.0.
 2. Deploy it on any cloud (AWS/GCP/Azure), using our [inference server](serving.py) or [web UI](app.py)
````
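As context for the RTF wording in the changed note (this sketch is not part of the commit or the MetaVoice codebase): Real-Time Factor is wall-clock generation time divided by the duration of the audio produced, so RTF < 1.0 means synthesis runs faster than real-time. The `real_time_factor` helper below is a hypothetical illustration of that arithmetic.

```python
def real_time_factor(generation_seconds: float, audio_seconds: float) -> float:
    """RTF = wall-clock time spent generating / duration of audio produced.

    RTF < 1.0 means the model synthesises audio faster than real-time.
    """
    if audio_seconds <= 0:
        raise ValueError("audio_seconds must be positive")
    return generation_seconds / audio_seconds


# e.g. 4 s of wall-clock time to produce 10 s of audio:
print(real_time_factor(4.0, 10.0))  # 0.4 -> faster than real-time
```

In practice you would time the `synthesise()` call (after the one-off `torch.compile` warm-up) and divide by the length of the returned waveform.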

1 comment on commit 9b6446e

@Danyalkhan3847

Hello
