Updated Guide: Real Time Speech Recognition #9349

Merged 6 commits (Sep 16, 2024)
7 changes: 6 additions & 1 deletion — demo/asr/run.py

```diff
@@ -6,14 +6,19 @@

 def transcribe(audio):
     sr, y = audio
+
+    # Convert to mono if stereo
+    if y.ndim > 1:
+        y = y.mean(axis=1)
+
     y = y.astype(np.float32)
     y /= np.max(np.abs(y))

     return transcriber({"sampling_rate": sr, "raw": y})["text"]  # type: ignore

 demo = gr.Interface(
     transcribe,
-    gr.Audio(sources=["microphone"]),
+    gr.Audio(sources="microphone"),
     "text",
 )
```
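The preprocessing added in this hunk converts stereo input to mono by averaging the channels, then peak-normalizes the signal to [-1, 1] as float32 before handing it to the pipeline. A minimal sketch of that logic in isolation (the `preprocess` helper name is mine, not part of the PR; note that the naive `np.max(np.abs(y))` normalization would divide by zero on pure silence, which neither this sketch nor the demo guards against):

```python
import numpy as np

def preprocess(y: np.ndarray) -> np.ndarray:
    """Mirror the preprocessing added in the diff: stereo -> mono, then peak-normalize."""
    # Convert to mono if stereo (average the channels)
    if y.ndim > 1:
        y = y.mean(axis=1)
    y = y.astype(np.float32)
    y /= np.max(np.abs(y))  # peak-normalize so the loudest sample is +/-1.0
    return y

# Two-channel input: three stereo frames
stereo = np.array([[0.5, -0.5], [1.0, 1.0], [-2.0, 0.0]])
mono = preprocess(stereo)
```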
5 changes: 5 additions & 0 deletions — demo/stream_asr/run.py

```diff
@@ -6,6 +6,11 @@

 def transcribe(stream, new_chunk):
     sr, y = new_chunk
+
+    # Convert to mono if stereo
+    if y.ndim > 1:
+        y = y.mean(axis=1)
+
     y = y.astype(np.float32)
     y /= np.max(np.abs(y))
```
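The stream_asr demo keeps the full audio history in Gradio state and re-transcribes the whole buffer on each new chunk. A sketch of that pattern with the ASR pipeline stubbed out (`transcriber` is passed as a parameter here purely for testability; in the actual demo it is a module-level `transformers` pipeline, which this hunk does not show):

```python
import numpy as np

def transcribe(stream, new_chunk, transcriber):
    # new_chunk arrives as a (sampling_rate, samples) tuple from a streaming gr.Audio input
    sr, y = new_chunk

    # Convert to mono if stereo
    if y.ndim > 1:
        y = y.mean(axis=1)

    y = y.astype(np.float32)
    y /= np.max(np.abs(y))

    # Append the new chunk onto the accumulated history (stream is None on the first call)
    stream = y if stream is None else np.concatenate([stream, y])

    # Naively re-run the model over the entire buffer each time
    return stream, transcriber({"sampling_rate": sr, "raw": stream})["text"]

# Stubbed pipeline: just reports how many samples it was given
fake_asr = lambda inputs: {"text": f"{len(inputs['raw'])} samples"}

state = None
state, text = transcribe(state, (16000, np.ones(4)), fake_asr)
state, text = transcribe(state, (16000, np.ones(4)), fake_asr)
```

After two 4-sample chunks, `state` holds 8 samples and the stub reflects that, matching the "append and re-transcribe everything" behavior described in the guide.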
8 changes: 3 additions & 5 deletions — guides/09_other-tutorials/real-time-speech-recognition.md

```diff
@@ -14,7 +14,7 @@ This tutorial will show how to take a pretrained speech-to-text model and deploy

 Make sure you have the `gradio` Python package already [installed](/getting_started). You will also need a pretrained speech recognition model. In this tutorial, we will build demos from 2 ASR libraries:

-- Transformers (for this, `pip install transformers` and `pip install torch`)
+- Transformers (for this, `pip install torch transformers torchaudio`)

 Make sure you have at least one of these installed so that you can follow along the tutorial. You will also need `ffmpeg` [installed on your system](https://www.ffmpeg.org/download.html), if you do not already have it, to process files from the microphone.
@@ -61,10 +61,8 @@ Take a look below.

 $code_stream_asr

-Notice now we have a state variable now, because we need to track all the audio history. `transcribe` gets called whenever there is a new small chunk of audio, but we also need to keep track of all the audio that has been spoken so far in state.
-As the interface runs, the `transcribe` function gets called, with a record of all the previously spoken audio in `stream`, as well as the new chunk of audio as `new_chunk`. We return the new full audio so that can be stored back in state, and we also return the transcription.
-Here we naively append the audio together and simply call the `transcriber` object on the entire audio. You can imagine more efficient ways of handling this, such as re-processing only the last 5 seconds of audio whenever a new chunk of audio received.
+Notice that we now have a state variable because we need to track all the audio history. `transcribe` gets called whenever there is a new small chunk of audio, but we also need to keep track of all the audio spoken so far in the state. As the interface runs, the `transcribe` function gets called, with a record of all the previously spoken audio in the `stream` and the new chunk of audio as `new_chunk`. We return the new full audio to be stored back in its current state, and we also return the transcription. Here, we naively append the audio together and call the `transcriber` object on the entire audio. You can imagine more efficient ways of handling this, such as re-processing only the last 5 seconds of audio whenever a new chunk of audio is received.

 $demo_stream_asr

-Now the ASR model will run inference as you speak!
\ No newline at end of file
+Now the ASR model will run inference as you speak!
```