Commit
Rename models to follow icefall convention (#71)
* Rename to follow icefall convention

* Fix GitHub CI and docs

* Modify name in docs and README

* Rename wenet CI
ezerhouni authored Jul 22, 2022
1 parent 325a115 commit 0b5b475
Showing 27 changed files with 48 additions and 48 deletions.
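The 48 additions and 48 deletions below are almost entirely mechanical path updates that follow the directory rename. A rename of this kind can be sketched as a `mv` (in a real checkout, `git mv`) plus a recursive substitution over the tree. The script below is an illustrative sketch, not the command the authors actually ran: it builds a tiny throwaway `demo/` layout so it is self-contained, and it assumes GNU `sed` for in-place editing.

```shell
#!/usr/bin/env bash
# Sketch: rename a script directory and rewrite every stale reference to it.
set -euo pipefail

old=conformer_rnnt
new=pruned_transducer_statelessX

# Simulate a tiny repo layout so the sketch is self-contained.
mkdir -p demo/sherpa/bin/"$old"
printf '%s\n' "sherpa/bin/$old/offline_server.py \\" > demo/README.md

# Rename the directory (in a real checkout: git mv) ...
mv demo/sherpa/bin/"$old" demo/sherpa/bin/"$new"

# ... then rewrite every file that still mentions the old name.
grep -rl "$old" demo | while read -r f; do
  sed -i "s|$old|$new|g" "$f"   # GNU sed; BSD sed needs `sed -i ''`
done

grep "$new" demo/README.md       # prints the rewritten command line
```

Running `grep -r conformer_rnnt demo` afterwards finds nothing, which is exactly the invariant this commit's CI changes verify at a larger scale.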
4 changes: 2 additions & 2 deletions .github/workflows/run-streaming-conformer-test.yaml
@@ -118,7 +118,7 @@ jobs:
export PYTHONPATH=~/tmp/kaldifeat/kaldifeat/python:$PYTHONPATH
export PYTHONPATH=~/tmp/kaldifeat/build/lib:$PYTHONPATH
-./sherpa/bin/streaming_conformer_rnnt/streaming_server.py \
+./sherpa/bin/streaming_pruned_transducer_statelessX/streaming_server.py \
--port 6006 \
--max-batch-size 50 \
--max-wait-ms 5 \
@@ -133,7 +133,7 @@ jobs:
- name: Start client
shell: bash
run: |
-./sherpa/bin/streaming_conformer_rnnt/streaming_client.py \
+./sherpa/bin/streaming_pruned_transducer_statelessX/streaming_client.py \
--server-addr localhost \
--server-port 6006 \
./icefall-asr-librispeech-pruned-stateless-streaming-conformer-rnnt4-2022-06-10/test_wavs/1221-135766-0002.wav
4 changes: 2 additions & 2 deletions .github/workflows/run-streaming-conv-emformer-test.yaml
@@ -118,7 +118,7 @@ jobs:
export PYTHONPATH=~/tmp/kaldifeat/kaldifeat/python:$PYTHONPATH
export PYTHONPATH=~/tmp/kaldifeat/build/lib:$PYTHONPATH
-./sherpa/bin/conv_emformer_transducer_stateless/streaming_server.py \
+./sherpa/bin/conv_emformer_transducer_stateless2/streaming_server.py \
--port 6006 \
--max-batch-size 50 \
--max-wait-ms 5 \
@@ -133,7 +133,7 @@ jobs:
- name: Start client
shell: bash
run: |
-./sherpa/bin/conv_emformer_transducer_stateless/streaming_client.py \
+./sherpa/bin/conv_emformer_transducer_stateless2/streaming_client.py \
--server-addr localhost \
--server-port 6006 \
./icefall-asr-librispeech-conv-emformer-transducer-stateless2-2022-07-05/test_wavs/1221-135766-0002.wav
4 changes: 2 additions & 2 deletions .github/workflows/run-test-aishell.yaml
@@ -119,7 +119,7 @@ jobs:
export PYTHONPATH=~/tmp/kaldifeat/kaldifeat/python:$PYTHONPATH
export PYTHONPATH=~/tmp/kaldifeat/build/lib:$PYTHONPATH
-sherpa/bin/conformer_rnnt/offline_server.py \
+sherpa/bin/pruned_transducer_statelessX/offline_server.py \
--port 6006 \
--num-device 0 \
--max-batch-size 10 \
@@ -135,7 +135,7 @@ jobs:
- name: Start client
shell: bash
run: |
-sherpa/bin/conformer_rnnt/offline_client.py \
+sherpa/bin/pruned_transducer_statelessX/offline_client.py \
--server-addr localhost \
--server-port 6006 \
./icefall-aishell-pruned-transducer-stateless3-2022-06-20/test_wavs/BAC009S0764W0121.wav \
4 changes: 2 additions & 2 deletions .github/workflows/run-test-windows-cpu.yaml
@@ -84,7 +84,7 @@ jobs:
export PYTHONPATH=~/tmp/kaldifeat/kaldifeat/python:$PYTHONPATH
export PYTHONPATH=~/tmp/kaldifeat/build/lib:$PYTHONPATH
-sherpa/bin/conformer_rnnt/offline_server.py \
+sherpa/bin/pruned_transducer_statelessX/offline_server.py \
--port 6006 \
--num-device 0 \
--max-batch-size 10 \
@@ -99,7 +99,7 @@ jobs:
- name: Start client
shell: bash
run: |
-sherpa/bin/conformer_rnnt/offline_client.py \
+sherpa/bin/pruned_transducer_statelessX/offline_client.py \
--server-addr localhost \
--server-port 6006 \
icefall-asr-librispeech-pruned-transducer-stateless3-2022-05-13/test_wavs/1089-134686-0001.wav \
4 changes: 2 additions & 2 deletions .github/workflows/run-test.yaml
@@ -119,7 +119,7 @@ jobs:
export PYTHONPATH=~/tmp/kaldifeat/kaldifeat/python:$PYTHONPATH
export PYTHONPATH=~/tmp/kaldifeat/build/lib:$PYTHONPATH
-sherpa/bin/conformer_rnnt/offline_server.py \
+sherpa/bin/pruned_transducer_statelessX/offline_server.py \
--port 6006 \
--num-device 0 \
--max-batch-size 10 \
@@ -134,7 +134,7 @@ jobs:
- name: Start client
shell: bash
run: |
-sherpa/bin/conformer_rnnt/offline_client.py \
+sherpa/bin/pruned_transducer_statelessX/offline_client.py \
--server-addr localhost \
--server-port 6006 \
icefall-asr-librispeech-pruned-transducer-stateless3-2022-05-13/test_wavs/1089-134686-0001.wav \
@@ -107,7 +107,7 @@ jobs:
run: |
export PYTHONPATH=~/tmp/kaldifeat/kaldifeat/python:$PYTHONPATH
export PYTHONPATH=~/tmp/kaldifeat/build/lib:$PYTHONPATH
-./sherpa/bin/streaming_conformer_rnnt/streaming_server.py \
+./sherpa/bin/streaming_pruned_transducer_statelessX/streaming_server.py \
--port 6006 \
--max-batch-size 50 \
--max-wait-ms 5 \
@@ -120,7 +120,7 @@ jobs:
- name: Start client
shell: bash
run: |
-./sherpa/bin/streaming_conformer_rnnt/streaming_client.py \
+./sherpa/bin/streaming_pruned_transducer_statelessX/streaming_client.py \
--server-addr localhost \
--server-port 6006 \
./icefall_asr_wenetspeech_pruned_transducer_stateless5_streaming/test_wavs/DEV_T0000000001.wav
20 changes: 10 additions & 10 deletions README.md
@@ -184,7 +184,7 @@ following command:
# If you provide a bpe.model, e.g., for LibriSpeech,
# you can use the following command:
#
-sherpa/bin/conformer_rnnt/offline_server.py \
+sherpa/bin/pruned_transducer_statelessX/offline_server.py \
--port 6006 \
--num-device 1 \
--max-batch-size 10 \
@@ -200,7 +200,7 @@ sherpa/bin/conformer_rnnt/offline_server.py \
# If you provide a tokens.txt, e.g., for aishell,
# you can use the following command:
#
-sherpa/bin/conformer_rnnt/offline_server.py \
+sherpa/bin/pruned_transducer_statelessX/offline_server.py \
--port 6006 \
--num-device 1 \
--max-batch-size 10 \
@@ -212,7 +212,7 @@ sherpa/bin/conformer_rnnt/offline_server.py \
--token-filename ./path/to/data/lang_char/tokens.txt
```

-You can use `./sherpa/bin/conformer_rnnt/offline_server.py --help` to view the help message.
+You can use `./sherpa/bin/pruned_transducer_statelessX/offline_server.py --help` to view the help message.

**HINT**: If you don't have GPU, please set `--num-device` to `0`.

@@ -235,7 +235,7 @@ The following shows how to use the above pretrained models to start the server.
git lfs install
git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless3-2022-05-13

-sherpa/bin/conformer_rnnt/offline_server.py \
+sherpa/bin/pruned_transducer_statelessX/offline_server.py \
--port 6006 \
--num-device 1 \
--max-batch-size 10 \
@@ -253,7 +253,7 @@ sherpa/bin/conformer_rnnt/offline_server.py \
git lfs install
git clone https://huggingface.co/csukuangfj/icefall-aishell-pruned-transducer-stateless3-2022-06-20

-sherpa/bin/conformer_rnnt/offline_server.py \
+sherpa/bin/pruned_transducer_statelessX/offline_server.py \
--port 6006 \
--num-device 1 \
--max-batch-size 10 \
@@ -269,21 +269,21 @@ sherpa/bin/conformer_rnnt/offline_server.py \
After starting the server, you can use the following command to start the client:

```bash
-./sherpa/bin/conformer_rnnt/offline_client.py \
+./sherpa/bin/pruned_transducer_statelessX/offline_client.py \
--server-addr localhost \
--server-port 6006 \
/path/to/foo.wav \
/path/to/bar.wav
```

-You can use `./sherpa/bin/conformer_rnnt/offline_client.py --help` to view the usage message.
+You can use `./sherpa/bin/pruned_transducer_statelessX/offline_client.py --help` to view the usage message.

The following shows how to use the client to send some test waves to the server
for recognition.

```bash
# If you use the pretrained model from the LibriSpeech dataset
-sherpa/bin/conformer_rnnt/offline_client.py \
+sherpa/bin/pruned_transducer_statelessX/offline_client.py \
--server-addr localhost \
--server-port 6006 \
icefall-asr-librispeech-pruned-transducer-stateless3-2022-05-13//test_wavs/1089-134686-0001.wav \
@@ -293,7 +293,7 @@ sherpa/bin/conformer_rnnt/offline_client.py \

```bash
# If you use the pretrained model from the aishell dataset
-sherpa/bin/conformer_rnnt/offline_client.py \
+sherpa/bin/pruned_transducer_statelessX/offline_client.py \
--server-addr localhost \
--server-port 6006 \
./icefall-aishell-pruned-transducer-stateless3-2022-06-20/test_wavs/BAC009S0764W0121.wav \
@@ -303,7 +303,7 @@ sherpa/bin/conformer_rnnt/offline_client.py \

#### RTF test

-We provide a demo [./sherpa/bin/conformer_rnnt/decode_manifest.py](./sherpa/bin/conformer_rnnt/decode_manifest.py)
+We provide a demo [./sherpa/bin/pruned_transducer_statelessX/decode_manifest.py](./sherpa/bin/pruned_transducer_statelessX/decode_manifest.py)
to decode the `test-clean` dataset from the LibriSpeech corpus.

It creates 50 connections to the server using websockets and sends audio files
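The RTF test mentioned in the README hunk above reports a real-time factor: total processing time divided by total audio duration, where values below 1.0 mean the server decodes faster than real time. The arithmetic can be sketched as follows; the numbers are hypothetical placeholders, not measurements from this commit (roughly 5.4 hours of audio decoded in 27 minutes):

```shell
# Hypothetical figures: 19440 s (~5.4 h) of audio, 1620 s of processing.
audio_seconds=19440
processing_seconds=1620

# RTF = processing time / audio duration; < 1.0 is faster than real time.
awk -v p="$processing_seconds" -v a="$audio_seconds" \
    'BEGIN { printf "RTF = %.3f\n", p / a }'   # → RTF = 0.083
```

`decode_manifest.py` computes the same ratio over the whole `test-clean` set; opening many concurrent websocket connections keeps the server's batches full, which lowers the measured RTF.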
12 changes: 6 additions & 6 deletions docs/source/offline_asr/conformer/aishell.rst
@@ -80,7 +80,7 @@ Start the server
----------------

The entry point of the server is
-`sherpa/bin/conformer_rnnt/offline_server.py <https://github.com/k2-fsa/sherpa/blob/master/sherpa/bin/conformer_rnnt/offline_server.py>`_.
+`sherpa/bin/pruned_transducer_statelessX/offline_server.py <https://github.com/k2-fsa/sherpa/blob/master/sherpa/bin/pruned_transducer_statelessX/offline_server.py>`_.

One thing worth mentioning is that the entry point is a Python script.
In `sherpa`_, the server is implemented using `asyncio`_, where **IO-bound**
@@ -100,12 +100,12 @@ To view the usage information of the server, you can use:

.. code-block:: bash
-$ ./sherpa/bin/conformer_rnnt/offline_server.py --help
+$ ./sherpa/bin/pruned_transducer_statelessX/offline_server.py --help
which gives the following output:

.. literalinclude:: ./code/offline-server-help.txt
-:caption: Output of ``./sherpa/bin/conformer_rnnt/offline_server.py --help``
+:caption: Output of ``./sherpa/bin/pruned_transducer_statelessX/offline_server.py --help``

The following shows an example about how to use the above pre-trained model
to start the server:
@@ -128,16 +128,16 @@ Start the client
----------------

We also provide a Python script
-`sherpa/bin/conformer_rnnt/offline_client.py <https://github.com/k2-fsa/sherpa/blob/master/sherpa/bin/conformer_rnnt/offline_client.py>`_ for the client.
+`sherpa/bin/pruned_transducer_statelessX/offline_client.py <https://github.com/k2-fsa/sherpa/blob/master/sherpa/bin/pruned_transducer_statelessX/offline_client.py>`_ for the client.

.. code-block:: bash
-./sherpa/bin/conformer_rnnt/offline_client.py --help
+./sherpa/bin/pruned_transducer_statelessX/offline_client.py --help
shows the following help information:

.. literalinclude:: ./code/offline-client-help.txt
-:caption: Output of ``./sherpa/bin/conformer_rnnt/offline_client.py --help``
+:caption: Output of ``./sherpa/bin/pruned_transducer_statelessX/offline_client.py --help``

We provide some test waves in the git repo you just cloned. The following command
shows you how to start the client:
@@ -2,7 +2,7 @@

export CUDA_VISIBLE_DEVICES=0

-sherpa/bin/conformer_rnnt/offline_client.py \
+sherpa/bin/pruned_transducer_statelessX/offline_client.py \
--server-addr localhost \
--server-port 6010 \
./icefall-asr-librispeech-pruned-transducer-stateless3-2022-05-13/test_wavs/1089-134686-0001.wav \
2 changes: 1 addition & 1 deletion docs/source/offline_asr/conformer/code/start-the-client.sh
@@ -1,6 +1,6 @@
#!/usr/bin/env bash

-sherpa/bin/conformer_rnnt/offline_client.py \
+sherpa/bin/pruned_transducer_statelessX/offline_client.py \
--server-addr localhost \
--server-port 6010 \
./icefall-aishell-pruned-transducer-stateless3-2022-06-20/test_wavs/BAC009S0764W0121.wav \
@@ -5,7 +5,7 @@ export CUDA_VISIBLE_DEVICES=0
nn_model_filename=./icefall-asr-librispeech-pruned-transducer-stateless3-2022-05-13/exp/cpu_jit-torch-1.6.0.pt
bpe_model=./icefall-asr-librispeech-pruned-transducer-stateless3-2022-05-13/data/lang_bpe_500/bpe.model

-sherpa/bin/conformer_rnnt/offline_server.py \
+sherpa/bin/pruned_transducer_statelessX/offline_server.py \
--port 6010 \
--num-device 1 \
--max-batch-size 10 \
2 changes: 1 addition & 1 deletion docs/source/offline_asr/conformer/code/start-the-server.sh
@@ -5,7 +5,7 @@ export CUDA_VISIBLE_DEVICES=1
nn_model_filename=./icefall-aishell-pruned-transducer-stateless3-2022-06-20/exp/cpu_jit-epoch-29-avg-5-torch-1.6.0.pt
token_filename=./icefall-aishell-pruned-transducer-stateless3-2022-06-20/data/lang_char/tokens.txt

-sherpa/bin/conformer_rnnt/offline_server.py \
+sherpa/bin/pruned_transducer_statelessX/offline_server.py \
--port 6010 \
--num-device 1 \
--max-batch-size 10 \
12 changes: 6 additions & 6 deletions docs/source/offline_asr/conformer/librispeech.rst
@@ -81,7 +81,7 @@ Start the server
----------------

The entry point of the server is
-`sherpa/bin/conformer_rnnt/offline_server.py <https://github.com/k2-fsa/sherpa/blob/master/sherpa/bin/conformer_rnnt/offline_server.py>`_.
+`sherpa/bin/pruned_transducer_statelessX/offline_server.py <https://github.com/k2-fsa/sherpa/blob/master/sherpa/bin/pruned_transducer_statelessX/offline_server.py>`_.

One thing worth mentioning is that the entry point is a Python script.
In `sherpa`_, the server is implemented using `asyncio`_, where **IO-bound**
@@ -101,12 +101,12 @@ To view the usage information of the server, you can use:

.. code-block:: bash
-$ ./sherpa/bin/conformer_rnnt/offline_server.py --help
+$ ./sherpa/bin/pruned_transducer_statelessX/offline_server.py --help
which gives the following output:

.. literalinclude:: ./code/offline-server-help.txt
-:caption: Output of ``./sherpa/bin/conformer_rnnt/offline_server.py --help``
+:caption: Output of ``./sherpa/bin/pruned_transducer_statelessX/offline_server.py --help``

The following shows an example about how to use the above pre-trained model
to start the server:
@@ -129,16 +129,16 @@ Start the client
----------------

We also provide a Python script
-`sherpa/bin/conformer_rnnt/offline_client.py <https://github.com/k2-fsa/sherpa/blob/master/sherpa/bin/conformer_rnnt/offline_client.py>`_ for the client.
+`sherpa/bin/pruned_transducer_statelessX/offline_client.py <https://github.com/k2-fsa/sherpa/blob/master/sherpa/bin/pruned_transducer_statelessX/offline_client.py>`_ for the client.

.. code-block:: bash
-./sherpa/bin/conformer_rnnt/offline_client.py --help
+./sherpa/bin/pruned_transducer_statelessX/offline_client.py --help
shows the following help information:

.. literalinclude:: ./code/offline-client-help.txt
-:caption: Output of ``./sherpa/bin/conformer_rnnt/offline_client.py --help``
+:caption: Output of ``./sherpa/bin/pruned_transducer_statelessX/offline_client.py --help``

We provide some test waves in the git repo you just cloned. The following command
shows you how to start the client:
4 changes: 2 additions & 2 deletions docs/source/streaming_asr/conv_emformer/server.rst
@@ -19,7 +19,7 @@ Usage
.. code-block::
cd /path/to/sherpa
-./sherpa/bin/conv_emformer_transducer_stateless/streaming_server.py --help
+./sherpa/bin/conv_emformer_transducer_stateless2/streaming_server.py --help
shows the usage message.

@@ -51,7 +51,7 @@ The following shows you how to start the server with the above pretrained model.
git lfs install
git clone https://huggingface.co/Zengwei/icefall-asr-librispeech-conv-emformer-transducer-stateless2-2022-07-05
-./sherpa/bin/conv_emformer_transducer_stateless/streaming_server.py \
+./sherpa/bin/conv_emformer_transducer_stateless2/streaming_server.py \
--port 6007 \
--max-batch-size 50 \
--max-wait-ms 5 \
12 changes: 6 additions & 6 deletions sherpa/bin/README.md
@@ -8,9 +8,9 @@ where `X>=2`.

| Filename | Description |
|----------|-------------|
-| [conformer_rnnt/offline_server.py](./conformer_rnnt/offline_server.py) | The server for offline ASR |
-| [conformer_rnnt/offline_client.py](./conformer/offline_client.py) | The client for offline ASR |
-| [conformer_rnnt/decode_manifest.py](./conformer_rnnt/decode_manifest.py) | Demo for computing RTF and WER|
+| [pruned_transducer_statelessX/offline_server.py](./pruned_transducer_statelessX/offline_server.py) | The server for offline ASR |
+| [pruned_transducer_statelessX/offline_client.py](./pruned_transducer_statelessX/offline_client.py) | The client for offline ASR |
+| [pruned_transducer_statelessX/decode_manifest.py](./pruned_transducer_statelessX/decode_manifest.py) | Demo for computing RTF and WER|

If you want to test the offline server without training your own model, you
can download pretrained models on the LibriSpeech corpus by visiting
@@ -42,9 +42,9 @@ where `X>=2`. And the model is trained for streaming recognition.

| Filename | Description |
|----------|-------------|
-| [streaming_conformer_rnnt/streaming_server.py](./streaming_conformer_rnnt/streaming_server.py) | The server for streaming ASR |
-| [streaming_conformer_rnnt/streaming_client.py](./streaming_conformer_rnnt/streaming_client.py) | The client for streaming ASR |
-| [streaming_conformer_rnnt/decode.py](./streaming_conformer_rnnt/decode.py) | Utilities for streaming ASR|
+| [streaming_pruned_transducer_statelessX/streaming_server.py](./streaming_pruned_transducer_statelessX/streaming_server.py) | The server for streaming ASR |
+| [streaming_pruned_transducer_statelessX/streaming_client.py](./streaming_pruned_transducer_statelessX/streaming_client.py) | The client for streaming ASR |
+| [streaming_pruned_transducer_statelessX/decode.py](./streaming_pruned_transducer_statelessX/decode.py) | Utilities for streaming ASR|

You can use the pretrained model from
<https://huggingface.co/pkufool/icefall-asr-librispeech-pruned-stateless-streaming-conformer-rnnt4-2022-06-10>
File renamed without changes.
File renamed without changes.
@@ -57,7 +57,7 @@
wav2=./icefall-asr-librispeech-pruned-transducer-stateless3-2022-05-13/test_wavs/1221-135766-0001.wav
wav3=./icefall-asr-librispeech-pruned-transducer-stateless3-2022-05-13/test_wavs/1221-135766-0002.wav
-sherpa/bin/conformer_rnnt/offline_asr.py \
+sherpa/bin/pruned_transducer_statelessX/offline_asr.py \
--nn-model-filename $nn_model_filename \
--bpe-model $bpe_model \
$wav1 \
@@ -77,7 +77,7 @@
wav2=./icefall-aishell-pruned-transducer-stateless3-2022-06-20/test_wavs/BAC009S0764W0122.wav
wav3=./icefall-aishell-pruned-transducer-stateless3-2022-06-20/test_wavs/BAC009S0764W0123.wav
-sherpa/bin/conformer_rnnt/offline_asr.py \
+sherpa/bin/pruned_transducer_statelessX/offline_asr.py \
--nn-model-filename $nn_model_filename \
--token-filename $token_filename \
$wav1 \
File renamed without changes.
File renamed without changes.
