From ada00224c9259bfaaa304bb10d22f5394c2bab4a Mon Sep 17 00:00:00 2001
From: preethivenkatesh
Date: Tue, 28 May 2024 22:42:58 -0700
Subject: [PATCH] Updated ASR, TTS Readme (#106)

* Update README.md

* Update README.md

Signed-off-by: preethivenkatesh

* Update README.md

Signed-off-by: preethivenkatesh

* Update retrieve and reranking README.md (#101)

Signed-off-by: Wang, Xigui
Signed-off-by: preethivenkatesh

* update docker image name in readme (#99)

Signed-off-by: letonghan
Co-authored-by: Sihan Chen <39623753+Spycsh@users.noreply.github.com>
Signed-off-by: preethivenkatesh

---------

Signed-off-by: preethivenkatesh
Signed-off-by: Wang, Xigui
Signed-off-by: letonghan
Co-authored-by: xiguiw <111278656+xiguiw@users.noreply.github.com>
Co-authored-by: Letong Han <106566639+letonghan@users.noreply.github.com>
Co-authored-by: Sihan Chen <39623753+Spycsh@users.noreply.github.com>
---
 comps/asr/README.md | 8 ++++----
 comps/tts/README.md | 8 ++++----
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/comps/asr/README.md b/comps/asr/README.md
index 0636c2d5b..27f50f6f6 100644
--- a/comps/asr/README.md
+++ b/comps/asr/README.md
@@ -1,10 +1,10 @@
 # ASR Microservice

-ASR (Audio-Speech-Recognition) micro-service helps users convert speech to text. When building a talkingbot with LLM, users may need to convert their audio inputs (What they talk, or Input audio from other sources) to text, so LLM is able to tokenize the text and generate the answer. This microservice is built for that conversion stage.
+ASR (Audio-Speech-Recognition) microservice helps users convert speech to text. When building a talking bot with LLM, users will need to convert their audio inputs (What they talk, or Input audio from other sources) to text, so the LLM is able to tokenize the text and generate an answer. This microservice is built for that conversion stage.

 # 🚀Start Microservice with Python

-To start the ASR microservice with Python, you need to install python packages first.
+To start the ASR microservice with Python, you need to first install python packages.

 ## Install Requirements

@@ -20,7 +20,7 @@ python asr.py

 # 🚀Start Microservice with Docker

-The other way is to start the ASR microservice with Docker.
+Alternatively, you can also start the ASR microservice with Docker.

 ## Build Docker Image

@@ -37,7 +37,7 @@ docker run -p 9099:9099 --network=host --ipc=host -e http_proxy=$http_proxy -e h

 # Test

-You can use the following `curl` command to test whether the service is up. Notice that the first request can be slow because it need to pre-download the models.
+You can use the following `curl` command to test whether the service is up. Notice that the first request can be slow because it needs to download the models.

 ```bash
 curl http://localhost:9099/v1/audio/transcriptions -H "Content-Type: application/json" -d '{"url": "https://github.com/intel/intel-extension-for-transformers/raw/main/intel_extension_for_transformers/neural_chat/assets/audio/sample_2.wav"}'
diff --git a/comps/tts/README.md b/comps/tts/README.md
index a4e122344..34f8f2106 100644
--- a/comps/tts/README.md
+++ b/comps/tts/README.md
@@ -1,10 +1,10 @@
 # TTS Microservice

-TTS (Text-To-Speech) micro-service helps users convert text to speech. When building a talkingbot with LLM, users may need to get the LLM dgenerated answer in audio. This microservice is built for that conversion stage.
+TTS (Text-To-Speech) microservice helps users convert text to speech. When building a talking bot with LLM, users might need an LLM generated answer in audio format. This microservice is built for that conversion stage.

 # 🚀Start Microservice with Python

-To start the TTS microservice, you need to install python packages first.
+To start the TTS microservice, you need to first install python packages.

 ## Install Requirements

@@ -20,7 +20,7 @@ python tts.py

 # 🚀Start Microservice with Docker

-The other way is to start the ASR microservice with Docker.
+Alternatively, you can start the ASR microservice with Docker.

 ## Build Docker Image

@@ -37,7 +37,7 @@ docker run -p 9999:9999 --network=host --ipc=host -e http_proxy=$http_proxy -e h

 # Test

-You can use the following `curl` command to test whether the service is up. Notice that the first request can be slow because it need to pre-download the models.
+You can use the following `curl` command to test whether the service is up. Notice that the first request can be slow because it needs to download the models.

 ```bash
 curl http://localhost:9999/v1/audio/speech -H "Content-Type: application/json" -d '{"text":"Hello there."}'