Notice: In order to resolve issues more efficiently, please raise the issue following the template.
❓ Questions and Help
Before asking:
search the issues.
search the docs.
What is your question?
I fine-tuned "speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch" and obtained a new model at "path/to/outputs", which passes offline file testing. I then planned to deploy it to the WebSocket service, but found that the script "run_server_2pass.sh" only supports ONNX models. Following the documentation, I exported the new model to ONNX, swapped it into the WebSocket service, and updated the model_dir path in run_server_2pass.sh, but it still did not work.
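For reference, the export step I ran was roughly the sketch below. This is an approximation from memory, not the exact documented command: the FunASR export entry point and its keyword arguments vary between versions, and the path is the same placeholder used above.

```python
# Rough sketch of the ONNX export step, assuming FunASR's AutoModel.export
# interface; the exact entry point and keyword arguments may differ between
# FunASR versions, so treat this as an approximation.
from funasr import AutoModel

# "path/to/outputs" is the fine-tuned checkpoint directory mentioned above.
model = AutoModel(model="path/to/outputs")

# Expected to write the ONNX graph into the model directory (assumption);
# quantize=True would additionally produce a quantized variant.
model.export(type="onnx", quantize=False)
```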
My questions are:
1. Can the WebSocket service use the fine-tuned model directly?
2. If the WebSocket service only accepts ONNX, how do I export the fine-tuned model to an ONNX format that the service supports? (I ran the ONNX export script from the documentation and it produced an ONNX file without errors, but the WebSocket service throws an exception when it reads that ONNX file at startup.)
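For debugging on my side, I plan to verify the exported file with a minimal onnxruntime check along these lines, to see whether the file itself is loadable or whether the failure is specific to the websocket runtime's expectations. The file name "model.onnx" under the output directory is an assumption about what the export produced.

```python
# Minimal loadability check with onnxruntime, independent of the FunASR
# websocket runtime; "path/to/outputs/model.onnx" is a placeholder.
import onnxruntime as ort

sess = ort.InferenceSession(
    "path/to/outputs/model.onnx",
    providers=["CPUExecutionProvider"],
)

# Dump the graph's declared inputs/outputs so they can be compared with what
# the websocket server expects to find.
for t in sess.get_inputs():
    print("input :", t.name, t.shape, t.type)
for t in sess.get_outputs():
    print("output:", t.name, t.shape, t.type)
```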
Code
What have you tried?
What's your environment?
How you installed funasr (pip, source):