from funasr_onnx import Fsmn_vad
from pathlib import Path
model_dir = "damo/speech_fsmn_vad_zh-cn-16k-common-pytorch"
wav_path = '{}/.cache/modelscope/hub/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/example/vad_example.wav'.format(Path.home())
model = Fsmn_vad(model_dir)
result = model(wav_path)
print(result)

and

from funasr_onnx import Paraformer
from pathlib import Path
model_dir = r"C:.cache\modelscope\hub\damo\speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online-onnx"
model = Paraformer(model_dir, batch_size=1, quantize=True)
wav_path = ['{}/.cache/modelscope/hub/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/example/asr_example.wav'.format(Path.home())]
result = model(wav_path)
print(result)
How can these two be combined and applied to the same audio, so that the output contains both the recognized text and the endpoint detection results? Something similar to the effect of:
model = AutoModel(
    model="iic/SenseVoiceSmall",
    vad_model="iic/speech_fsmn_vad_zh-cn-16k-common-pytorch",
    vad_kwargs={"max_single_segment_time": 60000},
)