feat: Add ONNX support for inference acceleration #276
Open

paulsunnypark wants to merge 1 commit into myshell-ai:main from
Conversation
This commit introduces ONNX-based inference capabilities to the project. Key changes include:

- Added `onnx` and `onnxruntime` to dependencies.
- Created an `export_onnx.py` script to convert PyTorch models to ONNX format. The script supports dynamic axes for variable sequence lengths.
- Modified `melo/api.py` by adding a `TTS_ONNX` class that uses `onnxruntime` for inference, mirroring the existing `TTS` class structure.
- Updated `melo/infer.py` to allow selection between PyTorch and ONNX models via CLI flags (`--use_onnx`, `--onnx_path`).
- Added `test/test_onnx_inference.py` to provide basic tests for the ONNX inference pipeline, including model export and audio generation.
- Updated `README.md` to document the new ONNX export and inference functionality, including installation, model conversion, and usage instructions.

Converting models to ONNX and running them with ONNX Runtime can potentially yield faster inference speeds.
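The flag handling described above for `melo/infer.py` could be wired up roughly as follows. This is a minimal sketch, not the PR's actual code: only `--use_onnx` and `--onnx_path` come from the PR description, while the `--text` flag, function names, and the validation logic are assumptions for illustration.

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Hypothetical sketch of the CLI described in the PR for melo/infer.py.
    parser = argparse.ArgumentParser(description="MeloTTS inference")
    parser.add_argument("--text", required=True, help="Text to synthesize")
    parser.add_argument("--use_onnx", action="store_true",
                        help="Run inference with ONNX Runtime instead of PyTorch")
    parser.add_argument("--onnx_path", default=None,
                        help="Path to the exported .onnx model (needed with --use_onnx)")
    return parser


def select_backend(args: argparse.Namespace) -> str:
    # In the PR this choice would presumably instantiate TTS_ONNX vs. TTS.
    if args.use_onnx:
        if not args.onnx_path:
            raise SystemExit("--onnx_path is required when --use_onnx is set")
        return "onnx"
    return "pytorch"
```

For example, `select_backend(build_parser().parse_args(["--text", "hi", "--use_onnx", "--onnx_path", "model.onnx"]))` would choose the ONNX path, while omitting `--use_onnx` falls back to PyTorch.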
`export_onnx.py` is not working; it raises an error:

```
Model loaded successfully.
Traceback (most recent call last):
  File "MeloTTS/melo/export_onnx.py", line 89, in <module>
    export_model_to_onnx(
  File "MeloTTS/melo/export_onnx.py", line 37, in export_model_to_onnx
    bert_feature_dim = hps.data.text_encoder.inter_channels  # Example, verify actual attribute
                       ^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'HParams' object has no attribute 'text_encoder'
```
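The traceback shows that `hps.data` has no `text_encoder` attribute, so the lookup path hard-coded in `export_model_to_onnx` does not match the actual `HParams` layout. One way to make the lookup robust is to probe several candidate paths with `getattr` and fall back to a default. This is a hypothetical sketch: the alternative attribute paths and the default value are guesses and should be verified against the real MeloTTS config; only the fallback pattern itself is the point.

```python
from types import SimpleNamespace


def resolve_bert_feature_dim(hps, default=1024):
    """Probe several plausible HParams locations for the BERT feature dim.

    The attribute paths below are guesses (only the first is the one the
    script actually tried); the goal is simply to avoid the AttributeError
    from the traceback rather than crash on a missing attribute.
    """
    candidates = [
        ("data", "text_encoder", "inter_channels"),  # path the script tried
        ("model", "inter_channels"),                 # hypothetical alternative
        ("data", "bert_feature_dim"),                # hypothetical alternative
    ]
    for path in candidates:
        obj = hps
        for attr in path:
            obj = getattr(obj, attr, None)
            if obj is None:
                break
        else:
            return obj  # every attribute on this path existed
    return default  # last-resort fallback; verify against the checkpoint


# Example: an HParams-like object missing data.text_encoder, as in the bug report
hps = SimpleNamespace(data=SimpleNamespace(),
                      model=SimpleNamespace(inter_channels=192))
print(resolve_bert_feature_dim(hps))  # → 192
```

With this shape, the script degrades gracefully instead of aborting, though the correct attribute for a given checkpoint still needs to be confirmed by inspecting its `config.json`.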