Qualcomm AI Engine Direct - add cli tool for QNN artifacts #4731
Conversation
Summary: CLI tool for deploying a precompiled model library / context binary onto the ExecuTorch runtime; refactor & minor fixes.
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/4731
Note: Links to docs will display an error until the docs builds have been completed. ✅ No failures as of commit 43371c1 with merge base 54f8932. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hi @cccclai, this PR adds a CLI tool to help convert pre-generated QNN artifacts (
LGTM. Just one question related to the AI Hub models.
@@ -0,0 +1,102 @@
# CLI Tool for Compile / Deploy Pre-Built QNN Artifacts

An easy-to-use tool for generating / executing a .pte program from pre-built model libraries / context binaries produced by Qualcomm AI Engine Direct. The tool is verified with the [host environment](../../../../docs/source/build-run-qualcomm-ai-engine-direct-backend.md#host-os).
Is it generic for all models from AI Hub?
Yes, artifacts from AI Hub related to QNN are delivered in `.so` format; only large generative AI models are shipped with context binaries. Both of them can be transformed into a `.pte` program with this tool.
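For illustration, the distinction drawn in the reply above can be sketched as a small dispatcher that decides how an artifact would be handled by its file extension. This is a hypothetical helper, not code from this PR; the `.bin` extension for context binaries is an assumption.

```python
# Hypothetical sketch: classify a pre-built QNN artifact by extension.
# ".so" model libraries come from regular AI Hub models; large generative
# AI models ship as context binaries (assumed here to use ".bin").
from pathlib import Path

def classify_artifact(path: str) -> str:
    """Return which kind of QNN artifact a converter would need to handle."""
    suffix = Path(path).suffix
    if suffix == ".so":
        return "model_library"    # regular AI Hub model library
    if suffix == ".bin":
        return "context_binary"   # large generative AI model
    raise ValueError(f"unsupported QNN artifact: {path}")

print(classify_artifact("mobilenet_v2.so"))  # → model_library
print(classify_artifact("llama2.bin"))       # → context_binary
```

Either kind would then be wrapped into a `.pte` program for the ExecuTorch runtime, as the PR summary describes.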