TensorRT #92
@liuYYYYYYYYy You can use it with NVIDIA's related packages. I believe there are pip inference toolkits like …
Is there no code in this project that can run a TensorRT model?
@liuYYYYYYYYy Currently we only provide conversion code. However, we do have some verification code that does inference here. You could use it similarly to here. In the future, we might provide ONNX/TensorRT execution for full-dataset/custom-input inference or visualization.
Hi @voldemortX! Was the FPS of the models in the repository measured using TensorRT?
@Rsweater Not yet; lots of them have conversion issues, and I feel like it should be done on a Jetson, which I don't have.
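For anyone who does get an engine running and wants FPS numbers comparable to the ones in the repository, a simple timing harness is enough. This is a minimal sketch, not code from this project: `measure_fps` and `dummy_infer` are hypothetical names, and the dummy callable stands in for a real TensorRT or ONNX forward pass.

```python
import time

def measure_fps(infer, n_warmup=10, n_runs=100):
    """Time repeated calls to `infer` and return frames per second.

    `infer` is any zero-argument callable that runs one forward pass.
    Warm-up iterations are excluded so one-time setup cost (context
    creation, lazy allocation) is not counted against throughput.
    """
    for _ in range(n_warmup):
        infer()
    start = time.perf_counter()
    for _ in range(n_runs):
        infer()
    elapsed = time.perf_counter() - start
    return n_runs / elapsed

# Hypothetical stand-in for a real inference call; replace with a
# closure that executes your TensorRT engine on a fixed input.
def dummy_infer():
    time.sleep(0.001)  # pretend one forward pass takes ~1 ms

fps = measure_fps(dummy_infer)
print(f"{fps:.1f} FPS")
```

On a real engine, make sure the input tensor is already on the device before timing, so host-to-device copies are not mistaken for inference cost.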
Oh, I see. Thank you very much, senior. |
What should I do after generating the TensorRT model?
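One common way to exercise a generated engine, without writing any Python, is NVIDIA's `trtexec` tool that ships with TensorRT. The file names and the input name/shape below are placeholders; substitute the ones from your own export.

```shell
# Build a TensorRT engine from an exported ONNX model (placeholder paths).
trtexec --onnx=model.onnx --saveEngine=model.trt --fp16

# Run the serialized engine and report latency/throughput.
# --shapes is only needed when the model has dynamic input shapes.
trtexec --loadEngine=model.trt --shapes=input:1x3x288x800
```

For programmatic inference, the TensorRT Python API (deserialize the engine with a `Runtime`, then run an execution context) is the usual next step.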