Consider adding support for DLA (Deep learning accelerator) modules #26
Comments
That sounds like a good idea! I will try.
@grimoire I think the DLA cores are only supported on the Jetson Xavier NX, the AGX series, and the more recent GPUs listed here. I have been playing around with it today by adding the following lines here
After doing this, I can see that some layers now run on the DLA, while others are incompatible. I also see a large number of warnings such as the one below. This is odd, since I am already using FP16 mode...
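One way to see up front which layers TensorRT considers DLA-capable is `IBuilderConfig.can_run_on_DLA`. A minimal sketch (the helper function and its name are ours, not part of torch2trt):

```python
# Sketch: partition a network's layers by DLA capability.
# Assumes `network` is a tensorrt.INetworkDefinition and
# `config` a tensorrt.IBuilderConfig from the same builder.

def split_dla_layers(network, config):
    """Return (dla_ok, gpu_only) lists of layer names."""
    dla_ok, gpu_only = [], []
    for i in range(network.num_layers):
        layer = network.get_layer(i)
        # can_run_on_DLA reports whether this layer can be placed on a DLA core
        (dla_ok if config.can_run_on_DLA(layer) else gpu_only).append(layer.name)
    return dla_ok, gpu_only
```

Layers in the `gpu_only` list are the ones that will trigger GPU fallback (and the associated warnings) during engine build.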
Unfortunately, the export process never finishes but crashes instead! I suspect this may be caused by the older JetPack version I am currently running, as I see that some DLA support was added in more recent JetPack versions.
I'll keep you updated with my progress! :)
Cool! I will try to find a device that supports DLA.
Some platforms, including the Jetson Xavier AGX and NX, support DLA modules. However, when converting a PyTorch module to TensorRT, torch2trt never attempts to use the DLA cores and always uses the GPU. This is apparent from the TensorRT log output:
The DLA cores must be enabled by changing the IBuilderConfig
https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/infer/Core/NetworkConfig.html#tensorrt.IBuilderConfig
A good example of how to do this is described by jkjung-avt in this issue:
jkjung-avt/tensorrt_demos#463
https://github.com/jkjung-avt/tensorrt_demos/blob/f53b5ae9b004489463a407d8e9b230f39230d051/yolo/onnx_to_tensorrt.py#L165-L170
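Following that example, enabling DLA boils down to a few `IBuilderConfig` settings. A minimal sketch, assuming the TensorRT 7+ Python API (the helper function is ours; `trt` is the `tensorrt` module passed in explicitly so the sketch stays self-contained):

```python
# Sketch: route layers to a DLA core, with GPU fallback for unsupported layers.

def enable_dla(config, trt, dla_core=0):
    """Configure a tensorrt.IBuilderConfig to prefer a DLA core.

    DLA only runs in FP16 or INT8 precision, so at least one of those
    flags must be set alongside the device type.
    """
    config.set_flag(trt.BuilderFlag.FP16)          # DLA requires FP16 or INT8
    config.set_flag(trt.BuilderFlag.GPU_FALLBACK)  # run DLA-incompatible layers on GPU
    config.default_device_type = trt.DeviceType.DLA
    config.DLA_core = dla_core                     # Xavier has two cores: 0 and 1
    return config
```

Without `GPU_FALLBACK`, the build fails as soon as it hits a layer the DLA cannot execute, so in practice both flags are set together.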