RT-DETR TensorRT output bounding boxes are all 0 #8248
Comments
The problem was solved after using the `inputs_dict` part of the inference code from https://aistudio.baidu.com/aistudio/projectdetail/6000200.
I still don't understand why, though. Personally I think that code is wrong, and I'd appreciate it if someone more experienced could explain it. Going by https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/EXPORT_MODEL.md, there are a few points I don't understand:
My original code differs only in the two points above, but I don't know why that matters.
@lyuwenyu could you help explain this? Thanks.
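For reference, here is a minimal sketch of how a feed dict for a PaddleDetection-exported detector is typically assembled. The input names (`image`, `im_shape`, `scale_factor`), the fixed 640x640 input size, and the simplified normalization are assumptions based on the standard export config, not the code from the linked AI Studio project:

```python
import cv2
import numpy as np

# Assumed fixed export size for illustration; adjust to the actual exported model.
TARGET_SIZE = 640

def build_inputs(img_path: str) -> dict:
    img = cv2.imread(img_path)                      # HWC, BGR, uint8
    orig_h, orig_w = img.shape[:2]

    resized = cv2.resize(img, (TARGET_SIZE, TARGET_SIZE))
    scale_h = TARGET_SIZE / orig_h                  # resized / original, per axis
    scale_w = TARGET_SIZE / orig_w

    # BGR -> RGB, HWC -> CHW, add batch dim, scale to [0, 1]
    # (normalization is simplified here; match your export's preprocess ops).
    blob = resized[:, :, ::-1].transpose(2, 0, 1)[None].astype(np.float32) / 255.0

    return {
        "image": blob,
        # im_shape: the size actually fed to the network ([h, w] after resize).
        "im_shape": np.array([[TARGET_SIZE, TARGET_SIZE]], dtype=np.float32),
        # scale_factor: [scale_y, scale_x]; post-processing uses it to map the
        # boxes back onto the original image.
        "scale_factor": np.array([[scale_h, scale_w]], dtype=np.float32),
    }

inputs_dict = build_inputs("demo.jpg")  # feed this to ONNX Runtime / the TensorRT runner
```

Passing the original image size as `im_shape`, or leaving `scale_factor` at 1 while feeding a resized image, are the kind of mismatches that make the decoded boxes come out wrong.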
The logic for this part is in https://github.com/PaddlePaddle/PaddleDetection/blob/develop/ppdet/modeling/post_process.py#L507
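To make the connection concrete: the DETR-style post-processing fused into the exported model rescales the normalized box predictions back to the original image using `im_shape` and `scale_factor`. The snippet below is only a simplified NumPy paraphrase of that idea, not the actual `post_process.py` code:

```python
import numpy as np

def rescale_boxes(norm_cxcywh, im_shape, scale_factor):
    """Map normalized (cx, cy, w, h) predictions back to the original image.

    norm_cxcywh:  (N, 4) values in [0, 1]
    im_shape:     [h, w] fed to the network
    scale_factor: [scale_y, scale_x] = resized / original
    """
    origin_h = im_shape[0] / scale_factor[0]   # recover the original image size
    origin_w = im_shape[1] / scale_factor[1]

    cx, cy, w, h = norm_cxcywh.T
    x1 = (cx - w / 2) * origin_w
    y1 = (cy - h / 2) * origin_h
    x2 = (cx + w / 2) * origin_w
    y2 = (cy + h / 2) * origin_h
    return np.stack([x1, y1, x2, y2], axis=1)
```

In a formula of this shape, an all-zero `im_shape` collapses every box to 0, while an all-zero `scale_factor` divides by zero and blows the boxes up to inf, which would match the two symptoms reported in this thread.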
Yes, it was indeed these two parameters that were wrong. Thanks for the explanation.
Hi, I ran into the same problem (boxes all zero, while labels and scores are normal). After using @aliencaocao's inference code, the boxes became all inf...
问题确认 Search before asking
Bug组件 Bug Component
Inference, Export, Deploy
Bug描述 Describe the Bug
Following the commands in the tutorial, I converted RT-DETR-X to ONNX and then to TensorRT. The ONNX model works fine, and running it with the TensorRTExecutionProvider also works, but if I convert it to a TensorRT engine with trtexec, the class ids and confidences output by the model look normal (I'm not sure whether they are actually correct), while x1, y1, x2, y2 are all 0.000. This is reproducible on both Windows 10 with CUDA 11.8 + cuDNN 8.9 and WSL2 with CUDA 11.8 + cuDNN 8.9.
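One way to narrow down where the zeros first appear is to run the same dummy input through ONNX Runtime with the CPU provider and with the TensorRT provider and compare the raw outputs. A rough sketch, assuming the model filename and a fixed 640x640 input:

```python
import numpy as np
import onnxruntime as ort

# Assumed path and input names for illustration; adjust to your exported model.
MODEL = "rtdetr_hgnetv2_x_6x_coco.onnx"

feeds = {
    "image": np.random.rand(1, 3, 640, 640).astype(np.float32),
    "im_shape": np.array([[640.0, 640.0]], dtype=np.float32),
    "scale_factor": np.array([[1.0, 1.0]], dtype=np.float32),
}

cpu_sess = ort.InferenceSession(MODEL, providers=["CPUExecutionProvider"])
trt_sess = ort.InferenceSession(
    MODEL,
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"],
)

cpu_out = cpu_sess.run(None, feeds)
trt_out = trt_sess.run(None, feeds)

# Assumes both runs return tensors of the same shape for the same input.
for meta, a, b in zip(cpu_sess.get_outputs(), cpu_out, trt_out):
    print(meta.name, "max abs diff:", np.abs(a - b).max())
```

If the TensorRT provider already matches the CPU provider, the problem is more likely in how the standalone trtexec engine is fed or how its outputs are decoded than in the conversion itself.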
复现环境 Environment
Windows 10 and WSL2
CUDA 11.8
CUDNN 8.9
PaddlePaddle-gpu 2.4.2-post117
PaddleDetection develop
Reproducible on both Python 3.9.13 and 3.10.6
Bug描述确认 Bug description confirmation
是否愿意提交PR? Are you willing to submit a PR?