
picodet inference speed is slow #4569

Open
Monday-Leo opened this issue Nov 12, 2021 · 5 comments
Labels
deploy (Deployment including inference, lite and serving), question (Further information is requested)

Comments

@Monday-Leo

After training picodet_l_640 I exported it with export_model.py; the model is about 13 MB. Running video inference with deploy/python/infer.py takes 61 ms per frame on a GTX 1650 discrete GPU with paddle 2.1.3, which is far slower than the officially reported speed. How can I speed this up?
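For reference, the export and inference steps were roughly the following sketch (config and weight paths are placeholders, and the flag names follow the 2.3-era PaddleDetection deploy docs, so they may differ in other versions):

```bash
# Export the trained PicoDet-L 640 model to an inference model
python tools/export_model.py \
    -c configs/picodet/picodet_l_640_coco.yml \
    -o weights=output/picodet_l_640_coco/best_model.pdparams \
    --output_dir=inference_model

# Run video inference with the Python deploy script on the GPU
python deploy/python/infer.py \
    --model_dir=inference_model/picodet_l_640_coco \
    --video_file=test.mp4 \
    --device=GPU
```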

@qingqing01
Collaborator

PicoDet is aimed primarily at mobile ARM CPUs. For GPU, consider models such as PP-YOLO with a MobileNetV3 backbone instead.
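For instance, a PP-YOLO MobileNetV3 model can be exported the same way as PicoDet; the config path below is an assumption based on the repo layout under configs/ppyolo/ and the weights path is a placeholder:

```bash
# Hypothetical example: export PP-YOLO with a MobileNetV3-large backbone for GPU inference
python tools/export_model.py \
    -c configs/ppyolo/ppyolo_mbv3_large_coco.yml \
    -o weights=output/ppyolo_mbv3_large_coco/best_model.pdparams \
    --output_dir=inference_model
```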

qingqing01 added the question and deploy labels on Nov 13, 2021
@Monday-Leo
Author

Would inference on an ARM CPU really be faster than on the GPU? Why is that?

@yghstill
Collaborator

@lhy823436493 Nvidia GPUs do not support 5x5 depth-wise convolutions very well. PicoDet was developed specifically for ARM CPUs; we will further optimize GPU-oriented models later.

@Monday-Leo
Author

Thank you!!

@Monday-Leo
Author

Will a TensorRT deployment solution for PicoDet be released later? I would like to deploy the model on a Jetson Nano. @yghstill
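In the meantime, a possible starting point on the Nano might be Paddle Inference's built-in TensorRT backend, assuming the --run_mode flag of deploy/python/infer.py behaves as in the 2.3-era docs; this also requires a Paddle build compiled with TensorRT support (e.g. the Jetson wheel):

```bash
# Sketch: run the exported model through Paddle Inference with a TensorRT FP16 engine
python deploy/python/infer.py \
    --model_dir=inference_model/picodet_l_640_coco \
    --video_file=test.mp4 \
    --device=GPU \
    --run_mode=trt_fp16
```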
