[YOLOv8] Add PaddleYOLOv8 models download links #1152

Merged (17 commits) on Jan 16, 2023
38 changes: 35 additions & 3 deletions benchmark/benchmark_ppdet.py
@@ -43,17 +43,22 @@ def parse_arguments():
parser.add_argument(
"--device",
default="cpu",
help="Type of inference device, support 'cpu' or 'gpu'.")
help="Type of inference device, support 'cpu', 'gpu', 'kunlunxin', 'ascend' etc.")
parser.add_argument(
"--backend",
type=str,
default="default",
help="inference backend, default, ort, ov, trt, paddle, paddle_trt.")
help="inference backend, default, ort, ov, trt, paddle, paddle_trt, lite.")
parser.add_argument(
"--enable_trt_fp16",
type=ast.literal_eval,
default=False,
help="whether enable fp16 in trt backend")
parser.add_argument(
"--enable_lite_fp16",
type=ast.literal_eval,
default=False,
help="whether enable fp16 in lite backend")
parser.add_argument(
"--enable_collect_memory_info",
type=ast.literal_eval,
@@ -68,6 +73,7 @@ def build_option(args):
device = args.device
backend = args.backend
enable_trt_fp16 = args.enable_trt_fp16
enable_lite_fp16 = args.enable_lite_fp16
option.set_cpu_thread_num(args.cpu_num_thread)
if device == "gpu":
option.use_gpu()
@@ -111,9 +117,35 @@ def build_option(args):
raise Exception(
"While inference with CPU, only support default/ort/ov/paddle now, {} is not supported.".
format(backend))
elif device == "kunlunxin":
option.use_kunlunxin()
if backend == "lite":
option.use_lite_backend()
elif backend == "ort":
option.use_ort_backend()
elif backend == "paddle":
option.use_paddle_backend()
elif backend == "default":
return option
else:
raise Exception(
"While inference with CPU, only support default/ort/lite/paddle now, {} is not supported.".
format(backend))
elif device == "ascend":
option.use_ascend()
if backend == "lite":
option.use_lite_backend()
if enable_lite_fp16:
option.enable_lite_fp16()
elif backend == "default":
return option
else:
raise Exception(
"While inference with CPU, only support default/lite now, {} is not supported.".
format(backend))
else:
raise Exception(
"Only support device CPU/GPU now, {} is not supported.".format(
"Only support device CPU/GPU/Kunlunxin/Ascend now, {} is not supported.".format(
device))

return option
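
For reference, the option-building path added above can also be exercised outside the benchmark script. The sketch below uses only the FastDeploy Python calls that appear in this diff (`use_ascend`, `use_lite_backend`, `enable_lite_fp16`) plus `fd.RuntimeOption`; it is a minimal illustration of how an Ascend + Paddle Lite configuration with FP16 would be assembled, not part of the PR itself.

```python
# Minimal sketch mirroring the new Ascend branch in build_option().
# Assumes the fastdeploy Python package is installed.
import fastdeploy as fd


def build_ascend_option(enable_lite_fp16=True):
    option = fd.RuntimeOption()
    option.use_ascend()            # select the Ascend NPU device
    option.use_lite_backend()      # Paddle Lite is the backend validated for Ascend
    if enable_lite_fp16:
        option.enable_lite_fp16()  # run the Lite backend in FP16
    return option


option = build_ascend_option()
```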
29 changes: 27 additions & 2 deletions examples/vision/detection/paddledetection/README.md
@@ -19,8 +19,18 @@ Now FastDeploy supports the deployment of the following models
- [SSD models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/ssd)
- [YOLOv5 models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov5)
- [YOLOv6 models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov6)
- [YOLOv7 models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov7)
- [YOLOv8 models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov8)
- [RTMDet models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/rtmdet)
- [CascadeRCNN models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/cascade_rcnn)
- [PSSDet models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/rcnn_enhance)
- [RetinaNet models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/retinanet)
- [PPYOLOESOD models](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/smalldet)
- [FCOS models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/fcos)
- [TTFNet models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/ttfnet)
- [TOOD models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/tood)
- [GFL models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/gfl)

## Export Deployment Model

@@ -58,7 +68,22 @@ The accuracy metric is from model descriptions in PaddleDetection. Refer to them
| [yolov6_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6_l_300e_coco.tgz) | 229M | Box AP 51.0%| |
| [yolov6_s_400e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6_s_400e_coco.tgz) | 68M | Box AP 43.4%| |
| [yolov7_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7_l_300e_coco.tgz) | 145M | Box AP 51.0%| |
| [yolov7_x_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7_x_300e_coco.tgz) | 277M | Box AP 53.0%| |
| [cascade_rcnn_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/cascade_rcnn_r50_fpn_1x_coco.tgz) | 271M | Box AP 41.1%| TensorRT/ORT not supported yet|
| [cascade_rcnn_r50_vd_fpn_ssld_2x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/cascade_rcnn_r50_vd_fpn_ssld_2x_coco.tgz) | 271M | Box AP 45.0%| TensorRT/ORT not supported yet|
| [faster_rcnn_enhance_3x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/faster_rcnn_enhance_3x_coco.tgz) | 119M | Box AP 41.5%| TensorRT/ORT not supported yet|
| [fcos_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/fcos_r50_fpn_1x_coco.tgz) | 129M | Box AP 39.6%| TensorRT not supported yet |
| [gfl_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/gfl_r50_fpn_1x_coco.tgz) | 128M | Box AP 41.0%| TensorRT not supported yet|
| [ppyoloe_crn_l_80e_sliced_visdrone_640_025](https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_80e_sliced_visdrone_640_025.tgz) | 200M | Box AP 31.9%| |
| [retinanet_r101_fpn_2x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/retinanet_r101_fpn_2x_coco.tgz) | 210M | Box AP 40.6%| TensorRT/ORT not supported yet|
| [retinanet_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/retinanet_r50_fpn_1x_coco.tgz) | 136M | Box AP 37.5%| TensorRT/ORT not supported yet|
| [tood_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/tood_r50_fpn_1x_coco.tgz) | 130M | Box AP 42.5%| TensorRT/ORT not supported yet|
| [ttfnet_darknet53_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ttfnet_darknet53_1x_coco.tgz) | 178M | Box AP 33.5%| TensorRT/ORT not supported yet|
| [yolov8_x_500e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8_x_500e_coco.tgz) | 265M | Box AP 53.8% | |
| [yolov8_l_500e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8_l_500e_coco.tgz) | 173M | Box AP 52.8% | |
| [yolov8_m_500e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8_m_500e_coco.tgz) | 99M | Box AP 50.2% | |
| [yolov8_s_500e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8_s_500e_coco.tgz) | 43M | Box AP 44.9% | |
| [yolov8_n_500e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8_n_500e_coco.tgz) | 13M | Box AP 37.3% | |
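
As a quick orientation before the detailed documents below, the following Python sketch shows one way the newly listed YOLOv8 packages could be loaded with FastDeploy. It assumes the archive unpacks to the usual PaddleDetection export layout (`model.pdmodel`, `model.pdiparams`, `infer_cfg.yml`) and that the `PaddleYOLOv8` class added in this PR is exposed in Python as `fd.vision.detection.PaddleYOLOv8`; treat it as an illustration rather than the official example.

```python
# Illustrative only: load a downloaded PaddleYOLOv8 export and run inference.
# The file names inside the .tgz and the Python class path are assumptions
# based on FastDeploy's usual PaddleDetection layout, not taken from this PR.
import cv2
import fastdeploy as fd

model_dir = "yolov8_s_500e_coco"   # extracted from the .tgz linked above
option = fd.RuntimeOption()
option.use_gpu()                   # or use_cpu()/use_ascend(), see the benchmark changes

model = fd.vision.detection.PaddleYOLOv8(
    model_file=f"{model_dir}/model.pdmodel",
    params_file=f"{model_dir}/model.pdiparams",
    config_file=f"{model_dir}/infer_cfg.yml",
    runtime_option=option)

im = cv2.imread("test.jpg")
result = model.predict(im)
print(result)
```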

## Detailed Deployment Documents

10 changes: 8 additions & 2 deletions examples/vision/detection/paddledetection/README_CN.md
@@ -19,7 +19,8 @@
- [SSD models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/ssd)
- [YOLOv5 models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov5)
- [YOLOv6 models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov6)
- [YOLOv7 models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov7)
- [YOLOv8 models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov8)
- [RTMDet models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/rtmdet)
- [CascadeRCNN models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/cascade_rcnn)
- [PSSDet models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/rcnn_enhance)
@@ -78,7 +79,12 @@
| [retinanet_r101_fpn_2x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/retinanet_r101_fpn_2x_coco.tgz) | 210M | Box AP 40.6%| TensorRT/ORT not supported yet |
| [retinanet_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/retinanet_r50_fpn_1x_coco.tgz) | 136M | Box AP 37.5%| TensorRT/ORT not supported yet |
| [tood_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/tood_r50_fpn_1x_coco.tgz) | 130M | Box AP 42.5%| TensorRT/ORT not supported yet |
| [ttfnet_darknet53_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ttfnet_darknet53_1x_coco.tgz) | 178M | Box AP 33.5%| TensorRT/ORT not supported yet |
| [yolov8_x_500e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8_x_500e_coco.tgz) | 265M | Box AP 53.8% | |
| [yolov8_l_500e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8_l_500e_coco.tgz) | 173M | Box AP 52.8% | |
| [yolov8_m_500e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8_m_500e_coco.tgz) | 99M | Box AP 50.2% | |
| [yolov8_s_500e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8_s_500e_coco.tgz) | 43M | Box AP 44.9% | |
| [yolov8_n_500e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8_n_500e_coco.tgz) | 13M | Box AP 37.3% | |

## Detailed Deployment Documents

@@ -32,6 +32,9 @@ def build_option(args):
if args.device.lower() == "kunlunxin":
option.use_kunlunxin()

if args.device.lower() == "ascend":
option.use_ascend()

if args.device.lower() == "gpu":
option.use_gpu()

1 change: 1 addition & 0 deletions fastdeploy/vision/detection/ppdet/model.h
@@ -256,6 +256,7 @@ class FASTDEPLOY_DECL PaddleYOLOv8 : public PPDetBase {
valid_cpu_backends = {Backend::OPENVINO, Backend::ORT, Backend::PDINFER, Backend::LITE};
valid_gpu_backends = {Backend::ORT, Backend::PDINFER, Backend::TRT};
valid_kunlunxin_backends = {Backend::LITE};
valid_ascend_backends = {Backend::LITE};
initialized = Initialize();
}
