Commit c721773

[Model] Add python API for the detection result and modify YOLOv7 docs (PaddlePaddle#708)

* first commit for yolov7

* pybind for yolov7

* CPP README.md

* CPP README.md

* modified yolov7.cc

* README.md

* python file modify

* delete license in fastdeploy/

* repush the conflict part

* README.md modified

* README.md modified

* file path modified

* file path modified

* file path modified

* file path modified

* file path modified

* README modified

* README modified

* move some helpers to private

* add examples for yolov7

* api.md modified

* api.md modified

* api.md modified

* YOLOv7

* yolov7 release link

* yolov7 release link

* yolov7 release link

* copyright

* change some helpers to private

* change variables to const and fix documents.

* gitignore

* Transfer some functions to private members of class

* Transfer some functions to private members of class

* Merge from develop (#9)

* Fix compile problem in different python version (PaddlePaddle#26)

* fix some usage problem in linux

* Fix compile problem

Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>

* Add PaddleDetection/PPYOLOE model support (PaddlePaddle#22)

* add ppdet/ppyoloe

* Add demo code and documents

* add convert processor to vision (PaddlePaddle#27)

* update .gitignore

* Added checking for cmake include dir

* fixed missing trt_backend option bug when init from trt

* remove un-need data layout and add pre-check for dtype

* changed RGB2BRG to BGR2RGB in ppcls model

* add model_zoo yolov6 c++/python demo

* fixed CMakeLists.txt typos

* update yolov6 cpp/README.md

* add yolox c++/pybind and model_zoo demo

* move some helpers to private

* fixed CMakeLists.txt typos

* add normalize with alpha and beta

* add version notes for yolov5/yolov6/yolox

* add copyright to yolov5.cc

* revert normalize

* fixed some bugs in yolox

* fixed examples/CMakeLists.txt to avoid conflicts

* add convert processor to vision

* format examples/CMakeLists summary

* Fix bug while the inference result is empty with YOLOv5 (PaddlePaddle#29)

* Add multi-label function for yolov5

* Update README.md

Update doc

* Update fastdeploy_runtime.cc

fix variable option.trt_max_shape wrong name

* Update runtime_option.md

Update resnet model dynamic shape setting name from images to x

* Fix bug when inference result boxes are empty

* Delete detection.py

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>

* first commit for yolor

* for merge

* Develop (PaddlePaddle#11)


* Yolor (PaddlePaddle#16)

* Develop (PaddlePaddle#11) (PaddlePaddle#12)


* Develop (PaddlePaddle#13)


* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* Develop (PaddlePaddle#14)


* add is_dynamic for YOLO series (PaddlePaddle#22)

* modify ppmatting backend and docs

* modify ppmatting docs

* fix the PPMatting size problem

* fix LimitShort's log

* retrigger ci

* modify PPMatting docs

* modify the way of dealing with LimitShort

* add python comments for external models

* modify resnet c++ comments

* modify C++ comments for external models

* modify python comments and add result class comments

* fix comments compile error

* modify result.h comments

* python API for detection result

* modify yolov7 docs

* modify python detection api

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>
Co-authored-by: Jason <928090362@qq.com>
6 people committed Nov 28, 2022
1 parent d8d030b commit c721773
Showing 5 changed files with 118 additions and 30 deletions.
80 changes: 80 additions & 0 deletions examples/vision/detection/paddledetection/coco_label_list.txt
@@ -0,0 +1,80 @@
person
bicycle
car
motorcycle
airplane
bus
train
truck
boat
traffic light
fire hydrant
stop sign
parking meter
bench
bird
cat
dog
horse
sheep
cow
elephant
bear
zebra
giraffe
backpack
umbrella
handbag
tie
suitcase
frisbee
skis
snowboard
sports ball
kite
baseball bat
baseball glove
skateboard
surfboard
tennis racket
bottle
wine glass
cup
fork
knife
spoon
bowl
banana
apple
sandwich
orange
broccoli
carrot
hot dog
pizza
donut
cake
chair
couch
potted plant
bed
dining table
toilet
tv
laptop
mouse
remote
keyboard
cell phone
microwave
oven
toaster
sink
refrigerator
book
clock
vase
scissors
teddy bear
hair drier
toothbrush
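
The file above simply lists the 80 COCO class names, one per line, in index order, presumably to be passed as the `labels` argument that this commit adds to `fastdeploy.vision.vis_detection`. As a rough illustration (not part of this commit), it can be read into the plain list of strings that argument expects:

```python
# Illustrative sketch: load the COCO class names shipped with this commit into
# the list form expected by fastdeploy.vision.vis_detection(..., labels=...).
label_path = "examples/vision/detection/paddledetection/coco_label_list.txt"
with open(label_path) as f:
    coco_labels = [line.strip() for line in f if line.strip()]

assert len(coco_labels) == 80  # "person" ... "toothbrush"
```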
4 changes: 2 additions & 2 deletions examples/vision/detection/yolov7/README.md
@@ -18,8 +18,8 @@ wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt
# Export the ONNX model file (Tips: corresponds to the YOLOv7 release v0.1 code)
python models/export.py --grid --dynamic --weights PATH/TO/yolov7.pt

# If your code version supports exporting ONNX files with NMS, use the following command to export the ONNX file (do not use "--end2end" for now; deployment of ONNX models with NMS will be supported later)
python models/export.py --grid --dynamic --weights PATH/TO/yolov7.pt
# If your code version supports exporting ONNX files with NMS, use the following command to export the ONNX file, and refer to the `yolov7end2end_ort` or `yolov7end2end_trt` examples
python models/export.py --grid --dynamic --end2end --weights PATH/TO/yolov7.pt


```
6 changes: 3 additions & 3 deletions examples/vision/detection/yolov7/README_EN.md
@@ -3,7 +3,7 @@
# YOLOv7 Prepare the model for Deployment

- YOLOv7 deployment is based on [YOLOv7](https://github.com/WongKinYiu/yolov7/tree/v0.1) branching code, and [COCO Pre-Trained Models](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1).

- (1)The *.pt provided by the [Official Library](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) can be deployed after the [export ONNX model](#export ONNX model) operation; *.trt and *.pose models do not support deployment.
- (2)As for YOLOv7 model trained on customized data, please follow the operations guidelines in [Export ONNX model](#Export-ONNX-Model) and then refer to [Detailed Deployment Tutorials](#Detailed-Deployment-Tutorials) to complete the deployment.

@@ -16,8 +16,8 @@ wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt
# Export onnx file (Tips: in accordance with YOLOv7 release v0.1 code)
python models/export.py --grid --dynamic --weights PATH/TO/yolov7.pt

# If your code supports exporting ONNX files with NMS, please use the following command to export ONNX files (do not use "--end2end" for now. We will support deployment of ONNX models with NMS in the future)
python models/export.py --grid --dynamic --weights PATH/TO/yolov7.pt
# If your code supports exporting ONNX files with NMS, please use the following command to export ONNX files, then refer to the `yolov7end2end_ort` or `yolov7end2end_trt` examples
python models/export.py --grid --dynamic --end2end --weights PATH/TO/yolov7.pt
```

## Download the pre-trained ONNX model
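For orientation, the exported ONNX file is then consumed through FastDeploy's Python detection API. The sketch below is illustrative and follows the repository's existing YOLOv7 Python example; the class name `fd.vision.detection.YOLOv7` and the file paths are assumptions rather than part of this diff, while the `DetectionResult` members (`boxes`, `scores`, `label_ids`) are the ones whose Python API this commit documents.

```python
import cv2
import fastdeploy as fd

# Assumed to match the repository's YOLOv7 Python example; paths are placeholders.
model = fd.vision.detection.YOLOv7("yolov7.onnx")

im = cv2.imread("test.jpg")
result = model.predict(im)

# DetectionResult exposes one entry per detected object.
for box, score, label_id in zip(result.boxes, result.scores, result.label_ids):
    print(f"class {label_id}: score={score:.3f}, box={box}")
```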
55 changes: 31 additions & 24 deletions fastdeploy/vision/visualize/visualize_pybind.cc
100755 → 100644
@@ -18,10 +18,17 @@ namespace fastdeploy {
void BindVisualize(pybind11::module& m) {
m.def("vis_detection",
[](pybind11::array& im_data, vision::DetectionResult& result,
float score_threshold, int line_size, float font_size) {
std::vector<std::string>& labels, float score_threshold,
int line_size, float font_size) {
auto im = PyArrayToCvMat(im_data);
auto vis_im = vision::VisDetection(im, result, score_threshold,
line_size, font_size);
cv::Mat vis_im;
if (labels.empty()) {
vis_im = vision::VisDetection(im, result, score_threshold,
line_size, font_size);
} else {
vis_im = vision::VisDetection(im, result, labels, score_threshold,
line_size, font_size);
}
FDTensor out;
vision::Mat(vis_im).ShareWithTensor(&out);
return TensorToPyArray(out);
@@ -40,8 +47,7 @@ void BindVisualize(pybind11::module& m) {
[](pybind11::array& im_data, vision::FaceAlignmentResult& result,
int line_size) {
auto im = PyArrayToCvMat(im_data);
auto vis_im =
vision::VisFaceAlignment(im, result, line_size);
auto vis_im = vision::VisFaceAlignment(im, result, line_size);
FDTensor out;
vision::Mat(vis_im).ShareWithTensor(&out);
return TensorToPyArray(out);
@@ -86,12 +92,13 @@ void BindVisualize(pybind11::module& m) {
return TensorToPyArray(out);
})
.def("vis_mot",
[](pybind11::array& im_data, vision::MOTResult& result,float score_threshold, vision::tracking::TrailRecorder record) {
auto im = PyArrayToCvMat(im_data);
auto vis_im = vision::VisMOT(im, result, score_threshold, &record);
FDTensor out;
vision::Mat(vis_im).ShareWithTensor(&out);
return TensorToPyArray(out);
[](pybind11::array& im_data, vision::MOTResult& result,
float score_threshold, vision::tracking::TrailRecorder record) {
auto im = PyArrayToCvMat(im_data);
auto vis_im = vision::VisMOT(im, result, score_threshold, &record);
FDTensor out;
vision::Mat(vis_im).ShareWithTensor(&out);
return TensorToPyArray(out);
})
.def("vis_matting",
[](pybind11::array& im_data, vision::MattingResult& result,
@@ -107,8 +114,7 @@ void BindVisualize(pybind11::module& m) {
[](pybind11::array& im_data, vision::HeadPoseResult& result,
int size, int line_size) {
auto im = PyArrayToCvMat(im_data);
auto vis_im =
vision::VisHeadPose(im, result, size, line_size);
auto vis_im = vision::VisHeadPose(im, result, size, line_size);
FDTensor out;
vision::Mat(vis_im).ShareWithTensor(&out);
return TensorToPyArray(out);
@@ -131,8 +137,8 @@ void BindVisualize(pybind11::module& m) {
[](pybind11::array& im_data, vision::KeyPointDetectionResult& result,
float conf_threshold) {
auto im = PyArrayToCvMat(im_data);
auto vis_im = vision::VisKeypointDetection(
im, result, conf_threshold);
auto vis_im =
vision::VisKeypointDetection(im, result, conf_threshold);
FDTensor out;
vision::Mat(vis_im).ShareWithTensor(&out);
return TensorToPyArray(out);
@@ -194,15 +200,16 @@ void BindVisualize(pybind11::module& m) {
vision::Mat(vis_im).ShareWithTensor(&out);
return TensorToPyArray(out);
})
.def_static("vis_mot",
[](pybind11::array& im_data, vision::MOTResult& result,float score_threshold,
vision::tracking::TrailRecorder* record) {
auto im = PyArrayToCvMat(im_data);
auto vis_im = vision::VisMOT(im, result, score_threshold, record);
FDTensor out;
vision::Mat(vis_im).ShareWithTensor(&out);
return TensorToPyArray(out);
})
.def_static(
"vis_mot",
[](pybind11::array& im_data, vision::MOTResult& result,
float score_threshold, vision::tracking::TrailRecorder* record) {
auto im = PyArrayToCvMat(im_data);
auto vis_im = vision::VisMOT(im, result, score_threshold, record);
FDTensor out;
vision::Mat(vis_im).ShareWithTensor(&out);
return TensorToPyArray(out);
})
.def_static("vis_matting_alpha",
[](pybind11::array& im_data, vision::MattingResult& result,
bool remove_small_connected_area) {
3 changes: 2 additions & 1 deletion python/fastdeploy/vision/visualize/__init__.py
@@ -20,10 +20,11 @@

def vis_detection(im_data,
det_result,
labels=[],
score_threshold=0.0,
line_size=1,
font_size=0.5):
return C.vision.vis_detection(im_data, det_result, score_threshold,
return C.vision.vis_detection(im_data, det_result, labels, score_threshold,
line_size, font_size)


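Taken together with the pybind change above, the Python wrapper now forwards an optional `labels` list to the C++ binding; an empty list (the default) falls back to the original unlabeled `VisDetection` overload, while a non-empty list selects the overload that draws class names. A hedged usage sketch, assuming the model class and label file from the earlier examples:

```python
import cv2
import fastdeploy as fd

# Assumed setup, as in the earlier sketch; paths are placeholders.
model = fd.vision.detection.YOLOv7("yolov7.onnx")
im = cv2.imread("test.jpg")
result = model.predict(im)

with open("examples/vision/detection/paddledetection/coco_label_list.txt") as f:
    coco_labels = [line.strip() for line in f if line.strip()]

# Default: labels is empty, so the binding takes the original unlabeled branch.
vis_plain = fd.vision.vis_detection(im, result, score_threshold=0.5)

# New in this commit: pass class names so boxes are annotated with label text.
vis_named = fd.vision.vis_detection(
    im, result, labels=coco_labels, score_threshold=0.5)
cv2.imwrite("vis_result.jpg", vis_named)
```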
