Modify documents (PaddlePaddle#110)
* first commit for yolov7

* pybind for yolov7

* CPP README.md

* CPP README.md

* modified yolov7.cc

* README.md

* python file modify

* delete license in fastdeploy/

* repush the conflict part

* README.md modified

* README.md modified

* file path modified

* file path modified

* file path modified

* file path modified

* file path modified

* README modified

* README modified

* move some helpers to private

* add examples for yolov7

* api.md modified

* api.md modified

* api.md modified

* YOLOv7

* yolov7 release link

* yolov7 release link

* yolov7 release link

* copyright

* change some helpers to private

* change variables to const and fix documents.

* gitignore

* Transfer some functions to private members of the class

* Transfer some functions to private members of the class

* Merge from develop (#9)

* Fix compile problem in different python version (PaddlePaddle#26)

* fix some usage problem in linux

* Fix compile problem

Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>

* Add PaddleDetection/PPYOLOE model support (PaddlePaddle#22)

* add ppdet/ppyoloe

* Add demo code and documents

* add convert processor to vision (PaddlePaddle#27)

* update .gitignore

* Added checking for cmake include dir

* fixed missing trt_backend option bug when init from trt

* remove unneeded data layout and add pre-check for dtype

* changed RGB2BGR to BGR2RGB in ppcls model

* add model_zoo yolov6 c++/python demo

* fixed CMakeLists.txt typos

* update yolov6 cpp/README.md

* add yolox c++/pybind and model_zoo demo

* move some helpers to private

* fixed CMakeLists.txt typos

* add normalize with alpha and beta

* add version notes for yolov5/yolov6/yolox

* add copyright to yolov5.cc

* revert normalize

* fixed some bugs in yolox

* fixed examples/CMakeLists.txt to avoid conflicts

* add convert processor to vision

* format examples/CMakeLists summary

* Fix bug when the inference result is empty with YOLOv5 (PaddlePaddle#29)

* Add multi-label function for yolov5

* Update README.md

Update doc

* Update fastdeploy_runtime.cc

fix wrong name of variable option.trt_max_shape

* Update runtime_option.md

Update resnet model dynamic shape setting name from images to x

* Fix bug when inference result boxes are empty

* Delete detection.py

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>

* first commit for yolor

* for merge

* Develop (PaddlePaddle#11)


* Yolor (PaddlePaddle#16)

* Develop (PaddlePaddle#11) (PaddlePaddle#12)


* Develop (PaddlePaddle#13)


* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* Develop (PaddlePaddle#14)


* add is_dynamic for YOLO series (PaddlePaddle#22)

* first commit test photo

* yolov7 doc

* yolov7 doc

* yolov7 doc

* yolov7 doc

* add yolov5 docs

* modify yolov5 doc

* first commit for retinaface

* first commit for retinaface

* first commit for ultraface

* first commit for ultraface

* first commit for yolov5face

* first commit for modnet and arcface

* first commit for modnet and arcface

* first commit for partial_fc

* first commit for partial_fc

* first commit for yolox

* first commit for yolov6

* first commit for nano_det

* first commit for scrfd

* first commit for scrfd

* first commit for retinaface

* first commit for ultraface

* first commit for yolov5face

* first commit for yolox yolov6 nano

* rm jpg

* first commit for modnet and modify nano

* yolor scaledyolov4 v5lite

* first commit for insightface

* first commit for insightface

* first commit for insightface

* docs

* docs

* docs

* docs

* docs

* add print for detect and modify docs

* docs

* docs

* docs

* docs test for insightface

* docs test for insightface again

* docs test for insightface

* modify all wrong expressions in docs

* modify all wrong expressions in docs

* modify all wrong expressions in docs

* modify all wrong expressions in docs

* modify docs expressions

* fix expression of detection part

* fix expression of detection part

* fix expression of detection part

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>
Co-authored-by: Jason <928090362@qq.com>
6 people committed Aug 15, 2022
1 parent 85bd24a commit 47ce3c7
Showing 22 changed files with 60 additions and 61 deletions.
24 changes: 15 additions & 9 deletions examples/vision/detection/README.md
@@ -1,14 +1,20 @@
Face Detection Models
# Object Detection Models

FastDeploy currently supports the deployment of the following object detection models

| Model | Description | Model Format | Version |
| :--- | :--- | :------- | :--- |
| [nanodet_plus](./nanodet_plus) | NanoDetPlus series models | ONNX | Release/v1.0.0-alpha-1 |
| [yolov5](./yolov5) | YOLOv5 series models | ONNX | Release/v6.0 |
| [yolov5lite](./yolov5lite) | YOLOv5-Lite series models | ONNX | Release/v1.4 |
| [yolov6](./yolov6) | YOLOv6 series models | ONNX | Release/0.1.0 |
| [yolov7](./yolov7) | YOLOv7 series models | ONNX | Release/0.1 |
| [yolor](./yolor) | YOLOR series models | ONNX | Release/weights |
| [yolox](./yolox) | YOLOX series models | ONNX | Release/v0.1.1 |
| [scaledyolov4](./scaledyolov4) | ScaledYOLOv4 series models | ONNX | CommitID:6768003 |
| [PaddleDetection/PPYOLOE](./paddledetection) | PPYOLOE series models | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
| [PaddleDetection/PicoDet](./paddledetection) | PicoDet series models | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
| [PaddleDetection/YOLOX](./paddledetection) | YOLOX series models (Paddle version) | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
| [PaddleDetection/YOLOv3](./paddledetection) | YOLOv3 series models | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
| [PaddleDetection/PPYOLO](./paddledetection) | PPYOLO series models | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
| [PaddleDetection/FasterRCNN](./paddledetection) | FasterRCNN series models | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
| [WongKinYiu/YOLOv7](./yolov7) | YOLOv7, YOLOv7-X and other series models | ONNX | [Release/v0.1](https://github.com/WongKinYiu/yolov7/tree/v0.1) |
| [RangiLyu/NanoDetPlus](./nanodet_plus) | NanoDetPlus series models | ONNX | [Release/v1.0.0-alpha-1](https://github.com/RangiLyu/nanodet/tree/v1.0.0-alpha-1) |
| [ultralytics/YOLOv5](./yolov5) | YOLOv5 series models | ONNX | [Release/v6.0](https://github.com/ultralytics/yolov5/tree/v6.0) |
| [ppogg/YOLOv5-Lite](./yolov5lite) | YOLOv5-Lite series models | ONNX | [Release/v1.4](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4) |
| [meituan/YOLOv6](./yolov6) | YOLOv6 series models | ONNX | [Release/0.1.0](https://github.com/meituan/YOLOv6/releases/download/0.1.0) |
| [WongKinYiu/YOLOR](./yolor) | YOLOR series models | ONNX | [Release/weights](https://github.com/WongKinYiu/yolor/releases/tag/weights) |
| [Megvii-BaseDetection/YOLOX](./yolox) | YOLOX series models | ONNX | [Release/v0.1.1](https://github.com/Megvii-BaseDetection/YOLOX/tree/0.1.1rc0) |
| [WongKinYiu/ScaledYOLOv4](./scaledyolov4) | ScaledYOLOv4 series models | ONNX | [CommitID: 6768003](https://github.com/WongKinYiu/ScaledYOLOv4/commit/676800364a3446900b9e8407bc880ea2127b3415) |
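For orientation (not part of this diff), the sketch below shows roughly how one of the detection models listed above can be run through FastDeploy's Python API once an ONNX file is available. It is a minimal sketch assuming the packaged `fastdeploy` module exposes `fd.vision.detection.YOLOv7`; the exact module path may differ for the version in this commit, and the file names are placeholders.

```python
# Minimal sketch (assumptions noted above): run an exported YOLOv7 ONNX model with FastDeploy.
import cv2                     # OpenCV, used here only to read the test image
import fastdeploy as fd

model = fd.vision.detection.YOLOv7("yolov7.onnx")  # placeholder path to the exported ONNX file
im = cv2.imread("test.jpg")                        # placeholder test image
result = model.predict(im)                         # detection result: boxes, scores, label ids
print(result)                                      # print the raw result for inspection
```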
5 changes: 2 additions & 3 deletions examples/vision/detection/nanodet_plus/README.md
@@ -1,11 +1,10 @@
# Preparing NanoDetPlus Models for Deployment

## Model Version Notes

- The NanoDetPlus deployment implementation is based on the code of [NanoDetPlus](https://github.com/RangiLyu/nanodet/tree/v1.0.0-alpha-1) and the COCO [pre-trained models](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1)

- (1) The *.onnx files from the [pre-trained models](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1) can be deployed directly;
- (2) For models you train yourself, export them to ONNX and follow the [detailed deployment documentation](#详细部署文档) to complete deployment.
- (1) The *.onnx files provided by the [official repository](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1) can be deployed directly;
- (2) For models trained by developers themselves, export them to ONNX and follow the [detailed deployment documentation](#详细部署文档) to complete deployment.

## Download Pre-trained ONNX Models

2 changes: 1 addition & 1 deletion examples/vision/detection/scaledyolov4/README.md
@@ -2,7 +2,7 @@

- The ScaledYOLOv4 deployment implementation is based on the code of [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4) and its [COCO pre-trained models](https://github.com/WongKinYiu/ScaledYOLOv4)

- (1) The *.pt files from the [pre-trained models](https://github.com/WongKinYiu/ScaledYOLOv4) can be deployed after the [Export ONNX Model](#导出ONNX模型) step; *.onnx, *.trt and *.pose models are not supported for deployment
- (1) The *.pt files provided by the [official repository](https://github.com/WongKinYiu/ScaledYOLOv4) can be deployed after the [Export ONNX Model](#导出ONNX模型) step;
- (2) ScaledYOLOv4 models trained on your own data can be deployed by following the [Export ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B) step and then the [detailed deployment documentation](#详细部署文档).


2 changes: 1 addition & 1 deletion examples/vision/detection/yolor/README.md
@@ -2,7 +2,7 @@

- The YOLOR deployment implementation is based on the code of [YOLOR](https://github.com/WongKinYiu/yolor/releases/tag/weights) and its [COCO pre-trained models](https://github.com/WongKinYiu/yolor/releases/tag/weights)

- (1) The *.pt files from the [pre-trained models](https://github.com/WongKinYiu/yolor/releases/tag/weights) can be deployed after the [Export ONNX Model](#导出ONNX模型) step; *.onnx, *.trt and *.pose models are not supported for deployment
- (1) The *.pt files provided by the [official repository](https://github.com/WongKinYiu/yolor/releases/tag/weights) can be deployed after the [Export ONNX Model](#导出ONNX模型) step;
- (2) YOLOR models trained on your own data can be deployed by following the [Export ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B) step and then the [detailed deployment documentation](#详细部署文档).


4 changes: 1 addition & 3 deletions examples/vision/detection/yolov5/README.md
@@ -1,9 +1,7 @@
# Preparing YOLOv5 Models for Deployment

## Model Version Notes

- The YOLOv5 v6.0 deployment implementation is based on the code of [YOLOv5](https://github.com/ultralytics/yolov5/tree/v6.0) and its [COCO pre-trained models](https://github.com/ultralytics/yolov5/releases/tag/v6.0)
- (1) The *.onnx files from the [pre-trained models](https://github.com/ultralytics/yolov5/releases/tag/v6.0) can be deployed directly;
- (1) The *.onnx files provided by the [official repository](https://github.com/ultralytics/yolov5/releases/tag/v6.0) can be deployed directly;
- (2) YOLOv5 v6.0 models trained by developers on their own data can be deployed after exporting an ONNX file with `export.py` from [YOLOv5](https://github.com/ultralytics/yolov5).


2 changes: 1 addition & 1 deletion examples/vision/detection/yolov5lite/README.md
@@ -3,7 +3,7 @@
- The YOLOv5Lite deployment implementation is based on the code of [YOLOv5-Lite](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4) and its [COCO pre-trained models](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4)

- (1) The *.pt files from the [pre-trained models](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4) can be deployed after the [Export ONNX Model](#导出ONNX模型) step; *.onnx, *.trt and *.pose models are not supported for deployment
- (1) The *.pt files provided by the [official repository](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4) can be deployed after the [Export ONNX Model](#导出ONNX模型) step;
- (2) YOLOv5Lite models trained on your own data can be deployed by following the [Export ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B) step and then the [detailed deployment documentation](#详细部署文档).


5 changes: 2 additions & 3 deletions examples/vision/detection/yolov6/README.md
@@ -1,11 +1,10 @@
# Preparing YOLOv6 Models for Deployment

## Model Version Notes

- The YOLOv6 deployment implementation is based on the code of [YOLOv6](https://github.com/meituan/YOLOv6/releases/tag/0.1.0) and its [COCO pre-trained models](https://github.com/meituan/YOLOv6/releases/tag/0.1.0)

- (1) The *.onnx files from the [COCO pre-trained models](https://github.com/meituan/YOLOv6/releases/tag/0.1.0) can be deployed directly;
- (2) For models you train yourself, export them to ONNX and follow the [detailed deployment documentation](#详细部署文档) to complete deployment.
- (1) The *.onnx files provided by the [official repository](https://github.com/meituan/YOLOv6/releases/tag/0.1.0) can be deployed directly;
- (2) For models trained by developers themselves, export them to ONNX and follow the [detailed deployment documentation](#详细部署文档) to complete deployment.



6 changes: 4 additions & 2 deletions examples/vision/detection/yolov7/README.md
@@ -2,8 +2,10 @@

- The YOLOv7 deployment implementation is based on the code of the [YOLOv7](https://github.com/WongKinYiu/yolov7/tree/v0.1) branch and its [COCO pre-trained models](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1)

- (1) The *.pt files from the [pre-trained models](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) can be deployed after the [Export ONNX Model](#导出ONNX模型) step; *.onnx, *.trt and *.pose models are not supported for deployment;
- (2) YOLOv7 0.1 models trained on your own data can be deployed by following the [Export ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B) step and then the [detailed deployment documentation](#详细部署文档).
- (1) The *.pt files provided by the [official repository](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) can be deployed after the [Export ONNX Model](#导出ONNX模型) step; *.trt and *.pose models are not supported for deployment;
- (2) YOLOv7 models trained on your own data can be deployed by following the [Export ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B) step and then the [detailed deployment documentation](#详细部署文档).




## Export ONNX Model
6 changes: 3 additions & 3 deletions examples/vision/detection/yolox/README.md
@@ -1,11 +1,11 @@
# Preparing YOLOX Models for Deployment

## Model Version Notes

- The YOLOX deployment implementation is based on [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX/tree/0.1.1rc0) and its [COCO pre-trained models](https://github.com/Megvii-BaseDetection/YOLOX/releases/tag/0.1.1rc0)

- (1) The *.pth files from the [pre-trained models](https://github.com/Megvii-BaseDetection/YOLOX/releases/tag/0.1.1rc0) can be deployed after exporting an ONNX model; *.onnx, *.trt and *.pose models are not supported for deployment;
- (2) YOLOX v0.1.1 models trained by developers on their own data can be deployed after exporting an ONNX model.
- (1) The *.pth files provided by the [official repository](https://github.com/Megvii-BaseDetection/YOLOX/releases/tag/0.1.1rc0) can be deployed after exporting an ONNX model;
- (2) For models trained by developers themselves, export them to ONNX and follow the [detailed deployment documentation](#详细部署文档) to complete deployment.



## Download Pre-trained ONNX Models
8 changes: 4 additions & 4 deletions examples/vision/facedet/README.md
@@ -4,7 +4,7 @@ FastDeploy currently supports the deployment of the following face detection models

| Model | Description | Model Format | Version |
| :--- | :--- | :------- | :--- |
| [retinaface](./retinaface) | RetinaFace series models | ONNX | CommitID:b984b4b |
| [ultraface](./ultraface) | UltraFace series models | ONNX |CommitID:dffdddd |
| [yolov5face](./yolov5face) | YOLOv5Face series models | ONNX | CommitID:4fd1ead |
| [scrfd](./scrfd) | SCRFD series models | ONNX | CommitID:17cdeab |
| [biubug6/RetinaFace](./retinaface) | RetinaFace series models | ONNX | [CommitID:b984b4b](https://github.com/biubug6/Pytorch_Retinaface/commit/b984b4b) |
| [Linzaer/UltraFace](./ultraface) | UltraFace series models | ONNX |[CommitID:dffdddd](https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/commit/dffdddd) |
| [deepcam-cn/YOLOv5Face](./yolov5face) | YOLOv5Face series models | ONNX | [CommitID:4fd1ead](https://github.com/deepcam-cn/yolov5-face/commit/4fd1ead) |
| [deepinsight/SCRFD](./scrfd) | SCRFD series models | ONNX | [CommitID:17cdeab](https://github.com/deepinsight/insightface/tree/17cdeab12a35efcebc2660453a8cbeae96e20950) |
7 changes: 3 additions & 4 deletions examples/vision/facedet/retinaface/README.md
@@ -1,10 +1,9 @@
# Preparing RetinaFace Models for Deployment

## Model Version Notes

- [RetinaFace](https://github.com/biubug6/Pytorch_Retinaface/commit/b984b4b)
- (1) The *.pt files from [this link](https://github.com/biubug6/Pytorch_Retinaface/commit/b984b4b) can be deployed after the [Export ONNX Model](#导出ONNX模型) step;
- (2) RetinaFace CommitID:b984b4b models trained on your own data can be deployed after the [Export ONNX Model](#导出ONNX模型) step.
- (1) The *.pt files provided by the [official repository](https://github.com/biubug6/Pytorch_Retinaface/) can be deployed after the [Export ONNX Model](#导出ONNX模型) step;
- (2) RetinaFace models trained on your own data can be deployed after the [Export ONNX Model](#导出ONNX模型) step.


## Export ONNX Model

6 changes: 3 additions & 3 deletions examples/vision/facedet/scrfd/README.md
@@ -1,10 +1,10 @@
# Preparing SCRFD Models for Deployment

## Model Version Notes

- [SCRFD](https://github.com/deepinsight/insightface/tree/17cdeab12a35efcebc2660453a8cbeae96e20950)
- (1) The *.pt files from [this link](https://github.com/deepinsight/insightface/tree/17cdeab12a35efcebc2660453a8cbeae96e20950) can be deployed after the [Export ONNX Model](#导出ONNX模型) step;
- (2) SCRFD CID:17cdeab models trained by developers on their own data can be deployed after the [Export ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B) step.
- (1) The *.pt files provided by the [official repository](https://github.com/deepinsight/insightface/) can be deployed after the [Export ONNX Model](#导出ONNX模型) step;
- (2) SCRFD models trained by developers on their own data can be deployed after the [Export ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B) step.


## Export ONNX Model

5 changes: 3 additions & 2 deletions examples/vision/facedet/ultraface/README.md
@@ -1,9 +1,10 @@
# Preparing UltraFace Models for Deployment

## Model Version Notes

- [UltraFace](https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/commit/dffdddd)
- (1) The *.onnx files can be downloaded from [this link](https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/commit/dffdddd), or from the model links below, and deployed directly
- (1) The *.onnx files provided by the [official repository](https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/) can be downloaded, or obtained from the model links below, and deployed directly
- (2) For models trained by developers themselves, export them to ONNX and follow the [detailed deployment documentation](#详细部署文档) to complete deployment.



## Download Pre-trained ONNX Models
4 changes: 1 addition & 3 deletions examples/vision/facedet/yolov5face/README.md
@@ -1,9 +1,7 @@
# Preparing YOLOv5Face Models for Deployment

## Model Version Notes

- [YOLOv5Face](https://github.com/deepcam-cn/yolov5-face/commit/4fd1ead)
- (1) The *.pt files from [this link](https://github.com/deepcam-cn/yolov5-face/commit/4fd1ead) can be deployed after the [Export ONNX Model](#导出ONNX模型) step;
- (1) The *.pt files provided by the [official repository](https://github.com/deepcam-cn/yolov5-face/) can be deployed after the [Export ONNX Model](#导出ONNX模型) step;
- (2) YOLOv5Face models trained by developers on their own data can be deployed after the [Export ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B) step.

## Export ONNX Model
11 changes: 6 additions & 5 deletions examples/vision/faceid/README.md
@@ -1,10 +1,11 @@
Face Detection Models
# Face Recognition Models


FastDeploy currently supports the deployment of the following face recognition models

| Model | Description | Model Format | Version |
| :--- | :--- | :------- | :--- |
| [arcface](./insightface) | ArcFace series models | ONNX | CommitID:babb9a5 |
| [cosface](./insightface) | CosFace series models | ONNX | CommitID:babb9a5 |
| [partial_fc](./insightface) | PartialFC series models | ONNX | CommitID:babb9a5 |
| [vpl](./insightface) | VPL series models | ONNX | CommitID:babb9a5 |
| [deepinsight/ArcFace](./insightface) | ArcFace series models | ONNX | [CommitID:babb9a5](https://github.com/deepinsight/insightface/commit/babb9a5) |
| [deepinsight/CosFace](./insightface) | CosFace series models | ONNX | [CommitID:babb9a5](https://github.com/deepinsight/insightface/commit/babb9a5) |
| [deepinsight/PartialFC](./insightface) | PartialFC series models | ONNX | [CommitID:babb9a5](https://github.com/deepinsight/insightface/commit/babb9a5) |
| [deepinsight/VPL](./insightface) | VPL series models | ONNX | [CommitID:babb9a5](https://github.com/deepinsight/insightface/commit/babb9a5) |
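For context (again not part of the diff), here is a minimal sketch of how one of these face recognition models might be used to compare two face crops via FastDeploy's Python API. It assumes an `fd.vision.faceid.ArcFace` class whose prediction result carries an `embedding` field; the class name, result field, and file names are assumptions and may differ for the version in this commit.

```python
# Minimal sketch (assumptions noted above): compare two faces with an ArcFace ONNX model.
import cv2
import numpy as np
import fastdeploy as fd

model = fd.vision.faceid.ArcFace("arcface_r100.onnx")                # placeholder ONNX file name
emb0 = np.array(model.predict(cv2.imread("face_0.jpg")).embedding)   # embedding of the first face crop
emb1 = np.array(model.predict(cv2.imread("face_1.jpg")).embedding)   # embedding of the second face crop

# Cosine similarity between the two embeddings; larger values suggest the same identity.
cos = float(emb0 @ emb1 / (np.linalg.norm(emb0) * np.linalg.norm(emb1)))
print(f"cosine similarity: {cos:.4f}")
```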
4 changes: 1 addition & 3 deletions examples/vision/faceid/insightface/README.md
@@ -1,9 +1,7 @@
# Preparing InsightFace Models for Deployment

## Model Version Notes

- [InsightFace](https://github.com/deepinsight/insightface/commit/babb9a5)
- (1) The *.pt files from [this link](https://github.com/deepinsight/insightface/commit/babb9a5) can be deployed after the [Export ONNX Model](#导出ONNX模型) step;
- (1) The *.pt files provided by the [official repository](https://github.com/deepinsight/insightface/) can be deployed after the [Export ONNX Model](#导出ONNX模型) step;
- (2) InsightFace models trained by developers on their own data can be deployed after the [Export ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B) step.


2 changes: 1 addition & 1 deletion examples/vision/faceid/insightface/python/infer_arcface.py
@@ -18,7 +18,7 @@ def parse_arguments():
import ast
parser = argparse.ArgumentParser()
parser.add_argument(
"--model", required=True, help="Path of scrfd onnx model.")
"--model", required=True, help="Path of insgihtface onnx model.")
parser.add_argument(
"--face", required=True, help="Path of test face image file.")
parser.add_argument(
2 changes: 1 addition & 1 deletion examples/vision/faceid/insightface/python/infer_cosface.py
@@ -18,7 +18,7 @@ def parse_arguments():
import ast
parser = argparse.ArgumentParser()
parser.add_argument(
"--model", required=True, help="Path of scrfd onnx model.")
"--model", required=True, help="Path of insightface onnx model.")
parser.add_argument(
"--face", required=True, help="Path of test face image file.")
parser.add_argument(
@@ -18,7 +18,7 @@ def parse_arguments():
import ast
parser = argparse.ArgumentParser()
parser.add_argument(
"--model", required=True, help="Path of scrfd onnx model.")
"--model", required=True, help="Path of insightface onnx model.")
parser.add_argument(
"--face", required=True, help="Path of test face image file.")
parser.add_argument(
2 changes: 1 addition & 1 deletion examples/vision/faceid/insightface/python/infer_vpl.py
@@ -18,7 +18,7 @@ def parse_arguments():
import ast
parser = argparse.ArgumentParser()
parser.add_argument(
"--model", required=True, help="Path of scrfd onnx model.")
"--model", required=True, help="Path of insightface onnx model.")
parser.add_argument(
"--face", required=True, help="Path of test face image file.")
parser.add_argument(
6 changes: 3 additions & 3 deletions examples/vision/matting/README.md
@@ -1,7 +1,7 @@
Face Detection Models
# Matting Models

FastDeploy currently supports the deployment of the following face recognition models
FastDeploy currently supports the deployment of the following matting models

| Model | Description | Model Format | Version |
| :--- | :--- | :------- | :--- |
| [modnet](./modnet) | MODNet series models | ONNX | CommitID:28165a4 |
| [ZHKKKe/MODNet](./modnet) | MODNet series models | ONNX | [CommitID:28165a4](https://github.com/ZHKKKe/MODNet/commit/28165a4) |
6 changes: 2 additions & 4 deletions examples/vision/matting/modnet/README.md
@@ -1,10 +1,8 @@
# Preparing MODNet Models for Deployment

## Model Version Notes

- [MODNet](https://github.com/ZHKKKe/MODNet/commit/28165a4)
- (1) The *.pt files from [this link](https://github.com/ZHKKKe/MODNet/commit/28165a4) can be deployed after the [Export ONNX Model](#导出ONNX模型) step;
- (2) MODNet CommitID:b984b4b models trained by developers on their own data can be deployed after the [Export ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B) step.
- (1) The *.pt files provided by the [official repository](https://github.com/ZHKKKe/MODNet/) can be deployed after the [Export ONNX Model](#导出ONNX模型) step;
- (2) MODNet models trained by developers on their own data can be deployed after the [Export ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B) step.

## Export ONNX Model

