[Doc] Add default values for public variables for external models (PaddlePaddle#441)

* first commit for yolov7

* pybind for yolov7

* CPP README.md

* CPP README.md

* modified yolov7.cc

* README.md

* python file modify

* delete license in fastdeploy/

* repush the conflict part

* README.md modified

* README.md modified

* file path modified

* file path modified

* file path modified

* file path modified

* file path modified

* README modified

* README modified

* move some helpers to private

* add examples for yolov7

* api.md modified

* api.md modified

* api.md modified

* YOLOv7

* yolov7 release link

* yolov7 release link

* yolov7 release link

* copyright

* change some helpers to private

* change variables to const and fix documents.

* gitignore

* Transfer some functions to private member of class

* Transfer some functions to private member of class

* Merge from develop (#9)

* Fix compile problem in different python version (#26)

* fix some usage problem in linux

* Fix compile problem

Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>

* Add PaddleDetection/PPYOLOE model support (#22)

* add ppdet/ppyoloe

* Add demo code and documents

* add convert processor to vision (#27)

* update .gitignore

* Added checking for cmake include dir

* fixed missing trt_backend option bug when init from trt

* remove unneeded data layout and add pre-check for dtype

* changed RGB2BGR to BGR2RGB in ppcls model

* add model_zoo yolov6 c++/python demo

* fixed CMakeLists.txt typos

* update yolov6 cpp/README.md

* add yolox c++/pybind and model_zoo demo

* move some helpers to private

* fixed CMakeLists.txt typos

* add normalize with alpha and beta

* add version notes for yolov5/yolov6/yolox

* add copyright to yolov5.cc

* revert normalize

* fixed some bugs in yolox

* fixed examples/CMakeLists.txt to avoid conflicts

* add convert processor to vision

* format examples/CMakeLists summary

* Fix bug when the inference result is empty with YOLOv5 (#29)

* Add multi-label function for yolov5

* Update README.md

Update doc

* Update fastdeploy_runtime.cc

fix wrong name of variable option.trt_max_shape

* Update runtime_option.md

Update resnet model dynamic shape setting name from images to x

* Fix bug when inference result boxes are empty

* Delete detection.py

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>

* first commit for yolor

* for merge

* Develop (#11)


* Yolor (#16)

* Develop (#11) (#12)


* Develop (#13)


* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* Develop (#14)


* add is_dynamic for YOLO series (#22)

* modify ppmatting backend and docs

* modify ppmatting docs

* fix the PPMatting size problem

* fix LimitShort's log

* retrigger ci

* modify PPMatting docs

* modify the way for dealing with LimitShort

* add python comments for external models

* modify resnet c++ comments

* modify C++ comments for external models

* modify python comments and add result class comments

* fix comments compile error

* modify result.h comments

* add default values for public variables in comments

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>
Co-authored-by: Jason <928090362@qq.com>
6 people authored Oct 27, 2022
1 parent 583f16a commit b7e06b8
Showing 39 changed files with 114 additions and 77 deletions.
10 changes: 7 additions & 3 deletions fastdeploy/vision/classification/contrib/resnet.h
@@ -50,12 +50,16 @@ class FASTDEPLOY_DECL ResNet : public FastDeployModel {
*/
virtual bool Predict(cv::Mat* im, ClassifyResult* result, int topk = 1);
/*! @brief
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize, default size = {224, 224}
*/
std::vector<int> size;
/// Mean parameters for normalize, size should be the same as channels
/*! @brief
Mean parameters for normalize, size should be the same as channels, default mean_vals = {0.485f, 0.456f, 0.406f}
*/
std::vector<float> mean_vals;
/// Std parameters for normalize, size should be the same as channels
/*! @brief
Std parameters for normalize, size should be the same as channels, default std_vals = {0.229f, 0.224f, 0.225f}
*/
std::vector<float> std_vals;


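To make the documented defaults concrete, here is a minimal Python usage sketch. It is illustrative only: the model path is a placeholder, and the constructor and predict(input_image, topk) signatures are assumed to match FastDeploy's published ResNet examples.

import cv2
import fastdeploy as fd

# Load the external (ONNX) ResNet classifier; "resnet50.onnx" is a placeholder path.
model = fd.vision.classification.ResNet("resnet50.onnx")

# The public preprocessing variables now document their defaults:
#   size = [224, 224], mean_vals = [0.485, 0.456, 0.406], std_vals = [0.229, 0.224, 0.225]
print(model.size, model.mean_vals, model.std_vals)

# They remain writable for checkpoints trained with different preprocessing.
model.size = [256, 256]

result = model.predict(cv2.imread("test.jpg"), topk=5)
print(result)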
2 changes: 1 addition & 1 deletion fastdeploy/vision/detection/contrib/nanodet_plus.h
@@ -54,7 +54,7 @@ class FASTDEPLOY_DECL NanoDetPlus : public FastDeployModel {
float nms_iou_threshold = 0.5f);

/*! @brief
Argument for image preprocessing step, tuple of input size (width, height), e.g (320, 320)
Argument for image preprocessing step, tuple of input size (width, height), default (320, 320)
*/
std::vector<int> size;
// padding value, size should be the same as channels
2 changes: 1 addition & 1 deletion fastdeploy/vision/detection/contrib/scaledyolov4.h
@@ -51,7 +51,7 @@ class FASTDEPLOY_DECL ScaledYOLOv4 : public FastDeployModel {
float nms_iou_threshold = 0.5);

/*! @brief
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize, default size = {640, 640}
*/
std::vector<int> size;
// padding value, size should be the same as channels
4 changes: 2 additions & 2 deletions fastdeploy/vision/detection/contrib/yolor.h
@@ -39,7 +39,7 @@ class FASTDEPLOY_DECL YOLOR : public FastDeployModel {
virtual std::string ModelName() const { return "YOLOR"; }
/** \brief Predict the detection result for an input image
*
* \param[in] im The input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
* \param[in] im The input image data, comes from cv::imread()
* \param[in] result The output detection result will be written to this structure
* \param[in] conf_threshold confidence threshold for postprocessing, default is 0.25
* \param[in] nms_iou_threshold iou threshold for NMS, default is 0.5
@@ -50,7 +50,7 @@ class FASTDEPLOY_DECL YOLOR : public FastDeployModel {
float nms_iou_threshold = 0.5);

/*! @brief
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize, default size = {640, 640}
*/
std::vector<int> size;
// padding value, size should be the same as channels
Expand Down
6 changes: 4 additions & 2 deletions fastdeploy/vision/detection/contrib/yolov5.h
@@ -78,7 +78,7 @@ class FASTDEPLOY_DECL YOLOv5 : public FastDeployModel {
float max_wh = 7680.0);

/*! @brief
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize, default size = {640, 640}
*/
std::vector<int> size_;
// padding value, size should be the same as channels
@@ -96,7 +96,9 @@ class FASTDEPLOY_DECL YOLOv5 : public FastDeployModel {
int stride_;
// for offsetting the boxes by classes when using NMS
float max_wh_;
/// for different strategies to get boxes when postprocessing
/*! @brief
Argument for image postprocessing step, for different strategies to get boxes when postprocessing, default true
*/
bool multi_label_;

private:
4 changes: 2 additions & 2 deletions fastdeploy/vision/detection/contrib/yolov5lite.h
@@ -54,7 +54,7 @@ class FASTDEPLOY_DECL YOLOv5Lite : public FastDeployModel {
void UseCudaPreprocessing(int max_img_size = 3840 * 2160);

/*! @brief
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize, default size = {640, 640}
*/
std::vector<int> size;
// padding value, size should be the same as channels
@@ -84,7 +84,7 @@ class FASTDEPLOY_DECL YOLOv5Lite : public FastDeployModel {
decode module. Please set it 'true' manually if the model file
was exported with decode module.
false : ONNX files without decode module.
true : ONNX file with decode module.
true : ONNX file with decode module. Default is false.
*/
bool is_decode_exported;

2 changes: 1 addition & 1 deletion fastdeploy/vision/detection/contrib/yolov6.h
@@ -57,7 +57,7 @@ class FASTDEPLOY_DECL YOLOv6 : public FastDeployModel {
void UseCudaPreprocessing(int max_img_size = 3840 * 2160);

/*! @brief
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize, default size = {640, 640}
*/
std::vector<int> size;
// padding value, size should be the same as channels
2 changes: 1 addition & 1 deletion fastdeploy/vision/detection/contrib/yolov7.h
@@ -54,7 +54,7 @@ class FASTDEPLOY_DECL YOLOv7 : public FastDeployModel {
void UseCudaPreprocessing(int max_img_size = 3840 * 2160);

/*! @brief
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize, default size = {640, 640}
*/
std::vector<int> size;
// padding value, size should be the same as channels
2 changes: 1 addition & 1 deletion fastdeploy/vision/detection/contrib/yolov7end2end_ort.h
@@ -48,7 +48,7 @@ class FASTDEPLOY_DECL YOLOv7End2EndORT : public FastDeployModel {
float conf_threshold = 0.25);

/*! @brief
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize, default size = {640, 640}
*/
std::vector<int> size;
// padding value, size should be the same as channels
2 changes: 1 addition & 1 deletion fastdeploy/vision/detection/contrib/yolov7end2end_trt.h
@@ -53,7 +53,7 @@ class FASTDEPLOY_DECL YOLOv7End2EndTRT : public FastDeployModel {
void UseCudaPreprocessing(int max_img_size = 3840 * 2160);

/*! @brief
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize, default size = {640, 640}
*/
std::vector<int> size;
// padding value, size should be the same as channels
4 changes: 2 additions & 2 deletions fastdeploy/vision/detection/contrib/yolox.h
@@ -52,7 +52,7 @@ class FASTDEPLOY_DECL YOLOX : public FastDeployModel {
float nms_iou_threshold = 0.5);

/*! @brief
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize, default size = {640, 640}
*/
std::vector<int> size;
// padding value, size should be the same as channels
@@ -61,7 +61,7 @@ class FASTDEPLOY_DECL YOLOX : public FastDeployModel {
whether the model_file was exported with decode module. The official
YOLOX/tools/export_onnx.py script will export ONNX file without
decode module. Please set it 'true' manually if the model file
was exported with decode module.
was exported with decode module. Default is false.
*/
bool is_decode_exported;
// downsample strides for YOLOX to generate anchors,
2 changes: 1 addition & 1 deletion fastdeploy/vision/facedet/contrib/retinaface.h
@@ -65,7 +65,7 @@ class FASTDEPLOY_DECL RetinaFace : public FastDeployModel {
*/
std::vector<int> downsample_strides;
/*! @brief
Argument for image postprocessing step, min sizes, width and height for each anchor
Argument for image postprocessing step, min sizes, width and height for each anchor, default min_sizes = {{16, 32}, {64, 128}, {256, 512}}
*/
std::vector<std::vector<int>> min_sizes;
/*! @brief
8 changes: 5 additions & 3 deletions fastdeploy/vision/facedet/contrib/scrfd.h
@@ -77,14 +77,16 @@ class FASTDEPLOY_DECL SCRFD : public FastDeployModel {
*/
int landmarks_per_face;
/*! @brief
Argument for image postprocessing step, whether the outputs of the onnx file include key points features
Argument for image postprocessing step, whether the outputs of the onnx file include key points features, default true
*/
bool use_kps;
/*! @brief
Argument for image postprocessing step, the upper bound number of boxes processed by nms
Argument for image postprocessing step, the upper bound number of boxes processed by nms, default 30000
*/
int max_nms;
/// Argument for image postprocessing step, anchor number of each stride
/*! @brief
Argument for image postprocessing step, anchor number of each stride, default 2
*/
unsigned int num_anchors;

private:
4 changes: 2 additions & 2 deletions fastdeploy/vision/facedet/contrib/yolov5face.h
@@ -51,7 +51,7 @@ class FASTDEPLOY_DECL YOLOv5Face : public FastDeployModel {
float nms_iou_threshold = 0.5);

/*! @brief
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize, default size = {640, 640}
*/
std::vector<int> size;
// padding value, size should be the same as channels
@@ -72,7 +72,7 @@
/*! @brief
Argument for image postprocessing step, setup the number of landmarks per face (if any), default 5 in
official yolov5face. Note that the output tensor's shape must be:
(1,n,4+1+2*landmarks_per_face+1=box+obj+landmarks+cls)
(1,n,4+1+2*landmarks_per_face+1=box+obj+landmarks+cls), default 5
*/
int landmarks_per_face;

8 changes: 6 additions & 2 deletions fastdeploy/vision/faceid/contrib/insightface_rec.h
@@ -44,9 +44,13 @@ class FASTDEPLOY_DECL InsightFaceRecognitionModel : public FastDeployModel {
Argument for image preprocessing step, tuple of (width, height), decide the target size after resize, default (112, 112)
*/
std::vector<int> size;
/// Argument for image preprocessing step, alpha values for normalization
/*! @brief
Argument for image preprocessing step, alpha values for normalization, default alpha = {1.f / 127.5f, 1.f / 127.5f, 1.f / 127.5f};
*/
std::vector<float> alpha;
/// Argument for image preprocessing step, beta values for normalization
/*! @brief
Argument for image preprocessing step, beta values for normalization, default beta = {-1.f, -1.f, -1.f}
*/
std::vector<float> beta;
/*! @brief
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default true.
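The alpha/beta pairs above implement a linear normalization, out = pixel * alpha + beta; with the documented defaults alpha = 1/127.5 and beta = -1, each channel maps from [0, 255] onto [-1, 1]. A quick NumPy check of that arithmetic (this snippet is an illustration, not part of the diff):

import numpy as np

alpha, beta = 1.0 / 127.5, -1.0
pixels = np.array([0.0, 127.5, 255.0])
print(pixels * alpha + beta)  # -> [-1.  0.  1.]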
4 changes: 2 additions & 2 deletions fastdeploy/vision/matting/contrib/modnet.h
@@ -44,11 +44,11 @@ class FASTDEPLOY_DECL MODNet : public FastDeployModel {
*/
std::vector<int> size;
/*! @brief
Argument for image preprocessing step, parameters for normalization, size should be the same as channels
Argument for image preprocessing step, parameters for normalization, size should be the same as channels, default alpha = {1.f / 127.5f, 1.f / 127.5f, 1.f / 127.5f}
*/
std::vector<float> alpha;
/*! @brief
Argument for image preprocessing step, parameters for normalization, size should be the same as channels
Argument for image preprocessing step, parameters for normalization, size should be the same as channels, default beta = {-1.f, -1.f, -1.f}
*/
std::vector<float> beta;
/*! @brief
6 changes: 3 additions & 3 deletions python/fastdeploy/vision/classification/contrib/resnet.py
@@ -56,21 +56,21 @@ def predict(self, input_image, topk=1):
@property
def size(self):
"""
Returns the preprocess image size
Returns the preprocess image size, default size = [224, 224]
"""
return self._model.size

@property
def mean_vals(self):
"""
Returns the mean value of normalization
Returns the mean value of normalization, default mean_vals = [0.485, 0.456, 0.406]
"""
return self._model.mean_vals

@property
def std_vals(self):
"""
Returns the std value of normalization
Returns the std value of normalization, default std_vals = [0.229, 0.224, 0.225]
"""
return self._model.std_vals

@@ -52,7 +52,7 @@ def predict(self, input_image, topk=1):
@property
def size(self):
"""
Returns the preprocess image size
Returns the preprocess image size, default is (224, 224)
"""
return self._model.size

2 changes: 1 addition & 1 deletion python/fastdeploy/vision/detection/contrib/nanodet_plus.py
@@ -56,7 +56,7 @@ def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default (320, 320)
"""
return self._model.size

3 changes: 2 additions & 1 deletion python/fastdeploy/vision/detection/contrib/scaled_yolov4.py
@@ -56,7 +56,8 @@ def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
"""
return self._model.size

2 changes: 1 addition & 1 deletion python/fastdeploy/vision/detection/contrib/yolor.py
@@ -56,7 +56,7 @@ def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
"""
return self._model.size

5 changes: 4 additions & 1 deletion python/fastdeploy/vision/detection/contrib/yolov5.py
@@ -81,7 +81,7 @@ def postprocess(infer_result,
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
"""
return self._model.size

@@ -117,6 +117,9 @@ def max_wh(self):

@property
def multi_label(self):
"""
Argument for image postprocessing step, for different strategies to get boxes when postprocessing, default True
"""
return self._model.multi_label

@size.setter
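The multi_label flag documented above selects the box-gathering strategy during postprocessing. A short sketch, assuming a placeholder yolov5s.onnx export and the predict signature used elsewhere in FastDeploy's YOLOv5 examples:

import cv2
import fastdeploy as fd

model = fd.vision.detection.YOLOv5("yolov5s.onnx")  # placeholder model path

# multi_label defaults to True: every class whose score clears conf_threshold
# can emit a candidate box. Set False to keep only the best class per box.
model.multi_label = False

result = model.predict(cv2.imread("test.jpg"), conf_threshold=0.25, nms_iou_threshold=0.5)
print(result)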
5 changes: 3 additions & 2 deletions python/fastdeploy/vision/detection/contrib/yolov5lite.py
@@ -56,7 +56,7 @@ def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
"""
return self._model.size

@@ -96,7 +96,8 @@ def is_decode_exported(self):
whether the model_file was exported with decode module.
The official YOLOv5Lite/export.py script will export ONNX file without decode module.
Please set it 'true' manually if the model file was exported with decode module.
false : ONNX files without decode module. true : ONNX file with decode module.
False : ONNX files without decode module. True : ONNX file with decode module.
Default is False.
"""
return self._model.is_decode_exported

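Similarly, is_decode_exported only needs to change for ONNX files exported with the decode module; a sketch under the same assumptions (the file name is hypothetical):

import cv2
import fastdeploy as fd

# Placeholder path: a YOLOv5Lite ONNX file exported *with* the decode module.
model = fd.vision.detection.YOLOv5Lite("v5lite-e-decode.onnx")

# Defaults to False because the official export script omits the decode step;
# set it True only when the model embeds its own decode module.
model.is_decode_exported = True

result = model.predict(cv2.imread("test.jpg"), conf_threshold=0.25, nms_iou_threshold=0.5)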
2 changes: 1 addition & 1 deletion python/fastdeploy/vision/detection/contrib/yolov6.py
@@ -56,7 +56,7 @@ def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
"""
return self._model.size

2 changes: 1 addition & 1 deletion python/fastdeploy/vision/detection/contrib/yolov7.py
@@ -56,7 +56,7 @@ def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
"""
return self._model.size

@@ -54,7 +54,7 @@ def predict(self, input_image, conf_threshold=0.25):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
"""
return self._model.size

@@ -54,7 +54,7 @@ def predict(self, input_image, conf_threshold=0.25):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
"""
return self._model.size

3 changes: 2 additions & 1 deletion python/fastdeploy/vision/detection/contrib/yolox.py
@@ -56,7 +56,7 @@ def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
"""
return self._model.size

@@ -71,6 +71,7 @@ def is_decode_exported(self):
whether the model_file was exported with decode module.
The official YOLOX/tools/export_onnx.py script will export ONNX file without decode module.
Please set it 'true' manually if the model file was exported with decode module.
Default is False.
"""
return self._model.is_decode_exported

4 changes: 2 additions & 2 deletions python/fastdeploy/vision/facedet/contrib/retinaface.py
@@ -56,7 +56,7 @@ def predict(self, input_image, conf_threshold=0.7, nms_iou_threshold=0.3):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default (640, 640)
"""
return self._model.size

@@ -77,7 +77,7 @@ def downsample_strides(self):
@property
def min_sizes(self):
"""
Argument for image postprocessing step, min sizes, width and height for each anchor
Argument for image postprocessing step, min sizes, width and height for each anchor, default min_sizes = [[16, 32], [64, 128], [256, 512]]
"""
return self._model.min_sizes

(Diffs for the remaining changed files are not shown.)