Modify PPMatting backend and docs (PaddlePaddle#182)
* first commit for yolov7

* pybind for yolov7

* CPP README.md

* modified yolov7.cc

* README.md

* python file modify

* delete license in fastdeploy/

* repush the conflict part

* README.md modified

* file path modified

* README modified

* move some helpers to private

* add examples for yolov7

* api.md modified

* YOLOv7

* yolov7 release link

* copyright

* change some helpers to private

* change variables to const and fix documents.

* gitignore

* Transfer some functions to private members of the class

* Merge from develop (#9)

* Fix compile problem in different python version (#26)

* fix some usage problem in linux

* Fix compile problem

Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>

* Add PaddleDetection/PPYOLOE model support (#22)

* add ppdet/ppyoloe

* Add demo code and documents

* add convert processor to vision (#27)

* update .gitignore

* Added checking for cmake include dir

* fixed missing trt_backend option bug when init from trt

* remove unneeded data layout and add pre-check for dtype

* changed RGB2BRG to BGR2RGB in ppcls model

* add model_zoo yolov6 c++/python demo

* fixed CMakeLists.txt typos

* update yolov6 cpp/README.md

* add yolox c++/pybind and model_zoo demo

* move some helpers to private

* fixed CMakeLists.txt typos

* add normalize with alpha and beta

* add version notes for yolov5/yolov6/yolox

* add copyright to yolov5.cc

* revert normalize

* fixed some bugs in yolox

* fixed examples/CMakeLists.txt to avoid conflicts

* add convert processor to vision

* format examples/CMakeLists summary

* Fix bug while the inference result is empty with YOLOv5 (#29)

* Add multi-label function for yolov5

* Update README.md

Update doc

* Update fastdeploy_runtime.cc

fix variable option.trt_max_shape wrong name

* Update runtime_option.md

Update resnet model dynamic shape setting name from images to x

* Fix bug when inference result boxes are empty

* Delete detection.py

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>

* first commit for yolor

* for merge

* Develop (#11)


* Yolor (#16)

* Develop (#11) (#12)


* Develop (#13)


* documents

* Develop (#14)


* add is_dynamic for YOLO series (#22)

* modify ppmatting backend and docs

* modify ppmatting docs

* fix the PPMatting size problem

* fix LimitShort's log

* retrigger ci

* modify PPMatting docs

* modify the way of dealing with LimitShort

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>
Co-authored-by: Jason <928090362@qq.com>
6 people authored Sep 7, 2022
1 parent f96fdad commit 7e00c5f
Showing 13 changed files with 193 additions and 12 deletions.
1 change: 1 addition & 0 deletions csrc/fastdeploy/vision/common/processors/limit_short.h
@@ -34,6 +34,7 @@ class LimitShort : public Processor {

static bool Run(Mat* mat, int max_short = -1, int min_short = -1,
ProcLib lib = ProcLib::OPENCV_CPU);
int GetMaxShort() { return max_short_; }

private:
int max_short_;
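The `LimitShort` processor touched above clamps an image's shorter edge into a configured range. Its full implementation is not part of this diff, so the following is a hypothetical Python sketch of the scaling rule only, not FastDeploy code; the function name and rounding are illustrative assumptions:

```python
def limit_short_scale(w, h, max_short=-1, min_short=-1):
    """Hypothetical sketch of a LimitShort step: scale (w, h) so the
    shorter edge is at most max_short and at least min_short
    (-1 disables the corresponding bound)."""
    short = min(w, h)
    scale = 1.0
    if max_short > 0 and short > max_short:
        scale = max_short / short          # shrink: short edge too long
    elif min_short > 0 and short < min_short:
        scale = min_short / short          # grow: short edge too short
    return round(w * scale), round(h * scale)
```

For example, a 1000x2000 image with `max_short=512` would come out as 512x1024.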
78 changes: 78 additions & 0 deletions csrc/fastdeploy/vision/common/processors/resize_by_long.cc
@@ -0,0 +1,78 @@
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "fastdeploy/vision/common/processors/resize_by_long.h"

namespace fastdeploy {
namespace vision {

bool ResizeByLong::CpuRun(Mat* mat) {
cv::Mat* im = mat->GetCpuMat();
int origin_w = im->cols;
int origin_h = im->rows;
double scale = GenerateScale(origin_w, origin_h);
if (use_scale_) {
cv::resize(*im, *im, cv::Size(), scale, scale, interp_);
} else {
int width = static_cast<int>(round(scale * im->cols));
int height = static_cast<int>(round(scale * im->rows));
cv::resize(*im, *im, cv::Size(width, height), 0, 0, interp_);
}
mat->SetWidth(im->cols);
mat->SetHeight(im->rows);
return true;
}

#ifdef ENABLE_OPENCV_CUDA
bool ResizeByLong::GpuRun(Mat* mat) {
cv::cuda::GpuMat* im = mat->GetGpuMat();
int origin_w = im->cols;
int origin_h = im->rows;
double scale = GenerateScale(origin_w, origin_h);
im->convertTo(*im, CV_32FC(im->channels()));
if (use_scale_) {
cv::cuda::resize(*im, *im, cv::Size(), scale, scale, interp_);
} else {
int width = static_cast<int>(round(scale * im->cols));
int height = static_cast<int>(round(scale * im->rows));
cv::cuda::resize(*im, *im, cv::Size(width, height), 0, 0, interp_);
}
mat->SetWidth(im->cols);
mat->SetHeight(im->rows);
return true;
}
#endif

double ResizeByLong::GenerateScale(const int origin_w, const int origin_h) {
int im_size_max = std::max(origin_w, origin_h);
int im_size_min = std::min(origin_w, origin_h);
double scale = 1.0f;
if (target_size_ == -1) {
if (im_size_max > max_size_) {
scale = static_cast<double>(max_size_) / static_cast<double>(im_size_max);
}
} else {
scale =
static_cast<double>(target_size_) / static_cast<double>(im_size_max);
}
return scale;
}

bool ResizeByLong::Run(Mat* mat, int target_size, int interp, bool use_scale,
int max_size, ProcLib lib) {
auto r = ResizeByLong(target_size, interp, use_scale, max_size);
return r(mat, lib);
}
} // namespace vision
} // namespace fastdeploy
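The long-edge rule in `GenerateScale` above can be mirrored in Python. This is a standalone sketch, not the FastDeploy API; it adds a `max_size != -1` guard that the C++ version omits (the C++ compares against `max_size_` even when it is -1):

```python
def generate_scale(origin_w, origin_h, target_size=-1, max_size=-1):
    """Mirror of ResizeByLong::GenerateScale: scale so the longer edge
    equals target_size; with target_size == -1, only shrink images whose
    longer edge exceeds max_size."""
    im_size_max = max(origin_w, origin_h)
    scale = 1.0
    if target_size == -1:
        # Extra guard vs. the C++: skip the clamp entirely when max_size is -1.
        if max_size != -1 and im_size_max > max_size:
            scale = max_size / im_size_max
    else:
        scale = target_size / im_size_max
    return scale
```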
49 changes: 49 additions & 0 deletions csrc/fastdeploy/vision/common/processors/resize_by_long.h
@@ -0,0 +1,49 @@
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#pragma once

#include "fastdeploy/vision/common/processors/base.h"

namespace fastdeploy {
namespace vision {

class ResizeByLong : public Processor {
public:
ResizeByLong(int target_size, int interp = 1, bool use_scale = true,
int max_size = -1) {
target_size_ = target_size;
max_size_ = max_size;
interp_ = interp;
use_scale_ = use_scale;
}
bool CpuRun(Mat* mat);
#ifdef ENABLE_OPENCV_CUDA
bool GpuRun(Mat* mat);
#endif
std::string Name() { return "ResizeByLong"; }

static bool Run(Mat* mat, int target_size, int interp = 1,
bool use_scale = true, int max_size = -1,
ProcLib lib = ProcLib::OPENCV_CPU);

private:
double GenerateScale(const int origin_w, const int origin_h);
int target_size_;
int max_size_;
int interp_;
bool use_scale_;
};
} // namespace vision
} // namespace fastdeploy
1 change: 1 addition & 0 deletions csrc/fastdeploy/vision/common/processors/transform.h
@@ -24,6 +24,7 @@
#include "fastdeploy/vision/common/processors/pad.h"
#include "fastdeploy/vision/common/processors/pad_to_size.h"
#include "fastdeploy/vision/common/processors/resize.h"
#include "fastdeploy/vision/common/processors/resize_by_long.h"
#include "fastdeploy/vision/common/processors/resize_by_short.h"
#include "fastdeploy/vision/common/processors/resize_to_int_mult.h"
#include "fastdeploy/vision/common/processors/stride_pad.h"
54 changes: 51 additions & 3 deletions csrc/fastdeploy/vision/matting/ppmatting/ppmatting.cc
@@ -26,8 +26,8 @@ PPMatting::PPMatting(const std::string& model_file,
const RuntimeOption& custom_option,
const Frontend& model_format) {
config_file_ = config_file;
valid_cpu_backends = {Backend::PDINFER, Backend::ORT};
valid_gpu_backends = {Backend::PDINFER, Backend::ORT, Backend::TRT};
valid_cpu_backends = {Backend::ORT, Backend::PDINFER};
valid_gpu_backends = {Backend::PDINFER, Backend::TRT};
runtime_option = custom_option;
runtime_option.model_format = model_format;
runtime_option.model_file = model_file;
@@ -74,6 +74,11 @@ bool PPMatting::BuildPreprocessPipelineFromConfig() {
if (op["min_short"]) {
min_short = op["min_short"].as<int>();
}
FDINFO << "Detected LimitShort processing step in the yaml file. If the "
"model is exported from PaddleSeg, please make sure the input "
"of your model is a fixed square shape with side length "
"greater than or equal to "
<< max_short << "." << std::endl;
processors_.push_back(
std::make_shared<LimitShort>(max_short, min_short));
} else if (op["type"].as<std::string>() == "ResizeToIntMult") {
@@ -92,6 +97,19 @@
std = op["std"].as<std::vector<float>>();
}
processors_.push_back(std::make_shared<Normalize>(mean, std));
} else if (op["type"].as<std::string>() == "ResizeByLong") {
int target_size = op["target_size"].as<int>();
processors_.push_back(std::make_shared<ResizeByLong>(target_size));
} else if (op["type"].as<std::string>() == "Pad") {
// size: (w, h)
auto size = op["size"].as<std::vector<int>>();
std::vector<float> value = {127.5, 127.5, 127.5};
if (op["fill_value"]) {
// assign to the outer `value`; declaring `auto value` here would
// shadow it and discard the configured fill value
value = op["fill_value"].as<std::vector<float>>();
}
processors_.push_back(std::make_shared<Cast>("float"));
processors_.push_back(
std::make_shared<PadToSize>(size[1], size[0], value));
}
}
processors_.push_back(std::make_shared<HWC2CHW>());
@@ -102,11 +120,30 @@
bool PPMatting::Preprocess(Mat* mat, FDTensor* output,
std::map<std::string, std::array<int, 2>>* im_info) {
for (size_t i = 0; i < processors_.size(); ++i) {
if (processors_[i]->Name().compare("LimitShort") == 0) {
int input_h = static_cast<int>(mat->Height());
int input_w = static_cast<int>(mat->Width());
auto processor = dynamic_cast<LimitShort*>(processors_[i].get());
int max_short = processor->GetMaxShort();
if (runtime_option.backend != Backend::PDINFER) {
if (input_w != input_h || input_h < max_short || input_w < max_short) {
FDWARNING << "Detected LimitShort processing step in the yaml file, "
"and the input image size does not satisfy it; "
"FastDeploy will resize the input image to ("
<< max_short << ", " << max_short << ")." << std::endl;
Resize::Run(mat, max_short, max_short);
}
}
}
if (!(*(processors_[i].get()))(mat)) {
FDERROR << "Failed to process image data in " << processors_[i]->Name()
<< "." << std::endl;
return false;
}
if (processors_[i]->Name().compare("ResizeByLong") == 0) {
(*im_info)["resize_by_long"] = {static_cast<int>(mat->Height()),
static_cast<int>(mat->Width())};
}
}

// Record output shape of preprocessed image
@@ -135,6 +172,7 @@ bool PPMatting::Postprocess(
// Fetch alpha first and resize it (using OpenCV)
auto iter_ipt = im_info.find("input_shape");
auto iter_out = im_info.find("output_shape");
auto resize_by_long = im_info.find("resize_by_long");
FDASSERT(iter_out != im_info.end() && iter_ipt != im_info.end(),
"Cannot find input_shape or output_shape from im_info.");
int out_h = iter_out->second[0];
@@ -145,7 +183,17 @@
// TODO: reimplement with FDTensor or Mat operations; currently depends on cv::Mat
float* alpha_ptr = static_cast<float*>(alpha_tensor.Data());
cv::Mat alpha_zero_copy_ref(out_h, out_w, CV_32FC1, alpha_ptr);
Mat alpha_resized(alpha_zero_copy_ref); // ref-only, zero copy.
cv::Mat cropped_alpha;
if (resize_by_long != im_info.end()) {
int resize_h = resize_by_long->second[0];
int resize_w = resize_by_long->second[1];
alpha_zero_copy_ref(cv::Rect(0, 0, resize_w, resize_h))
.copyTo(cropped_alpha);
} else {
cropped_alpha = alpha_zero_copy_ref;
}
Mat alpha_resized(cropped_alpha); // ref-only, zero copy.

if ((out_h != ipt_h) || (out_w != ipt_w)) {
// already allocated a new continuous memory after resize.
// cv::resize(alpha_resized, alpha_resized, cv::Size(ipt_w, ipt_h));
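The backend-dependent pre-check added to `PPMatting::Preprocess` above reduces to a small predicate. Here is a Python sketch of that condition; the function and parameter names are illustrative, not the C++ API:

```python
def needs_square_resize(input_w, input_h, max_short, backend="ORT"):
    """Sketch of the LimitShort pre-check: for backends other than Paddle
    Inference (PDINFER), the input must already be square with side
    >= max_short; otherwise it gets resized to (max_short, max_short)."""
    if backend == "PDINFER":
        return False  # Paddle Inference handles the original shape
    return input_w != input_h or input_h < max_short or input_w < max_short
```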
2 changes: 1 addition & 1 deletion csrc/fastdeploy/vision/matting/ppmatting/ppmatting.h
@@ -27,7 +27,7 @@ class FASTDEPLOY_DECL PPMatting : public FastDeployModel {
const RuntimeOption& custom_option = RuntimeOption(),
const Frontend& model_format = Frontend::PADDLE);

std::string ModelName() const { return "PaddleMat"; }
std::string ModelName() const { return "PaddleMatting"; }

virtual bool Predict(cv::Mat* im, MattingResult* result);

1 change: 1 addition & 0 deletions examples/vision/matting/modnet/python/README.md
@@ -33,6 +33,7 @@ python infer.py --model modnet_photographic_portrait_matting.onnx --image mattin
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852116-cf91445b-3a67-45d9-a675-c69fe77c383a.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186851964-4c9086b9-3490-4fcb-82f9-2106c63aa4f3.jpg">
</div>

## MODNet Python接口

```python
7 changes: 3 additions & 4 deletions examples/vision/matting/ppmatting/README.md
@@ -15,19 +15,18 @@

Before deployment, PPMatting needs to be exported as a deployment model. For the export steps, refer to the [Export Model](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) documentation.

Note: do not remove NMS when exporting the model; just export it as usual.

## Download Pretrained Models

For convenient testing, the exported PPMatting models are provided below and can be downloaded and used directly.

The accuracy metrics come from the model descriptions in PPMatting; refer to the PPMatting documentation for details.
The accuracy metrics come from the model descriptions in PPMatting (no accuracy data is provided); refer to the PPMatting documentation for details.


| Model | Parameter Size | Accuracy | Notes |
|:---------------------------------------------------------------- |:----- |:----- | :------ |
| [PPMatting](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz) | 87MB | - |
| [PPMatting](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-1024.tgz) | 87MB | - |
| [PPMatting-512](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz) | 87MB | - |
| [PPMatting-1024](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-1024.tgz) | 87MB | - |



2 changes: 1 addition & 1 deletion examples/vision/matting/ppmatting/cpp/README.md
@@ -27,7 +27,7 @@ wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg

# CPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 0
# GPU inference (TODO: ORT-GPU inference reports an error)
# GPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 1
# TensorRT inference on GPU
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 2
4 changes: 3 additions & 1 deletion examples/vision/matting/ppmatting/cpp/infer.cc
@@ -25,8 +25,9 @@ void CpuInfer(const std::string& model_dir, const std::string& image_file,
auto model_file = model_dir + sep + "model.pdmodel";
auto params_file = model_dir + sep + "model.pdiparams";
auto config_file = model_dir + sep + "deploy.yaml";
auto option = fastdeploy::RuntimeOption();
auto model = fastdeploy::vision::matting::PPMatting(model_file, params_file,
config_file);
config_file, option);
if (!model.Initialized()) {
std::cerr << "Failed to initialize." << std::endl;
return;
@@ -58,6 +59,7 @@ void GpuInfer(const std::string& model_dir, const std::string& image_file,

auto option = fastdeploy::RuntimeOption();
option.UseGpu();
option.UsePaddleBackend();
auto model = fastdeploy::vision::matting::PPMatting(model_file, params_file,
config_file, option);
if (!model.Initialized()) {
2 changes: 1 addition & 1 deletion examples/vision/matting/ppmatting/python/README.md
@@ -19,7 +19,7 @@ wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
# CPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device cpu
# GPU inference (TODO: ORT-GPU inference reports an error)
# GPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device gpu
# TensorRT inference on GPU (note: the first TensorRT run serializes the model, which takes a while; please be patient)
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device gpu --use_trt True
3 changes: 2 additions & 1 deletion examples/vision/matting/ppmatting/python/infer.py
@@ -32,13 +32,14 @@ def parse_arguments():

def build_option(args):
option = fd.RuntimeOption()

if args.device.lower() == "gpu":
option.use_gpu()
option.use_paddle_backend()

if args.use_trt:
option.use_trt_backend()
option.set_trt_input_shape("img", [1, 3, 512, 512])

return option


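The `build_option` branching in `infer.py` above can be summarized as a pure function. This sketch records only the selection logic (the real code calls `option.use_gpu()`, `option.use_paddle_backend()`, and so on, on a `fd.RuntimeOption` object); the function name is an illustrative assumption:

```python
def plan_runtime_option(device, use_trt):
    """Return, in order, the RuntimeOption calls build_option would make:
    Paddle backend on GPU, then TensorRT layered on top when requested."""
    calls = []
    if device.lower() == "gpu":
        calls += ["use_gpu", "use_paddle_backend"]
    if use_trt:
        calls += ["use_trt_backend",
                  "set_trt_input_shape('img', [1, 3, 512, 512])"]
    return calls
```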
1 change: 1 addition & 0 deletions external/paddle_inference.cmake
@@ -46,6 +46,7 @@ else()
set(OMP_LIB "${PADDLEINFERENCE_INSTALL_DIR}/third_party/install/mklml/lib/libiomp5.so")
endif(WIN32)


set(PADDLEINFERENCE_URL_BASE "https://bj.bcebos.com/fastdeploy/third_libs/")
set(PADDLEINFERENCE_VERSION "2.4-dev")
if(WIN32)
