
Update C++ interfaces of TorchVision to 0.9.0+ (#136)
* Update C++ interfaces of TorchVision to 0.9.0+

* add bgr2rgb

* delete the debug line

* Update README.md

Co-authored-by: Zhiqiang Wang <zhiqwang@outlook.com>
xiguadong and zhiqwang authored Jul 13, 2021
1 parent 82d6afb commit ec5464d
Showing 2 changed files with 10 additions and 19 deletions.
14 changes: 7 additions & 7 deletions deployment/README.md
@@ -1,14 +1,14 @@
# LibTorch Inference

-A LibTorch inference implementation of yolov5. Both GPU and CPU are supported.
+A LibTorch inference implementation of yolort. Both GPU and CPU are supported.

## Dependencies

- Ubuntu 18.04
-- CUDA 10.2
-- LibTorch 1.7.0+
-- TorchVision 0.8.1+
+- LibTorch 1.8.0 / 1.9.0
+- TorchVision 0.9.0 / 0.10.0
- OpenCV 3.4+
+- CUDA 10.2 [Optional]

## Usage

@@ -24,16 +24,16 @@ A LibTorch inference implementation of yolov5. Both GPU and CPU are supported.
```bash
git clone https://github.com/pytorch/vision.git
cd vision
-git checkout release/0.8.0 # replace to `nightly` branch instead if you are using the nightly version
+git checkout release/0.9  # switch to the `nightly` branch instead if you are using the nightly version
mkdir build && cd build
-cmake .. -DTorch_DIR=$TORCH_PATH/share/cmake/Torch
+cmake .. -DTorch_DIR=$TORCH_PATH/share/cmake/Torch  # Set `-DWITH_CUDA=ON` if you're using a GPU
make -j4
sudo make install
```

1. Generate `TorchScript` model

-Unlike [ultralytics's](https://github.com/ultralytics/yolov5/blob/master/models/export.py) trace (`torch.jit.trace`) mechanism, I'm using `torch.jit.script` to jit trace the YOLO models which containing the whole pre-processing (especially using the `GeneralizedRCNNTransform` ops) and post-processing (especially with the `nms` ops) procedures, so you don't need to rewrite manually the cpp codes of pre-processing and post-processing.
+Unlike [ultralytics's](https://github.com/ultralytics/yolov5/blob/master/models/export.py) trace (`torch.jit.trace`) mechanism, we use `torch.jit.script` to script the YOLOv5 models, which contain the whole pre-processing (especially the `letterbox` ops) and post-processing (especially the `nms` ops), so you don't have to rewrite the C++ pre-processing and post-processing code by hand.
```bash
git clone https://github.com/zhiqwang/yolov5-rt-stack.git
```
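
For orientation, here is a minimal editorial sketch (not part of the original README) of what the C++ consumer of that scripted model looks like: because letterbox pre-processing and `nms` post-processing are baked into the TorchScript graph, the C++ side only loads the module and calls `forward` on a raw RGB float image. The checkpoint name `yolov5s.torchscript.pt` is a placeholder; the authoritative version is `deployment/src/main.cpp` in the diff below.

```cpp
// Minimal sketch: run a yolort TorchScript module from C++ without any manual
// pre- or post-processing. Placeholder checkpoint name; see main.cpp below for
// the full command-line version.
#include <iostream>
#include <vector>

#include <torch/script.h>
#include <torch/torch.h>

#include <torchvision/vision.h>  // as in main.cpp below: the scripted graph uses torchvision ops such as nms

int main() {
  // Load the scripted model; letterbox and nms already live inside forward().
  torch::jit::script::Module module = torch::jit::load("yolov5s.torchscript.pt");
  module.to(torch::kCPU);
  module.eval();

  // A raw RGB float image (C, H, W) scaled to [0, 1] is all the model needs.
  std::vector<torch::Tensor> images;
  images.push_back(torch::rand({3, 416, 320}));

  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(images);

  torch::NoGradGuard no_grad;
  auto output = module.forward(inputs);
  std::cout << "Forward pass finished, output kind: " << output.tagKind() << std::endl;
  return 0;
}
```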
15 changes: 3 additions & 12 deletions deployment/src/main.cpp
@@ -12,8 +12,7 @@
#include <torch/torch.h>

#include <torchvision/vision.h>
-#include <torchvision/ROIPool.h>
-#include <torchvision/nms.h>
+#include <torchvision/ops/nms.h>

std::vector<std::string> LoadNames(const std::string& path) {
// load class names
@@ -36,6 +35,7 @@ std::vector<std::string> LoadNames(const std::string& path) {
torch::Tensor ReadImage(const std::string& loc) {
// Read Image from the location of image
cv::Mat img = cv::imread(loc);
+cv::cvtColor(img, img, cv::COLOR_BGR2RGB);
img.convertTo(img, CV_32FC3, 1.0f / 255.0f); // normalization 1/255

// Convert image to tensor
@@ -134,10 +134,6 @@ int main(int argc, const char* argv[]) {
std::string weights = opt["checkpoint"].as<std::string>();
module = torch::jit::load(weights);
module.to(device_type);
-if (is_gpu) {
-  module.to(torch::kHalf);
-}

module.eval();
std::cout << ">>> Model loaded" << std::endl;
} catch (const torch::Error& e) {
@@ -159,9 +155,7 @@ int main(int argc, const char* argv[]) {
// Run once to warm up
std::cout << ">>> Run once on empty image" << std::endl;
auto img_dumy = torch::rand({3, 416, 320}, options);
-if (is_gpu) {
-  img_dumy = img_dumy.to(torch::kHalf);
-}

images.push_back(img_dumy);
inputs.push_back(images);

@@ -176,9 +170,6 @@ int main(int argc, const char* argv[]) {
// Read image
auto img = ReadImage(image_path);
img = img.to(device_type);
-if (is_gpu) {
-  img = img.to(torch::kHalf);
-}

images.push_back(img);
inputs.push_back(images);
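
As an editorial aside on the `ReadImage` hunk above (which the diff view truncates): a generic sketch of how the normalized RGB `cv::Mat` is usually turned into the CHW float tensor the model expects is shown below. The helper name `MatToTensor` and the test image path are made up for illustration; this is not the repository's exact implementation.

```cpp
// Generic OpenCV -> LibTorch conversion matching the BGR2RGB and 1/255 steps
// shown in the ReadImage hunk above; illustrative only.
#include <iostream>
#include <stdexcept>
#include <string>

#include <opencv2/opencv.hpp>
#include <torch/torch.h>

torch::Tensor MatToTensor(const std::string& path) {
  cv::Mat img = cv::imread(path);  // OpenCV loads images as BGR
  if (img.empty()) {
    throw std::runtime_error("failed to read image: " + path);
  }
  cv::cvtColor(img, img, cv::COLOR_BGR2RGB);    // match the model's RGB input
  img.convertTo(img, CV_32FC3, 1.0f / 255.0f);  // scale pixel values to [0, 1]

  // Wrap the HWC float buffer, clone so the tensor owns its memory after
  // `img` goes out of scope, then permute to CHW.
  auto tensor = torch::from_blob(img.data, {img.rows, img.cols, 3}, torch::kFloat32).clone();
  return tensor.permute({2, 0, 1}).contiguous();
}

int main() {
  torch::Tensor t = MatToTensor("bus.jpg");  // placeholder test image path
  std::cout << "tensor shape: " << t.sizes() << std::endl;
  return 0;
}
```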
