
Commit 845cf0d

Enhance Engine Class Readability and Maintainability (cyrusbehr#72)
* Lift and shift inline functions to .inl files
* Revert path change for input image
* Update README to include the new project structure
1 parent 5e2fd37 commit 845cf0d

13 files changed (+782, −716 lines)


CMakeLists.txt

Lines changed: 1 addition & 1 deletion
```diff
@@ -27,7 +27,7 @@ find_package(fmt REQUIRED)
 add_library(tensorrt_cpp_api SHARED
         src/engine.cpp)

-target_include_directories(tensorrt_cpp_api PUBLIC ${OpenCV_INCLUDE_DIRS} ${CUDA_INCLUDE_DIRS} ${TensorRT_INCLUDE_DIRS})
+target_include_directories(tensorrt_cpp_api PUBLIC ${OpenCV_INCLUDE_DIRS} ${CUDA_INCLUDE_DIRS} ${TensorRT_INCLUDE_DIRS} include)
 target_link_libraries(tensorrt_cpp_api PUBLIC ${OpenCV_LIBS} ${CUDA_LIBRARIES} ${CMAKE_THREAD_LIBS_INIT} ${TensorRT_LIBRARIES} fmt::fmt)

 add_executable(run_inference_benchmark src/main.cpp)
```
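The one functional change here is adding `include` to the library's PUBLIC include directories, so any target that links `tensorrt_cpp_api` can include the public headers (such as the new `include/Int8Calibrator.h` below) by name. A minimal sketch of a hypothetical consumer translation unit, assuming it links the library and has TensorRT available:

```cpp
// Hypothetical consumer file (not part of this commit). Because `include`
// is a PUBLIC include directory of tensorrt_cpp_api, the compiler resolves
// this to include/Int8Calibrator.h with no extra -I flags on the consumer.
#include "Int8Calibrator.h"

int main() {
    // A real consumer would construct the calibrator and build an engine here.
    return 0;
}
```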

README.md

Lines changed: 22 additions & 1 deletion
````diff
@@ -134,8 +134,29 @@ Benchmarks run on RTX 3050 Ti Laptop GPU, 11th Gen Intel(R) Core(TM) i9-11900H @
 Wondering how to integrate this library into your project? Or perhaps how to read the outputs of the YoloV8 model to extract meaningful information?
 If so, check out my two latest projects, [YOLOv8-TensorRT-CPP](https://github.com/cyrusbehr/YOLOv8-TensorRT-CPP) and [YOLOv9-TensorRT-CPP](https://github.com/cyrusbehr/YOLOv9-TensorRT-CPP), which demonstrate how to use the TensorRT C++ API to run YoloV8/9 inference (supports object detection, semantic segmentation, and body pose estimation). They make use of this project in the backend!

+### Project Structure
+```sh
+project-root/
+├── include/
+│   ├── engine/
+│   │   ├── EngineRunInference.inl
+│   │   ├── EngineUtilities.inl
+│   │   └── EngineBuildLoadNetwork.inl
+│   ├── util/...
+│   ├── ...
+├── src/
+│   ├── ...
+│   ├── engine.cpp
+│   ├── engine.h
+│   └── main.cpp
+├── CMakeLists.txt
+└── README.md
+```
+
 ### Understanding the Code
-- The bulk of the implementation is in `src/engine.cpp`. I have written lots of comments all throughout the code which should make it easy to understand what is going on.
+- The bulk of the implementation is located in `include/engine`. I have written lots of comments all throughout the code which should make it easy to understand what is going on.
+- The inference code is located in `include/engine/EngineRunInference.inl`.
+- The building and loading of the TensorRT engine file is located in `include/engine/EngineBuildLoadNetwork.inl`.
 - You can also check out my [deep-dive video](https://youtu.be/Z0n5aLmcRHQ) in which I explain every line of code.

 ### How to Debug
````
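The commit title's "lift and shift inline functions to .inl files" refers to the common .inl pattern: declarations stay in the header, and the inline/template member definitions move to .inl files that the header `#include`s at the bottom. A minimal self-contained sketch of the pattern (simplified, hypothetical names; in the real project the definitions live in `include/engine/*.inl`):

```cpp
// Sketch of the .inl split (names simplified). Declarations live in the
// header; definitions are moved out so engine.h stays short and scannable.
#include <iostream>

template <typename T>
class Engine {
public:
    bool runInference(const T &input, T &output); // defined "in the .inl"
};

// --- In the real project, everything below would live in
// --- include/engine/EngineRunInference.inl and be pulled in via
// --- #include "engine/EngineRunInference.inl" at the end of the header.
template <typename T>
bool Engine<T>::runInference(const T &input, T &output) {
    output = input; // placeholder body; the real file runs TensorRT inference
    return true;
}

int main() {
    Engine<int> engine;
    int out = 0;
    engine.runInference(42, out);
    std::cout << out << "\n";
}
```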

include/Int8Calibrator.h

Lines changed: 33 additions & 0 deletions
```cpp
#pragma once

#include <array>    // std::array (required by the declarations below)
#include <cstddef>  // std::size_t
#include <string>   // std::string
#include <vector>   // std::vector

#include "NvInfer.h"

// Class used for int8 calibration
class Int8EntropyCalibrator2 : public nvinfer1::IInt8EntropyCalibrator2 {
public:
    Int8EntropyCalibrator2(int32_t batchSize, int32_t inputW, int32_t inputH, const std::string &calibDataDirPath,
                           const std::string &calibTableName, const std::string &inputBlobName,
                           const std::array<float, 3> &subVals = {0.f, 0.f, 0.f},
                           const std::array<float, 3> &divVals = {1.f, 1.f, 1.f},
                           bool normalize = true, bool readCache = true);
    virtual ~Int8EntropyCalibrator2();

    // Abstract base class methods which must be implemented
    int32_t getBatchSize() const noexcept override;
    bool getBatch(void *bindings[], char const *names[], int32_t nbBindings) noexcept override;
    void const *readCalibrationCache(std::size_t &length) noexcept override;
    void writeCalibrationCache(void const *ptr, std::size_t length) noexcept override;

private:
    const int32_t m_batchSize;
    const int32_t m_inputW;
    const int32_t m_inputH;
    int32_t m_imgIdx;
    std::vector<std::string> m_imgPaths;
    size_t m_inputCount;
    const std::string m_calibTableName;
    const std::string m_inputBlobName;
    const std::array<float, 3> m_subVals;
    const std::array<float, 3> m_divVals;
    const bool m_normalize;
    const bool m_readCache;
    void *m_deviceInput;
    std::vector<char> m_calibCache;
};
```
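For context, here is a minimal sketch (not from this commit) of how an `IInt8EntropyCalibrator2` like the one above is typically attached to a TensorRT build: the builder config gets the INT8 flag plus a pointer to the calibrator, which must outlive the engine build. The batch size, dimensions, paths, and blob name below are hypothetical.

```cpp
#include "NvInfer.h"
#include "Int8Calibrator.h"

// Enable int8 quantization on a TensorRT builder config using the
// calibrator declared above. All constructor arguments are illustrative.
void enableInt8(nvinfer1::IBuilderConfig &config) {
    static Int8EntropyCalibrator2 calibrator(
        /*batchSize=*/1, /*inputW=*/640, /*inputH=*/640,
        /*calibDataDirPath=*/"calibration_images/",
        /*calibTableName=*/"calibration.table",
        /*inputBlobName=*/"input");
    config.setFlag(nvinfer1::BuilderFlag::kINT8);
    config.setInt8Calibrator(&calibrator); // must outlive buildSerializedNetwork
}
```

Declaring the calibrator `static` is one simple way to guarantee it outlives the build; in the library itself the calibrator would more likely be a member owned for the duration of engine construction.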
