English | 中文简体
A simple, efficient, easy-to-use NVIDIA TensorRT wrapper for CNN inference, with C++ and Python APIs, supporting Caffe, UFF and ONNX model formats. With tiny-tensorrt you can deploy your model in just a few lines of code!
// create the engine
trt.CreateEngine(onnxModelpath, engineFile, customOutput, maxBatchSize, mode, calibratorData);
// transfer your input data to the tensorrt engine
trt.DataTransfer(input, 0, true);
// run inference!!!
trt.Forward();
// retrieve the network output
trt.DataTransfer(output, outputIndex, false); // you can get outputIndex in the CreateEngine phase
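For context, here is a minimal end-to-end sketch built around the calls above. It assumes the Trt class comes from Trt.h and uses the same method signatures as the snippet; the model path, buffer sizes, output index and precision-mode value are placeholders you would replace with your own (check Trt.h for the real mode values).

```cpp
// Minimal usage sketch (placeholder paths, sizes and mode value).
#include <string>
#include <vector>
#include "Trt.h"

int main() {
    Trt trt;

    std::string onnxModelpath = "model.onnx";        // your ONNX model (placeholder)
    std::string engineFile    = "model.engine";      // serialized engine file to create/reuse
    std::vector<std::string> customOutput;           // empty = use the model's own outputs
    std::vector<std::vector<float>> calibratorData;  // only needed for INT8 calibration
    int maxBatchSize = 1;
    int mode = 0;                                    // precision mode, e.g. FP32 (see Trt.h)

    trt.CreateEngine(onnxModelpath, engineFile, customOutput, maxBatchSize, mode, calibratorData);

    std::vector<float> input(1 * 3 * 224 * 224);     // fill with your preprocessed data
    std::vector<float> output(1000);                 // size depends on your network
    int outputIndex = 1;                             // reported during the CreateEngine phase

    trt.DataTransfer(input, 0, true);                // host -> device
    trt.Forward();                                   // run inference
    trt.DataTransfer(output, outputIndex, false);    // device -> host

    return 0;
}
```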
- Support TensorRT 7 now --- 2019-12-25 🎄🎄🎄
- Custom plugin tutorial and well-commented sample! --- 2019-12-11 🔥🔥🔥
- Custom ONNX model output node --- 2019.10.18
- Upgraded to TensorRT 6.0.1.5 --- 2019.9.29
- Support ONNX, Caffe and TensorFlow models
- Support for more models and layers -- work in progress
- PReLU and upsample plugins
- Engine serialization and deserialization
- INT8 support for Caffe models
- Python API support
- Set device
CUDA 10.0+
TensorRT 6 or 7
For the Python API, Python 2.x/3.x and numpy are needed
Make sure you have installed the dependencies listed above. If you are familiar with Docker, you can use the official Docker image.
# clone project and submodule
git clone --recurse-submodules -j8 https://github.com/zerollzeng/tiny-tensorrt.git
cd tiny-tensorrt
mkdir build && cd build && cmake .. && make
Then you can integrate it into your own project with libtinytrt.so and Trt.h; for the Python module, you get pytrt.so.
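As a rough sketch of that integration step, a consuming CMake project might look like the following. The target name and paths are placeholders for illustration, not part of tiny-tensorrt itself; adjust them to your project layout.

```cmake
# Minimal consumer CMakeLists.txt sketch (paths are placeholders).
cmake_minimum_required(VERSION 3.10)
project(my_app)

# Point this at wherever you cloned and built tiny-tensorrt.
set(TINY_TENSORRT_DIR /path/to/tiny-tensorrt)

add_executable(my_app main.cpp)
target_include_directories(my_app PRIVATE ${TINY_TENSORRT_DIR})                    # for Trt.h
target_link_libraries(my_app PRIVATE ${TINY_TENSORRT_DIR}/build/libtinytrt.so)     # built above
# Depending on your setup you may also need to link the TensorRT and CUDA libraries.
```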
Custom Plugin Tutorial (En-Ch)
If you want some examples of using tiny-tensorrt, you can refer to tensorrt-zoo
- upsample with custom scale, tested with yolov3.
- yolo-det, the last layer of yolov3, which sums the outputs of the three scales and generates the final result for NMS; tested with yolov3.
- PReLU, tested with openpose and mtcnn.
For the third-party modules and TensorRT, you may need to follow their licenses
For the part I wrote, you can do anything you want