Yui-Arthur/tensorRT-with-nano

Introduction

This project runs models on the Jetson Nano with TensorRT.

The Lab flower & yolov8 folders each contain two parts: one for training the model on Colab, the other for inference on the Nano.

The homework directions contain TODO parts that need to be completed by yourself.

Nano requirements

Install pycuda

sudo su
bash script/pycuda_install.sh

Install onnxruntime

bash script/onnxruntime_install.sh
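Once onnxruntime is installed, a quick sanity check is to run one inference pass and see whether the CUDA execution provider is picked up. The helper below is an illustrative sketch, not part of this repository; the name `run_onnx` is an assumption.

```python
def run_onnx(model_path, input_array):
    """Run one inference pass, preferring CUDA when available (illustrative helper)."""
    import onnxruntime as ort  # installed by script/onnxruntime_install.sh

    # ONNX Runtime falls back to the CPU provider if CUDA is unavailable
    session = ort.InferenceSession(
        model_path,
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
    input_name = session.get_inputs()[0].name
    return session.run(None, {input_name: input_array})
```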

Export ONNX => TensorRT engine

onnx2trt.py can export your ONNX model to a TensorRT engine.

# example
sudo python3 onnx2trt.py --onnx {model_path}
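What a conversion script like this does can be sketched with the TensorRT Python API. This is a minimal sketch, not the repository's actual implementation: the function name `build_engine`, its parameters, and the workspace size are assumptions, and it uses the older `max_workspace_size` / `build_engine` API found in the TensorRT releases shipped for the Nano.

```python
def build_engine(onnx_path, engine_path, fp16=False, workspace_mib=256):
    """Parse an ONNX model and serialize a TensorRT engine (illustrative helper)."""
    import tensorrt as trt  # TensorRT Python bindings, pre-installed with JetPack

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    # ONNX models are parsed as explicit-batch networks
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("failed to parse the ONNX model")

    config = builder.create_builder_config()
    # older-style workspace setting, as used by the Nano's TensorRT releases
    config.max_workspace_size = workspace_mib << 20
    if fp16:
        config.set_flag(trt.BuilderFlag.FP16)

    engine = builder.build_engine(network, config)
    with open(engine_path, "wb") as f:
        f.write(engine.serialize())
```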

trtexec tool (official)

# convert onnx to a tensorRT engine
/usr/src/tensorrt/bin/trtexec --onnx={onnx_model} --saveEngine={engine_path}
# check model performance
/usr/src/tensorrt/bin/trtexec --loadEngine={engine_path}  --warmUp=5000
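trtexec reports throughput and mean latency in milliseconds; converting a mean latency to FPS is just the reciprocal. A small sketch (the latency value here is illustrative, not a measured result):

```python
def fps_from_latency_ms(mean_latency_ms):
    """Convert a mean per-frame latency in milliseconds into frames per second."""
    return 1000.0 / mean_latency_ms

# a mean latency of about 29.6 ms corresponds to roughly 33.8 FPS
print(round(fps_from_latency_ms(29.6), 1))
```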

Classify

Naki Teacher Lesson: Audio Classify

Object Detection


YOLOv8 Inference Speed Benchmark on Nano (MAXN mode)

|             | ONNX Runtime CPU | ONNX Runtime CUDA | TensorRT FP32 | TensorRT FP16 |
|-------------|------------------|-------------------|---------------|---------------|
| 320*320 FPS | 4.60             | 23.4              | 33.8          | 36.5          |
| 640*640 FPS | 1.26             | 8.56              | 11.6          | 15.5          |
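To put the table in perspective, the relative speedups over the ONNX Runtime CPU baseline can be computed directly from the 320*320 row:

```python
# FPS figures copied from the 320*320 row of the table above
fps = {
    "ONNX Runtime CPU": 4.60,
    "ONNX Runtime CUDA": 23.4,
    "TensorRT FP32": 33.8,
    "TensorRT FP16": 36.5,
}

baseline = fps["ONNX Runtime CPU"]
for backend, value in fps.items():
    print(f"{backend}: {value / baseline:.1f}x over CPU")
```

At this resolution, TensorRT FP16 comes out at roughly an 8x improvement over CPU inference.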
