Model | Input Size | Speed (FPS) | F1 Score (%) | PyTorch Model | TensorRT Engine |
---|---|---|---|---|---|
YOLOv5m (baseline) | 288×192 | 20.0 | 81.34 | Download passwd: duap | Download |
SE-YOLOv5m+NST+ROI+SSVM (CDNet) | 288×192 | 33.1 | 94.72 | Download passwd: 1cpp | Download |
**Notes**
- Detection speed was measured on a Jetson Nano (4 GB) with JetPack 4.4, CUDA 10.2, and TensorRT 7.1.3.
- We also measured GPU inference time on an RTX 3080 with TensorRT: roughly 323 FPS, i.e. 3.1 ms per image.
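The FPS and per-image latency figures are reciprocals of each other, which is a quick way to sanity-check the numbers above:

```python
# Convert per-image latency to throughput (FPS) and back.
# 3.1 ms per image on the RTX 3080, per the note above.
latency_ms = 3.1
fps = 1000.0 / latency_ms      # images per second
print(round(fps))              # -> 323

# Round-trip: throughput back to per-image latency
print(round(1000.0 / fps, 1))  # -> 3.1 (ms)
```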
- Note that a TensorRT engine is optimized for a specific TensorRT version, CUDA version, and hardware, so the engine downloaded from this page can only be used on a Jetson Nano with the matching packages installed.
- If you want to export a TensorRT engine from the PyTorch weights for your own hardware and packages, please refer to here.
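A common export path, shown here only as a hedged sketch rather than this repo's exact procedure, is to convert the PyTorch weights to ONNX first and then build the engine with `trtexec` (shipped with TensorRT) directly on the device it will run on, since engines are not portable across TensorRT/CUDA versions or GPUs. The file names below are illustrative:

```shell
# Build a TensorRT engine on the target device from an ONNX export.
# "cdnet.onnx" is a placeholder for your exported model file.
trtexec --onnx=cdnet.onnx \
        --saveEngine=cdnet.engine \
        --fp16   # enable FP16 precision where the hardware supports it
```

Building on the target device is what makes the resulting engine valid for that exact TensorRT/CUDA/hardware combination, which is why the pre-built engines above only work on a Jetson Nano with the listed packages.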