- Fast Tesla-AI Style BEV Model + Visualizer. Built with TensorRT and Unity.
- Note: This uses cameras only. No lidar is needed at inference time; lidar is used for training, not inference.
- Also, this is nowhere close to being complete at the moment.
- Most requirements can be found on the original CUDA-FastBEV repo; however, some information is left out there.
- TensorRT v8.5.1.7
- cuDNN v8.2
- CUDA v11.8
- libprotobuf-dev v3.6.1
- python3.10 (optional)
- Lidar Solution
- Extras (mp4s, models)
- I'm running the model in a WSL2 container running Ubuntu 22.04
- My host machine is running Windows 11 24H2
- The requirements listed above are based precisely on what I have installed (on WSL2)
- Follow through with the original repo first, but use the modified `.sh` files in this repo rather than CUDA-FastBEV's (if you have problems)
- Set your proper Compute Capability at the bottom of `tool/environment.sh` at line 62: `export CUDASM="89"` (change 89 to your GPU's compute capability)
- Make sure you put both the `libraries` and `dependencies` folders (from Lidar Solution) inside the BEV-Visualizer folder
- Do `mkdir code` in your user folder, drag in the TensorRT tar.gz (not the deb), and extract it
- Follow the instructions for CUDA installation on NVIDIA's website
- Download cuDNN, unzip it, and drag its contents into your CUDA folder
- Your workspace should look like this:
```
# $HOME/BEV-Visualizer
nextrix@john:~/BEV-Visualizer$ ls
CMakeLists.txt  README.md  dependencies  example-data  libraries  model  src  tool

# $HOME/code/TensorRT-8.5.1.7
nextrix@john:~/code$ ls
TensorRT-8.5.1.7

# Merged cuDNN with CUDA
nextrix@john:/usr/local/cuda$ ls
DOCS      README  compute-sanitizer  extras  include  libnvvp           nvml  share  targets  version.json
EULA.txt  bin     doc                gds     lib64    nsightee_plugins  nvvm  src    tools
```
- Run `bash tool/build_trt_engine.sh` and wait for it to build both paths
- After they're both built, run `bash tool/run.sh` to start inferencing
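Before running the build script, it can help to export the paths the scripts expect. This is a minimal sketch of my environment setup; the TensorRT location, the `TRT_HOME` variable name, and the SM value are assumptions from my own install, so adjust them for your machine:

```shell
# Environment sketch for the build step (paths and SM value are assumptions).
export TRT_HOME="$HOME/code/TensorRT-8.5.1.7"   # where the TensorRT tarball was extracted
export CUDASM="89"                              # compute capability, e.g. 89 for RTX 40-series
export LD_LIBRARY_PATH="$TRT_HOME/lib:/usr/local/cuda/lib64:${LD_LIBRARY_PATH:-}"
echo "building TensorRT engines for sm_${CUDASM}"
```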
```
# nuScenes
nextrix@john:~/BEV-Visualizer/example-data$ ls
0-FRONT.jpg        2-FRONT_LEFT.jpg  4-BACK_LEFT.jpg   Videos          example-data.pth  valid_c_idx.tensor  y.tensor
1-FRONT_RIGHT.jpg  3-BACK.jpg        5-BACK_RIGHT.jpg  anchors.tensor  images.tensor     x.tensor
nextrix@john:~/BEV-Visualizer/example-data$ ls Videos/
CAM_BACK.mp4  CAM_BACK_LEFT.mp4  CAM_BACK_RIGHT.mp4  CAM_FRONT.mp4  CAM_FRONT_LEFT.mp4  CAM_FRONT_RIGHT.mp4

# model
nextrix@john:~/BEV-Visualizer/model$ ls
resnet18  resnet18int8  resnet18int8head
nextrix@john:~/CUDA-FastBEV/model$ ls resnet18int8
fastbev_post_trt_decode.onnx  fastbev_pre_trt.onnx  fastbev_ptq.pth  build
nextrix@john:~/CUDA-FastBEV/model$ ls resnet18int8/build/
fastbev_post_trt_decode.json  fastbev_post_trt_decode.plan  fastbev_pre_trt.log
fastbev_post_trt_decode.log   fastbev_pre_trt.json          fastbev_pre_trt.plan
```
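After `build_trt_engine.sh` finishes, the `build/` folder should contain the two serialized `.plan` engines shown above. A quick hedged check, where the directory layout is assumed from the listing and `check_engines` is a hypothetical helper, not part of the repo:

```shell
# Sketch: verify both serialized engines exist before running inference.
check_engines() {
  dir="$1"
  missing=0
  for plan in fastbev_pre_trt.plan fastbev_post_trt_decode.plan; do
    [ -f "$dir/$plan" ] || { echo "missing: $dir/$plan"; missing=$((missing + 1)); }
  done
  return "$missing"
}

if check_engines "model/resnet18int8/build"; then
  echo "both engines built"
fi
```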
```
# Libs and Deps
nextrix@john:~/BEV-Visualizer/libraries$ ls
3DSparseConvolution  cuOSD  spconv
nextrix@john:~/CUDA-FastBEV/dependencies$ ls
dlpack  stb

# Tools
nextrix@john:~/BEV-Visualizer/tool$ ls
build_trt_engine.sh  environment.sh  run.sh
```
```
=== Inference Statistics ===
Total frames processed: 404
Average inference time: 10.374 ms
Min inference time: 8.4918 ms
Max inference time: 171.994 ms
Average FPS: 96.3948
```
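A summary like the one above can be rolled up from raw per-frame timings with a one-liner. A minimal awk sketch; the three sample timings piped in here are made up for illustration, not from the real run:

```shell
# Roll per-frame inference times (ms) up into avg/min/max and derived FPS with awk.
stats=$(printf '8.5\n10.4\n171.9\n' | awk '
  { sum += $1; if (NR == 1 || $1 < min) min = $1; if ($1 > max) max = $1 }
  END { avg = sum / NR; printf "avg=%.3f min=%.4f max=%.3f fps=%.2f", avg, min, max, 1000.0 / avg }')
echo "$stats"
```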
- General optimizations, like how content is loaded
- Optimize memory management
- Use cheaper methods for similar quality
- Deploy on a 2022 Tesla Model 3 LR w/ NVIDIA Orin