DEIMv2_CPP

[DEIMv2] Real-Time Object Detection Meets DINOv3: C++ and ONNX version

DEIMv2 (https://github.com/Intellindust-AI-Lab/DEIMv2) is an evolution of the DEIM framework that leverages the rich features of DINOv3. The method comes in a range of model sizes, from an ultra-light version up to S, M, L, and X, to suit a wide variety of scenarios. Across these variants, DEIMv2 achieves state-of-the-art performance, with the S-sized model notably surpassing 50 AP on the challenging COCO benchmark.

We ported the original Python ONNX pipeline to a Windows C++ ONNX version.

Roadmap

Complete the adaptation for the Huawei Ascend 310B and 910C platforms (progress is described in the 'To do next' section at the end).

Steps

1. Follow the Setup guide of DEIMv2:
```bash
conda create -n deimv2 python=3.11 -y
conda activate deimv2
pip install -r requirements.txt
```
2. Then deploy the model:
```bash
pip install onnx onnxsim
python tools/deployment/export_onnx.py --check -c configs/deimv2/deimv2_dinov3_${model}_coco.yml -r model.pth
```
3. The model weights can be downloaded from the DEIMv2 and DINOv3 websites and placed at the corresponding file paths. We use the medium COCO model for the demonstration.
4. Export the ONNX model:
```bash
pip install opencv-python
pip install onnxruntime   # or onnxruntime-gpu for the GPU version
python tools/deployment/export_onnx.py --check -c configs/deimv2/deimv2_dinov3_m_coco.yml -r deimv2_dinov3_m_coco.pth
```
5. Python ONNX evaluation. Run the demo with the Python ONNX version:
```bash
python tools/inference/onnx_inf.py --onnx deimv2_dinov3_m_coco.onnx --input image.jpg
```
6. C++ ONNX evaluation. Using the code in this repository, we build with VS2019 and ONNX Runtime 1.18.1. The CPU version runs on an Intel Core Ultra 9 185H; a minimal inference sketch follows this list.
7. C++ GPU-accelerated ONNX evaluation. Enable the two lines for CUDA execution (sketched after the inference example below).
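For reference, here is a minimal sketch of the C++ CPU inference path using the ONNX Runtime C++ API. This is not the repository's exact code: the input names and shapes (images: 1x3x640x640 float32, orig_target_sizes: 1x2 int64) are taken from the ATC command later in this README, while the [0,1] RGB preprocessing and the width/height ordering of orig_target_sizes are assumptions.

```cpp
// Build against ONNX Runtime 1.18.x and OpenCV (VS2019, x64).
#include <onnxruntime_cxx_api.h>
#include <opencv2/opencv.hpp>
#include <array>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    // Preprocess (assumed): resize to 640x640, BGR->RGB, scale to [0,1], NCHW layout.
    cv::Mat img = cv::imread("image.jpg");
    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0 / 255.0, cv::Size(640, 640),
                                          cv::Scalar(), /*swapRB=*/true, /*crop=*/false);

    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "deimv2");
    Ort::SessionOptions opts;
    Ort::Session session(env, L"deimv2_dinov3_m_coco.onnx", opts);  // wide path on Windows

    // Input names/shapes as listed in the ATC command below.
    std::array<int64_t, 4> imgShape{1, 3, 640, 640};
    std::array<int64_t, 2> szShape{1, 2};
    std::vector<int64_t> origSize{img.cols, img.rows};  // assumed order: width, height

    Ort::MemoryInfo mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    std::array<Ort::Value, 2> inputs{
        Ort::Value::CreateTensor<float>(mem, reinterpret_cast<float*>(blob.data),
                                        blob.total(), imgShape.data(), imgShape.size()),
        Ort::Value::CreateTensor<int64_t>(mem, origSize.data(), origSize.size(),
                                          szShape.data(), szShape.size())};
    const char* inNames[] = {"images", "orig_target_sizes"};

    // Query output names from the model rather than hard-coding them.
    Ort::AllocatorWithDefaultOptions alloc;
    std::vector<Ort::AllocatedStringPtr> holders;
    std::vector<const char*> outNames;
    for (size_t i = 0; i < session.GetOutputCount(); ++i) {
        holders.emplace_back(session.GetOutputNameAllocated(i, alloc));
        outNames.push_back(holders.back().get());
    }

    auto outputs = session.Run(Ort::RunOptions{nullptr}, inNames, inputs.data(),
                               inputs.size(), outNames.data(), outNames.size());
    std::cout << "inference ok, " << outputs.size() << " output tensors" << std::endl;
    return 0;
}
```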

The GPU version runs on a GTX 1060 with an Intel i9-13900KF.
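The two lines referred to in step 7 are presumably the ones that attach the CUDA execution provider to the session options before the session is created. A sketch, assuming the GPU build of ONNX Runtime:

```cpp
// Enable CUDA before constructing Ort::Session (requires a CUDA-enabled ONNX Runtime build).
OrtCUDAProviderOptions cudaOptions{};            // defaults: device_id = 0
opts.AppendExecutionProvider_CUDA(cudaOptions);  // 'opts' is the Ort::SessionOptions above
```

If no CUDA-capable device or provider library is found, session creation throws, so it is common to guard these lines behind a try/catch or a build flag.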

PS.

1. Due to GitHub's file size limit, the ONNX model can be downloaded from this Google Drive link: https://drive.google.com/file/d/1nPKDHrotusQ748O1cQXJfi5wdShq6bKp/view?usp=drive_link
2. Step 4 consumes a lot of system memory; you can configure additional virtual memory in Windows to cover it.

To do next

We are adapting the code to the domestic Ascend cards. The code and the ONNX-to-OM model conversion are complete, but there are issues running on the Ascend 310B4 inference card, and we are currently working with Huawei to troubleshoot them.

The ATC conversion command, following the 910C/310C conventions, is:

```bash
atc --model=deimv2_dinov3_m_coco.onnx \
    --framework=5 \
    --output=ascend_model \
    --soc_version=Ascend310B4 \
    --input_shape="images:1,3,640,640;orig_target_sizes:1,2" \
    --input_format=NCHW \
    --insert_op_conf=aipp.cfg \
    --optypelist_for_implmode="Abs" \
    --op_select_implmode=high_precision \
    --precision_mode=force_fp16 \
    --disable_reuse_memory=1 \
    --output_type=FP16
```

However, running it still fails, and we are communicating with the Huawei team.
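While that is being investigated, a useful first check is whether the OM file loads at all through AscendCL. A minimal smoke-test sketch, assuming the CANN toolkit is installed and ascend_model.om is the file produced by the ATC command above:

```cpp
// Smoke test: initialize AscendCL, load the OM model, then tear everything down.
#include <cstdio>
#include "acl/acl.h"

int main() {
    if (aclInit(nullptr) != ACL_SUCCESS) { std::printf("aclInit failed\n"); return 1; }
    if (aclrtSetDevice(0) != ACL_SUCCESS) { std::printf("aclrtSetDevice failed\n"); return 1; }

    uint32_t modelId = 0;
    aclError ret = aclmdlLoadFromFile("ascend_model.om", &modelId);
    std::printf("aclmdlLoadFromFile returned %d (0 = ACL_SUCCESS)\n", ret);

    if (ret == ACL_SUCCESS) { aclmdlUnload(modelId); }
    aclrtResetDevice(0);
    aclFinalize();
    return ret == ACL_SUCCESS ? 0 : 1;
}
```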


Thanks

Our work is built upon DEIMv2 and DINOv3. Thanks for their great work!
