yolov5 to onnx model with nms #159
Actually, in the notebook https://github.com/zhiqwang/yolov5-rt-stack/blob/master/notebooks/export-onnx-inference-onnxruntime.ipynb, the exported ONNX model already contains the NMS post-processing.
@zhiqwang

```python
from typing import Dict, List

import torch
import torchvision

# xywh2xyxy_torch converts boxes from (cx, cy, w, h) to (x1, y1, x2, y2)

class Warp_nms(torch.nn.Module):
    def __init__(self, score_thresh, nms_thresh, detection_per_img):
        super().__init__()
        self.score_thresh = score_thresh
        self.nms_thresh = nms_thresh
        self.detection_per_img = detection_per_img

    def forward(self, dump_rois):
        detections: List[Dict[str, torch.Tensor]] = []
        xc = dump_rois[:, 4] > self.score_thresh  # candidates above objectness threshold
        x = dump_rois[xc]
        x[:, 5:] *= x[:, 4:5]  # conf = obj_conf * cls_conf
        box = xywh2xyxy_torch(x[:, :4])
        conf, j = x[:, 5:].max(1, keepdim=True)
        x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > self.score_thresh]
        _, index = x[:, 4].sort(descending=True)
        x = x[index][:30000]
        # Class-aware NMS: offset boxes by class id so different classes never overlap
        c = x[:, 5:6] * 4096
        boxes, scores = x[:, :4] + c, x[:, 4]
        i = torchvision.ops.nms(boxes, scores, self.nms_thresh)  # NMS
        detections.append({'dets': x[i[:self.detection_per_img]]})
        return detections
```
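The helper `xywh2xyxy_torch` referenced above is not shown in this thread. As a minimal sketch of what it presumably does (written in plain Python for clarity; the real helper operates on `(N, 4)` torch tensors, and the name/signature here are assumptions), it converts boxes from center format `(cx, cy, w, h)` to corner format `(x1, y1, x2, y2)`:

```python
# Illustrative stand-in for xywh2xyxy_torch, operating on lists of boxes.
def xywh2xyxy(boxes):
    """Convert boxes from (cx, cy, w, h) to (x1, y1, x2, y2)."""
    return [
        [cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2]
        for cx, cy, w, h in boxes
    ]
```

For example, a box centered at (1, 1) with width and height 2 becomes the corner pair (0, 0)–(2, 2).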
BTW, the difference between …
Dear @zhiqwang,

```python
class Warp_nms(torch.nn.Module):
    def __init__(self, score_thresh, nms_thresh, detection_per_img):
        super().__init__()
        self.score_thresh = score_thresh
        self.nms_thresh = nms_thresh
        self.detection_per_img = detection_per_img

    def forward(self, dump_rois):
        detections: List[Dict[str, torch.Tensor]] = []
        xc = dump_rois[:, 4] > self.score_thresh
        x = dump_rois[xc]
        x[:, 5:] *= x[:, 4:5]
        box = xywh2xyxy_torch(x[:, :4])
        conf, j = x[:, 5:].max(1, keepdim=True)
        # Filter candidates for batched_nms
        mask = conf.view(-1) > self.score_thresh
        boxes = box[mask]
        classes = j.float().view(-1)[mask]
        scores = conf.view(-1)[mask]
        _, index = scores.sort(descending=True)
        boxes = boxes[index][:30000].view(-1, 4)
        classes = classes[index][:30000].view(-1)
        scores = scores[index][:30000].view(-1)
        # Batched (class-aware) NMS
        i = torchvision.ops.batched_nms(boxes, scores, classes, self.nms_thresh)
        keep = i[:self.detection_per_img]
        # Gather from the sorted tensors (not from x, whose rows are unsorted)
        dets = torch.cat((boxes[keep], scores[keep, None], classes[keep, None]), 1)
        detections.append({'dets': dets})
        return detections
```

During the export process, I met 2 warnings. For the second warning, I had used opset 12. After exporting successfully, the error is still the same as in my previous try.
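The `c = x[:, 5:6] * 4096` line in the first snippet implements the same class-aware behavior as `batched_nms`: shifting each box by `class_id * offset` guarantees that boxes of different classes can never overlap, so a single plain NMS call suffices. A pure-Python sketch of the idea (the `iou` and `nms` helpers here are illustrative stand-ins for `torchvision.ops.nms`, not library code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) form."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh):
    """Greedy class-agnostic NMS; returns kept indices, highest score first."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thresh]
    return keep

def nms_with_class_offset(boxes, scores, classes, thresh, offset=4096):
    """Shift each box by class_id * offset so boxes of different classes
    never overlap; plain NMS then behaves like a batched (per-class) NMS."""
    shifted = [[c + cls * offset for c in b] for b, cls in zip(boxes, classes)]
    return nms(shifted, scores, thresh)
```

With two identical boxes of different classes, both survive; with the same class, only the higher-scoring one does.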
You can also check the notebook https://github.com/zhiqwang/yolov5-rt-stack/blob/master/notebooks/how-to-align-with-ultralytics-yolov5.ipynb as a reference. And contributions to combine the … are welcome.
They represent the three dynamic outputs of the model: scores, labels and boxes.
@zhiqwang I know it's just a notification, but do you know how to get rid of this warning?
Hi @trungpham2606, I guess that you can use dynamic shapes as something like below (via the `dynamic_axes` parameter):

```python
torch.onnx.export(
    model,
    (images,),
    export_onnx_name,
    do_constant_folding=True,
    opset_version=_onnx_opset_version,
    dynamic_axes={"images_tensors": [0, 1, 2], "outputs": [0, 1, 2]},
    input_names=["images_tensors"],
    output_names=["scores", "labels", "boxes"],
)
```
BTW, see https://github.com/zhiqwang/yolov5-rt-stack/blob/cc2bd50978b7118ae1cb16918248d991d0b927e8/yolort/models/box_head.py#L6.
@zhiqwang I used:

```python
torch.onnx.export(
    model,
    hm,
    f=ONXX_FILE_PATH,
    input_names=['image1'],
    output_names=['scores', 'classes', 'boxes', 'mask_features', 'kpts_features'],
    verbose=False,
    opset_version=11,
    do_constant_folding=True,
    dynamic_axes={
        'scores': {0: 'sequence'},
        'classes': {0: 'sequence'},
        'boxes': {0: 'sequence'},
        'mask_features': {0: 'sequence'},
        'kpts_features': {0: 'sequence'},
    },
)
```

The warnings disappeared then.
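The dict-of-dicts form of `dynamic_axes` used above can also be built programmatically when a model has many outputs. A small sketch (the helper name is mine, not part of any API):

```python
def make_dynamic_axes(output_names, axis_name="sequence"):
    """Mark dim 0 of every named output as a variable-length axis,
    in the dict-of-dicts form accepted by torch.onnx.export."""
    return {name: {0: axis_name} for name in output_names}
```

For example, `make_dynamic_axes(['scores', 'classes', 'boxes'])` produces `{'scores': {0: 'sequence'}, 'classes': {0: 'sequence'}, 'boxes': {0: 'sequence'}}`.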
Hi @trungpham2606, congratulations!
Hi @zhiqwang, I did see that the results from CPUExecutionProvider and CUDAExecutionProvider are different, and the CPU results are much more stable than the CUDA ones.
It seems that more information is needed to determine the reason for this problem. And to keep this thread clean, I think it's better to file a new discussion about it.
FYI, using the following snippet will export a dynamic batch/shape ONNX model containing the YOLOv5 model and the post-processing (NMS):

```shell
# 'yolov5s.pt' is downloaded from https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5s.pt
python tools/export_model.py --checkpoint_path yolov5s.pt --skip_preprocess
```

Check out the details in #193. I believe this can resolve the problem, and as such I'm closing this issue; feel free to create another ticket if you have more questions.
Hi @Deronjey, now you can just follow these tutorials: …
@Deronjey, we support versions 3.1, 4.0 and 6.0 released by ultralytics/yolov5. Actually, the version 5.0 models released by yolov5 are the same as 4.0, so you can just set `--version r4.0`.
I used the following command to export the ONNX model, and I used the 5.0 tag of ultralytics/yolov5 to train model.pt. It raises an `AttributeError: conv object has no attribute weight`. What can I do about this error?

```shell
python tools/export_model.py --checkpoint_path model.pt --size_divisible 32
```
Hi @Deronjey, you can add the `--version` argument:

```shell
python3 tools/export_model.py --checkpoint_path model.pt --size_divisible 32 --version r4.0
```
It's done, thank you three thousand times!
Hello @zhiqwang,
I wonder whether you have finished exporting yolov5 with NMS to an ONNX model yet?
I don't see any PR in ultralytics/yolov5.