Error when converting the LaneATT pt model to onnx model #100
@sjtuljw520 I will check whether this is supported, but perhaps not until tomorrow; I don't have the test environment right now.
@sjtuljw520 The development of LaneATT seems to have diverged from the main branch for a long time (quoting @cedricgsh), so it was never tested for conversions. I will mark this as a feature request for now. In theory, LaneATT should support conversion, since it has no special ops.
Many thanks for replying. Looking forward to this feature being added. @voldemortX
@sjtuljw520 If you download the anchors from the LaneATT repo and use opset 11 with torch 1.8 (and the corresponding mmcv), the model can be converted to ONNX, but not to TensorRT yet. I'm working on that.
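For readers following along, a minimal sketch of the kind of export call being described; the tiny stand-in module below is hypothetical (the real network comes from this repo's config and checkpoint machinery), and only the opset 11 / input-name choices echo this thread:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the built LaneATT network; the export call
# itself is the point of this sketch.
class StandIn(nn.Module):
    def forward(self, x):
        return x.mean(dim=(2, 3))

net = StandIn().eval()

# LaneATT's default resolution is 360p (360x640), as noted later in this thread.
dummy = torch.randn(1, 3, 360, 640)

# opset 11 with torch 1.8 is the combination reported to work here.
torch.onnx.export(net, dummy, 'laneatt.onnx',
                  opset_version=11, input_names=['input1'])
```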
So if the NMS op is included in the model, the conversion cannot work and will encounter an error? @voldemortX
Yes. It is a customized CUDA kernel, which is not supported by the PyTorch ONNX converter. You would need a customized ONNX implementation for it. This kind of support is sophisticated and hasn't been introduced into this framework (I currently don't know how to do it), and there is no reference for line NMS ONNX conversion out there (that I am aware of).
But it looks like an op that would fit in a customized post-processing implementation, such as a customized SDK function or something similar.
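A plain-Python sketch of what such an SDK-side step might look like: greedy NMS over scored lane proposals, with a made-up lane distance. This is only an illustration, not the LaneATT CUDA kernel or its exact metric:

```python
import numpy as np

def line_nms(lanes, scores, dist_fn, threshold):
    """Greedy NMS over lane proposals, run on raw network outputs.

    lanes:     (N, K) array, one row per lane parameterization.
    scores:    (N,) confidence per lane.
    dist_fn:   callable(lane_a, lane_b) -> scalar distance (assumed here).
    threshold: suppress lanes closer than this to an already-kept lane.
    """
    order = np.argsort(scores)[::-1]  # visit highest-scoring lanes first
    keep = []
    for i in order:
        if all(dist_fn(lanes[i], lanes[j]) > threshold for j in keep):
            keep.append(i)
    return keep

# Toy usage: mean absolute offset between lanes sampled at shared y's
# (an assumption, not LaneATT's actual distance).
lanes = np.random.rand(10, 72)
scores = np.random.rand(10)
print(line_nms(lanes, scores, lambda a, b: np.mean(np.abs(a - b)), 0.1))
```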
Thank you! I got it. @voldemortX
@sjtuljw520 With 4419bba & #102, you should be able to convert LaneATT to ONNX and TensorRT (check the new doc; you will need pytorch 1.8.0 & nvidia-tensorrt 8.4.1.5), except for the NMS post-processing, as discussed earlier.
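As a quick sanity check of an exported graph (independent of the TensorRT path), the ONNX file can be run through onnxruntime; the file name and 360x640 input shape below are assumptions based on this thread, while 'input1' matches the input name used by this repo's exporter:

```python
import numpy as np
import onnxruntime as ort

# Load the exported model and feed a dummy input matching the export shape.
sess = ort.InferenceSession('laneatt.onnx')
dummy = np.random.randn(1, 3, 360, 640).astype(np.float32)
outputs = sess.run(None, {'input1': dummy})  # None -> fetch all outputs
print([o.shape for o in outputs])
```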
Nice work! 👍 @voldemortX
Really appreciate the amazing work, thanks!
After downloading culane_anchors_freq.pt from https://github.com/lucastabelini/LaneATT/raw/main/data/culane_anchors_freq.pt, I got the error above, using pytorch==1.13, onnx==1.12.0, onnxruntime==1.13.1.
@YoohJH Are you also trying to convert LaneATT to ONNX? Maybe check the input image height & width first: do they correspond to the CULane 288x800 setting?
Thank you for the answer! The error happened when I tried to convert resnet18_laneatt_culane_20220320.pt to .onnx. The command I used:
Thanks for the info; I will try to reproduce this error later today.
@YoohJH I can't get to my machine right now; could you try:
It has been determined that .onnx can be exported without any error using:
Maybe the 'resnet18_laneatt_culane_20220320.pt' was named wrong? But there's no laneatt-tusimple in the model_zoo. |
It seems the default setting for laneatt is 360p in all datasets... |
My mistake! Thanks for the patience! |
When I ran this command: python tools/to_onnx.py --config=configs/lane_detection/laneatt/resnet34_culane.py --height=360 --width=640 --checkpoint=model/resnet34_laneatt_culane_20220225.pt
I got this error message:
Traceback (most recent call last):
File "/home/liujianwei/project/code/pytorch-auto-drive-new/tools/to_onnx.py", line 70, in
pt_to_onnx(net, dummy, onnx_filename, opset_version=op_v)
File "/home/liujianwei/project/code/pytorch-auto-drive-new/utils/onnx_utils.py", line 55, in pt_to_onnx
torch.onnx.export(net, dummy, filename, verbose=True, input_names=['input1'], output_names=temp.keys(),
File "/home/liujianwei/.conda/envs/py39/lib/python3.9/site-packages/torch/onnx/init.py", line 316, in export
return utils.export(model, args, f, export_params, verbose, training,
File "/home/liujianwei/.conda/envs/py39/lib/python3.9/site-packages/torch/onnx/utils.py", line 107, in export
_export(model, args, f, export_params, verbose, training, input_names, output_names,
File "/home/liujianwei/.conda/envs/py39/lib/python3.9/site-packages/torch/onnx/utils.py", line 724, in _export
_model_to_graph(model, args, verbose, input_names,
File "/home/liujianwei/.conda/envs/py39/lib/python3.9/site-packages/torch/onnx/utils.py", line 497, in _model_to_graph
graph = _optimize_graph(graph, operator_export_type,
File "/home/liujianwei/.conda/envs/py39/lib/python3.9/site-packages/torch/onnx/utils.py", line 216, in _optimize_graph
graph = torch._C._jit_pass_onnx(graph, operator_export_type)
File "/home/liujianwei/.conda/envs/py39/lib/python3.9/site-packages/torch/onnx/init.py", line 373, in _run_symbolic_function
return utils._run_symbolic_function(*args, **kwargs)
File "/home/liujianwei/.conda/envs/py39/lib/python3.9/site-packages/torch/onnx/utils.py", line 1032, in _run_symbolic_function
return symbolic_fn(g, *inputs, **attrs)
File "/home/liujianwei/.conda/envs/py39/lib/python3.9/site-packages/torch/onnx/symbolic_opset9.py", line 483, in expand_as
return g.op("Expand", self, shape)
File "/home/liujianwei/.conda/envs/py39/lib/python3.9/site-packages/torch/onnx/utils.py", line 928, in _graph_op
torch._C._jit_pass_onnx_node_shape_type_inference(n, _params_dict, opset_version)
RuntimeError: input_shape_value == reshape_value || input_shape_value == 1 || reshape_value == 1 INTERNAL ASSERT FAILED at "../torch/csrc/jit/passes/onnx/shape_type_inference.cpp":547, please report a bug to PyTorch. ONNX Expand input shape constraint not satisfied.