Describe the bug
Cannot convert the endoscopic_tool_segmentation model to TensorRT by running the command below:
python -m monai.bundle trt_export --net_id network_def --filepath models/model_trt.ts --ckpt_file models/model.pt --meta_file configs/metadata.json --config_file configs/inference.json --precision fp16
with error output:
/home/liubin/data/tmp/MONAI/monai/networks/utils.py:670: UserWarning: There is no dynamic batch range. The converted model only takes (1, 3, 736, 480) shape input.
warnings.warn(f"There is no dynamic batch range. The converted model only takes {input_shape} shape input.")
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/liubin/data/tmp/MONAI/monai/bundle/__main__.py", line 20, in <module>
fire.Fire()
File "/usr/local/lib/python3.8/dist-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/usr/local/lib/python3.8/dist-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/usr/local/lib/python3.8/dist-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/home/liubin/data/tmp/MONAI/monai/bundle/scripts.py", line 1137, in trt_export
_export(
File "/home/liubin/data/tmp/MONAI/monai/bundle/scripts.py", line 907, in _export
net = converter(model=net, **kwargs)
File "/home/liubin/data/tmp/MONAI/monai/networks/utils.py", line 703, in convert_to_trt
trt_model = torch_tensorrt.compile(
File "/usr/local/lib/python3.8/dist-packages/torch_tensorrt/_compile.py", line 125, in compile
return torch_tensorrt.ts.compile(
File "/usr/local/lib/python3.8/dist-packages/torch_tensorrt/ts/_compiler.py", line 136, in compile
compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))
RuntimeError: [Error thrown at core/partitioning/shape_analysis.cpp:180] Expected ivalues_maps.count(input) to be true but got false
Could not find torch::jit::Value* 9351 produced from %9351 : Tensor = aten::_convolution(%x_0.2, %self.decoder.blocks.0.convs.conv_0.conv.weight.3, %self.encoder._conv_stem.bias.33, %11, %11, %11, %9349, %9350, %self.encoder._blocks.0.0.expand_ratio.1, %9349, %9349, %9349, %9349) in lowering graph for mini graph input.
To Reproduce
Steps to reproduce the behavior:
- Run docker pull nvcr.io/nvidian/pytorch:23.03-py3 to get the PyTorch 23.03 docker image
- Start a container with the PyTorch 23.03 image
- Create a local working folder and go into it for the test
- Run git clone https://github.com/Project-MONAI/MONAI.git; cd MONAI; python setup.py develop to install the latest MONAI
- Run pip install fire to install this optional requirement
- Go back to the working folder
- Download the endoscopic_tool_segmentation bundle by running python -c "import monai; monai.bundle.download('endoscopic_tool_segmentation', version='0.4.2', bundle_dir='./')"
- Run the command line: cd endoscopic_tool_segmentation; python -m monai.bundle trt_export --net_id network_def --filepath models/model_trt.ts --ckpt_file models/model.pt --meta_file configs/metadata.json --config_file configs/inference.json --precision fp16
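For reference, the download and export steps can also be driven from Python instead of the command line. This is only a sketch mirroring the CLI flags above; it assumes trt_export is importable from monai.bundle (it is defined in monai.bundle.scripts), so adjust the import to your MONAI version if needed.

import os
import monai.bundle

# Download the endoscopic_tool_segmentation bundle (same call as in the steps above).
monai.bundle.download("endoscopic_tool_segmentation", version="0.4.2", bundle_dir="./")

# Work from inside the bundle folder, as the CLI steps do.
os.chdir("endoscopic_tool_segmentation")

# Equivalent of `python -m monai.bundle trt_export ...`; the keyword arguments
# mirror the --flags of the command line.
monai.bundle.trt_export(
    net_id="network_def",
    filepath="models/model_trt.ts",
    ckpt_file="models/model.pt",
    meta_file="configs/metadata.json",
    config_file="configs/inference.json",
    precision="fp16",
)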
Additional Information
In the UpCat.forward function of basic_unet, changing the type hint of x_e to Optional[torch.Tensor] and replacing the conditional torch.jit.isinstance(x_e, torch.Tensor) with a plain None check on x_e, like the code in 1.1.0, fixes this issue.
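For illustration, here is a minimal sketch of that change (based on the 1.1.0-style forward, not the full MONAI implementation; the is_pad spatial padding handling is elided and the constructor is simplified).

from typing import Optional

import torch
import torch.nn as nn

class UpCat(nn.Module):
    # Simplified constructor for this sketch; the real block builds its own
    # upsample and convolution layers.
    def __init__(self, upsample: nn.Module, convs: nn.Module):
        super().__init__()
        self.upsample = upsample
        self.convs = convs

    def forward(self, x: torch.Tensor, x_e: Optional[torch.Tensor]):
        # Optional[torch.Tensor] type hint, as in MONAI 1.1.0.
        x_0 = self.upsample(x)
        # Plain None check instead of torch.jit.isinstance(x_e, torch.Tensor),
        # per the suggestion above.
        if x_e is not None:
            # (padding of x_0 to match x_e's spatial shape elided here)
            x = self.convs(torch.cat([x_e, x_0], dim=1))
        else:
            x = self.convs(x_0)
        return x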