Error while converting fastrcnn torchvision model #15

Open
dimabendera opened this issue Feb 8, 2021 · 2 comments

dimabendera commented Feb 8, 2021

Used Docker image:

docker run --gpus all -it  nvcr.io/nvidia/pytorch:20.12-py3

Conversion script:

import torchvision
import torch
from torch2trt_dynamic import torch2trt_dynamic as torch2trt

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.cuda().eval().half()

data = torch.randn((1, 3, 800, 800)).cuda().half()

model_trt = torch2trt(model, [data])

Error:

Traceback (most recent call last):
  File "./scripts/export_torch_to_tensorrt_example.py", line 24, in <module>
    model_trt = torch2trt(model,
  File "/var/www/auto-carpart-detector/torch2trt_dynamic/torch2trt_dynamic/torch2trt_dynamic.py", line 518, in torch2trt_dynamic
    outputs = module(*inputs)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 744, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/torchvision/models/detection/generalized_rcnn.py", line 99, in forward
    proposals, proposal_losses = self.rpn(images, features, targets)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 744, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/torchvision/models/detection/rpn.py", line 332, in forward
    anchors = self.anchor_generator(images, features)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 744, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/torchvision/models/detection/anchor_utils.py", line 154, in forward
    anchors_over_all_feature_maps = self.cached_grid_anchors(grid_sizes, strides)
  File "/opt/conda/lib/python3.8/site-packages/torchvision/models/detection/anchor_utils.py", line 142, in cached_grid_anchors
    anchors = self.grid_anchors(grid_sizes, strides)
  File "/opt/conda/lib/python3.8/site-packages/torchvision/models/detection/anchor_utils.py", line 118, in grid_anchors
    shifts_x = torch.arange(
  File "/var/www/auto-carpart-detector/torch2trt_dynamic/torch2trt_dynamic/torch2trt_dynamic.py", line 312, in wrapper
    converter['converter'](ctx)
  File "/var/www/auto-carpart-detector/torch2trt_dynamic/torch2trt_dynamic/converters/mul.py", line 12, in convert_mul
    input_a_trt, input_b_trt = trt_(ctx.network, input_a, input_b)
  File "/var/www/auto-carpart-detector/torch2trt_dynamic/torch2trt_dynamic/torch2trt_dynamic.py", line 139, in trt_
    dtype = check_torch_dtype(*tensors)
  File "/var/www/auto-carpart-detector/torch2trt_dynamic/torch2trt_dynamic/torch2trt_dynamic.py", line 113, in check_torch_dtype
    assert (dtype == torch.int32)  # , 'Tensor data types must match')
AssertionError
dimabendera (Author) commented:

I think this is due to tensors with different data types being fed to the mul converter.
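
For example, something like this (hypothetical grid size and stride; the exact dtypes depend on the torchvision version) shows the kind of mixing that happens in the anchor generator:

import torch

# torchvision's anchor generator builds grid shifts with torch.arange
# (int64 by default) and scales them by the feature-map stride.
shifts = torch.arange(0, 25)          # int64
stride = torch.tensor(32.0).half()    # fp16 after model.half()

# Eager PyTorch type-promotes, so this multiplication works fine:
print((shifts * stride).dtype)        # torch.float16

# but the mul converter asserts that both operand dtypes match,
# which is the AssertionError in the traceback above.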
If we cast everything to a single data type, we then hit a different error at the next stage:

[TensorRT] ERROR: (Unnamed Layer* 3651) [Slice]: slice size cannot have negative dimension, size = [-1]
[TensorRT] ERROR: (Unnamed Layer* 3651) [Slice]: slice size cannot have negative dimension, size = [-1]
Traceback (most recent call last):
  File "./scripts/export_torch_to_tensorrt.py", line 53, in <module>
    model_trt = torch2trt(model_w,
  File "/var/www/auto-carpart-detector/torch2trt_dynamic/torch2trt_dynamic/torch2trt_dynamic.py", line 518, in torch2trt_dynamic
    outputs = module(*inputs)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 744, in _call_impl
    result = self.forward(*input, **kwargs)
  File "./scripts/export_torch_to_tensorrt.py", line 41, in forward
    return self.model(x)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 744, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/torchvision/models/detection/generalized_rcnn.py", line 99, in forward
    proposals, proposal_losses = self.rpn(images, features, targets)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 744, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/torchvision/models/detection/rpn.py", line 338, in forward
    concat_box_prediction_layers(objectness, pred_bbox_deltas)
  File "/opt/conda/lib/python3.8/site-packages/torchvision/models/detection/rpn.py", line 100, in concat_box_prediction_layers
    box_regression = torch.cat(box_regression_flattened, dim=1).reshape(-1, 4)
  File "/var/www/auto-carpart-detector/torch2trt_dynamic/torch2trt_dynamic/torch2trt_dynamic.py", line 312, in wrapper
    converter['converter'](ctx)
  File "/var/www/auto-carpart-detector/torch2trt_dynamic/torch2trt_dynamic/converters/view.py", line 15, in convert_view
    input_trt = trt_(ctx.network, input)
  File "/var/www/auto-carpart-detector/torch2trt_dynamic/torch2trt_dynamic/torch2trt_dynamic.py", line 149, in trt_
    num_dim = len(t._trt.shape)
ValueError: __len__() should return >= 0


grimoire (Owner) commented Feb 8, 2021

Hi,
Please don't feed an fp16 model and input to the converter; if you want an fp16 engine, setting fp16_mode=True is enough. Something like this should work (untested sketch of your script, kept in fp32):
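
import torchvision
import torch
from torch2trt_dynamic import torch2trt_dynamic as torch2trt

# Keep the model and the example input in fp32; fp16_mode lets
# TensorRT use fp16 internally when building the engine.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.cuda().eval()

data = torch.randn((1, 3, 800, 800)).cuda()

model_trt = torch2trt(model, [data], fp16_mode=True)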
And... honestly, I am not sure whether all the layers inside fastrcnn in torchvision can be converted by this repo. I will try to add the support; this might take some time.
