
🐛 [Bug] RuntimeError: [Error thrown at core/conversion/conversion.cpp:220] List type. Only a single tensor or a TensorList type is supported. #1170

Closed
@wanduoz

Description


Bug Description

I am using torch_tensorrt to convert a multi-head classification model. I found a similar issue at https://github.com/pytorch/TensorRT/issues/899, but couldn't find a clear solution or workaround there. Any suggestions or conclusions?

Traceback (most recent call last):
  File "pt2torch_tensorrt.py", line 45, in <module>
    trt_model_fp32 = torch_tensorrt.compile(model, inputs=[torch_tensorrt.Input((2, 3, 512, 512), dtype=torch.float32)],enabled_precisions = torch.float32)
  File "/opt/conda/lib/python3.8/site-packages/torch_tensorrt/_compile.py", line 115, in compile
    return torch_tensorrt.ts.compile(ts_mod, inputs=inputs, enabled_precisions=enabled_precisions, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/torch_tensorrt/ts/_compiler.py", line 116, in compile
    compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))
RuntimeError: [Error thrown at core/conversion/conversion.cpp:220] List type. Only a single tensor or a TensorList type is supported

To Reproduce

The relevant forward method is as follows:

def forward(self, x):
    x = self.features(x)
    x = self.avgpool(x)
    x = torch.flatten(x, 1)
    x = [classifier(x) for classifier in self.classifiers]
    return x
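Since the error says only a single Tensor or a TensorList output is supported, one hedged workaround (assuming all heads share the batch dimension, as with `num_classes_list = [1, 1, 11, 53]`) is to concatenate the head outputs into a single tensor and slice them apart after inference. The module below is a hypothetical minimal stand-in for the model in the issue, not its actual code:

```python
import torch
import torch.nn as nn

class MultiHead(nn.Module):
    """Toy multi-head classifier mirroring the issue's structure (hypothetical names)."""

    def __init__(self, feat_dim=8, num_classes_list=(1, 1, 11, 53)):
        super().__init__()
        self.backbone = nn.Linear(4, feat_dim)
        self.classifiers = nn.ModuleList(
            nn.Linear(feat_dim, n) for n in num_classes_list
        )

    def forward(self, x):
        x = self.backbone(x)
        # Concatenate per-head logits along dim 1 so the graph returns a
        # single Tensor instead of a List[Tensor], which the converter rejects.
        return torch.cat([c(x) for c in self.classifiers], dim=1)

# Scripting still works with the single-tensor output.
m = torch.jit.script(MultiHead())
out = m(torch.randn(2, 4))
# out has shape (2, 1 + 1 + 11 + 53) = (2, 66); split per head downstream,
# e.g. torch.split(out, [1, 1, 11, 53], dim=1)
```

The split widths have to be kept in sync with `num_classes_list`, so this is a sketch rather than a drop-in fix; whether it is acceptable depends on how the head outputs are consumed.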

Expected behavior

The multi-head model compiles to a TensorRT module without errors and returns all head outputs.

Environment

Build information about Torch-TensorRT can be found by turning on debug messages

Using the prebuilt NGC container, v22.03

  • CUDA version: 11.6
  • GPU models and configuration: GTX 1080 Ti

Additional context

If I return only the first element of the output list, compilation succeeds and I get a TRT model:

def forward(self, x):
    x = self.features(x)
    x = self.avgpool(x)
    x = torch.flatten(x, 1)
    x = [classifier(x) for classifier in self.classifiers]
    return x[0]
root@b41c3016cf4c:/models/1-inference-investigation/mycode/aimachine-master# python pt2torch_tensorrt.py
results/pth2onnx-20220621
training in device = cuda
Creating model with parameters: {'heads_type': 'mult', 'input_dim': 3, 'model_dir': 'results', 'net_name': 'efficientnet_multiheads_b5', 'num_classes_list': [1, 1, 11, 53]}
loading model: model_36_20220621_v2.pth
graph(%self_1 : __torch__.aimachine.models.classification.classificationnet.ClassificationNet_trt,
      %input_0 : Tensor):
  %__torch___aimachine_models_classification_classificationnet_ClassificationNet_trt_engine_ : __torch__.torch.classes.tensorrt.Engine = prim::GetAttr[name="__torch___aimachine_models_classification_classificationnet_ClassificationNet_trt_engine_"](%self_1)
  %3 : Tensor[] = prim::ListConstruct(%input_0)
  %4 : Tensor[] = tensorrt::execute_engine(%3, %__torch___aimachine_models_classification_classificationnet_ClassificationNet_trt_engine_)
  %5 : Tensor = prim::ListUnpack(%4)
  return (%5)
root@b41c3016cf4c:/models/1-inference-investigation/mycode/aimachine-master#
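If all head outputs are needed, another hedged option (assuming the shared backbone dominates runtime) is to split the model so that only the single-tensor backbone is compiled, while the lightweight heads stay in eager PyTorch. The class and attribute names below are hypothetical stand-ins for the model in the issue:

```python
import torch
import torch.nn as nn

class Backbone(nn.Module):
    """Single-tensor output: this part satisfies the converter's restriction."""

    def __init__(self, feat_dim=8):
        super().__init__()
        self.features = nn.Linear(4, feat_dim)

    def forward(self, x):
        return self.features(x)

class Heads(nn.Module):
    """List output is fine here because this part is never compiled."""

    def __init__(self, feat_dim=8, num_classes_list=(1, 1, 11, 53)):
        super().__init__()
        self.classifiers = nn.ModuleList(
            nn.Linear(feat_dim, n) for n in num_classes_list
        )

    def forward(self, feats):
        return [c(feats) for c in self.classifiers]

backbone, heads = Backbone(), Heads()
# Hypothetically, only the backbone would go through Torch-TensorRT, e.g.:
#   trt_backbone = torch_tensorrt.compile(torch.jit.script(backbone), ...)
feats = backbone(torch.randn(2, 4))  # would be trt_backbone(x) after compilation
outs = heads(feats)                  # heads run in eager mode, list output is OK
```

This trades a small amount of eager overhead for keeping every head's output, and sidesteps the `List type` restriction entirely.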

Metadata
Labels

bug (Something isn't working), component: core (Issues re: The core compiler)
