
🐛 [Bug] Could not find torch::jit::Value* error encountered during compilation #1815

Closed as not planned
@gs-olive

Bug Description

When compiling the transformer-xl model using Torch-TensorRT, the following error is encountered:

RuntimeError: [Error thrown at core/partitioning/shape_analysis.cpp:183] Expected ivalues_maps.count(input) to be true but got false
    Could not find torch::jit::Value* 1822 produced from %1822 : Tensor = aten::mul(%attn_score.2, %self.layers.0.dec_attn.scale)

To Reproduce

Steps to reproduce the behavior:

  1. Run torch_tensorrt.compile with the transformer-xl model as input, using fp32 precision.
  2. Choose fixed input sizes and enable truncate_long_and_double with a 12 GB workspace.
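The steps above can be sketched as a minimal reproduction script. This is a hypothetical sketch, not the reporter's actual script: the model path, input shape, and dtype are assumptions, since the original report does not include them.

```python
# Hypothetical reproduction sketch -- model file, input shape, and dtype
# are illustrative assumptions, not taken from the original report.
import torch
import torch_tensorrt

# Load a TorchScript version of transformer-xl (path is illustrative).
model = torch.jit.load("transformer_xl_traced.ts").eval().cuda()

# Step 2: fixed input sizes (shape/dtype assumed for illustration).
inputs = [torch_tensorrt.Input(shape=(1, 512), dtype=torch.int64)]

# Step 1: compile with fp32 precision; step 2: truncate_long_and_double
# enabled with a 12 GB workspace.
trt_model = torch_tensorrt.compile(
    model,
    inputs=inputs,
    enabled_precisions={torch.float32},
    truncate_long_and_double=True,
    workspace_size=12 << 30,  # 12 GB in bytes
)
```

Running this against the commit listed under Environment should reproduce the `Could not find torch::jit::Value*` error during partitioning.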

Expected behavior

Model should successfully compile to Torch-TRT. Specifically, this torch::jit::Value* error should not arise during compilation.

Environment

  • Torch-TensorRT Version (e.g. 1.0.0): 09cd47a
  • PyTorch Version (e.g. 1.0): 2.1.0.dev20230314+cu117

Labels

bug (Something isn't working)
