Closed as not planned
Bug Description
When compiling the transformer-xl model using Torch-TensorRT, the following error is encountered:
RuntimeError: [Error thrown at core/partitioning/shape_analysis.cpp:183] Expected ivalues_maps.count(input) to be true but got false
Could not find torch::jit::Value* 1822 produced from %1822 : Tensor = aten::mul(%attn_score.2, %self.layers.0.dec_attn.scale)
To Reproduce
Steps to reproduce the behavior:
- Run torch_tensorrt.compile with the transformer-xl model as input, using fp32 precision.
- Choose fixed input sizes and enable truncate_long_and_double with a 12 GB workspace.
Expected behavior
The model should compile successfully with Torch-TensorRT; in particular, the torch::jit::Value* error above should not arise during compilation.
Environment
- Torch-TensorRT Version (e.g. 1.0.0): 09cd47a
- PyTorch Version (e.g. 1.0): 2.1.0.dev20230314+cu117