Context
Currently, the implementation of torch.ops.aten.expand, which is based on acc_ops.expand, requires that the rank of the shape being expanded to match the rank of the input tensor (see below). This differs from the behavior of Torch, which can handle expand calls to shapes of larger rank.
TensorRT/py/torch_tensorrt/fx/converters/acc_ops_converters.py, lines 2576 to 2594 at commit de79be6
Valid Torch Behavior
import torch
x = torch.ones((64,))
x_new = x.expand((16, 1, 64))
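The rank alignment Torch performs here can be sketched in plain Python. The helper below is hypothetical (not part of torch or this codebase); it mimics the documented expand rules: the input shape is right-aligned against the target shape, missing leading dimensions are treated as size 1, and only size-1 dimensions may be broadcast.

```python
def expanded_shape(input_shape, target_shape):
    """Compute the result shape of tensor.expand(target_shape).

    Mimics torch's rules: right-align the input shape against the target,
    treat missing leading dims as 1, allow -1 to mean "keep this dim",
    and only broadcast dims of size 1.
    """
    if len(target_shape) < len(input_shape):
        raise ValueError("expand target rank must be >= input rank")
    # Pad the input shape with leading 1s so the ranks agree.
    in_shape = (1,) * (len(target_shape) - len(input_shape)) + tuple(input_shape)
    out = []
    for i_dim, t_dim in zip(in_shape, target_shape):
        if t_dim == -1:
            out.append(i_dim)
        elif i_dim == 1 or i_dim == t_dim:
            out.append(t_dim)
        else:
            raise ValueError(f"cannot expand dim of size {i_dim} to {t_dim}")
    return tuple(out)

print(expanded_shape((64,), (16, 1, 64)))  # → (16, 1, 64)
```

This is exactly the case the current converter rejects: the rank-1 input is implicitly promoted to rank 3 before broadcasting.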
Feature Proposal
Add functionality to acc_ops_expand_tensor to automatically pad the rank of the input tensor via the existing broadcast/padding utilities, so that the ranks of the input tensor and the expand shape agree.
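A minimal sketch of the proposed rank-padding step, in plain Python (the helper name is hypothetical; in the converter this would correspond to a reshape of the input ITensor before the expand logic runs):

```python
def pad_shape_to_rank(input_shape, target_rank):
    """Prepend size-1 dims so the input rank matches the expand target rank.

    This matches torch's implicit alignment for expand: a (64,) input
    expanded to a rank-3 shape is first treated as (1, 1, 64).
    """
    if target_rank < len(input_shape):
        raise ValueError("target rank must be >= input rank")
    return (1,) * (target_rank - len(input_shape)) + tuple(input_shape)

# (64,) padded to rank 3 gives (1, 1, 64), which the existing
# same-rank expand logic can then broadcast to (16, 1, 64).
print(pad_shape_to_rank((64,), 3))  # → (1, 1, 64)
```

With the input reshaped this way, the existing `assert len(shape) == ranks` check in the converter would pass unchanged.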
Consider using IPaddingLayer, as is done here:
Additional Context
Error encountered when running a model:
File "/usr/local/lib/python3.8/dist-packages/torch_tensorrt/fx/converters/aten_ops_converters.py", line 378, in aten_ops_expand
return acc_ops_converters.acc_ops_expand_tensor(
File "/usr/local/lib/python3.8/dist-packages/torch_tensorrt/fx/converters/acc_ops_converters.py", line 2559, in acc_ops_expand_tensor
assert len(shape) == ranks
AssertionError: While executing %expand : [#users=1] = call_function[target=torch.ops.aten.expand.default](args = (%arg3_1, [16, 1, 64]), kwargs = {_itensor_to_tensor_meta: {<tensorrt.tensorrt.ITensor object at 0x7f404a9e66b0>: ((64,), torch.float32, True, (1,), torch.contiguous_format, False, {})}})