## Bug Description
The `aten.mean.dim` converter throws the following error when compiling the model shown below:
##### MODEL:
```python
import torch

class Sample(torch.nn.Module):
    def __init__(self):
        super(Sample, self).__init__()

    def forward(self, x):
        return torch.mean(x, dim=1)
```
##### ERROR:
```
File "~/TensorRT/py/torch_tensorrt/fx/fx2trt.py", line 328, in call_function
  return converter(self.network, target, args, kwargs, self._cur_node_name)
File "~/TensorRT/py/torch_tensorrt/fx/converters/aten_ops_converters.py", line 57, in aten_ops_adaptive_avg_poolnd
  raise RuntimeError(f"We do not support {target} has dim={args[1]}")
RuntimeError: We do not support aten.mean.dim has dim=[1]
```
## To Reproduce
Steps to reproduce the behavior:
- Initialize the model as above:
  ```python
  model = Sample().eval().cuda()
  ```
- Initialize one input tensor, for example:
  ```python
  input_ = torch.zeros((5, 5), dtype=torch.float, device="cuda:0")
  ```
- Compile the model using FX:
  ```python
  torch_tensorrt.fx.compile(model, [input_], min_acc_module_size=1, is_aten=True)
  ```
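For context, `aten.mean.dim` is just an averaging reduction along one axis. A dependency-free sketch of the semantics the converter needs to lower (the helper `mean_dim` is hypothetical and assumes a 2-D input like the one above; the real converter would express this as a TensorRT reduce layer):

```python
def mean_dim(rows, dim):
    """Mean of a 2-D nested list along `dim` (0 = over rows, 1 = over columns).

    A plain-Python sketch of what torch.mean(x, dim=...) computes for a
    2-D input; dim=1 collapses each row to its average.
    """
    if dim == 1:
        # Reduce each row to a single average value.
        return [sum(r) / len(r) for r in rows]
    # dim == 0: reduce each column to a single average value.
    n = len(rows)
    return [sum(r[c] for r in rows) / n for c in range(len(rows[0]))]

x = [[1.0, 2.0], [3.0, 4.0]]
print(mean_dim(x, dim=1))  # per-row means: [1.5, 3.5]
print(mean_dim(x, dim=0))  # per-column means: [2.0, 3.0]
```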
## Expected behavior
The model should compile via the FX path, or the operator should be reported as unsupported.
## Environment
- Transformers: 4.26.1
- Torch-TensorRT Version (e.g. 1.0.0): fce0a01
- PyTorch Version (e.g. 1.0): 2.1.0.dev20230313+cu117
- CPU Architecture: Intel Xeon CPU
- OS: Ubuntu 20.04
- How you installed PyTorch: pip
- Build command you used: `python setup.py develop`
- Are you using local sources or building from archives: local
- Python version: 3.8.13
- CUDA version: 11.7
## Additional Context
Solving this issue will also resolve the error encountered in #1740.