✨[Feature] + 🐛 [Bug] Allow ITensor biases in aten.convolution converters #1954

Closed
@gs-olive

Description

Context

Currently, aten.convolution converters do not support ITensor biases, which can cause test failures in CI (example), since the new Dynamo compile path primarily represents intermediate tensors as ITensor objects throughout computation.

```python
# and bias being ITensor is not supported in TensorRT api
# right now
if kwargs["bias"] is not None and not isinstance(kwargs["bias"], torch.Tensor):
    raise RuntimeError(
        f"linear {name} has bias of type {type(kwargs['bias'])}, Expect Optional[Tensor]"
    )
bias = to_numpy(kwargs["bias"])  # type: ignore[arg-type]
```
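One way the check above could be relaxed is to branch on the bias type instead of raising: static (torch/numpy) biases keep the existing constant-folding path, while ITensor biases are passed through for the converter to handle dynamically. A minimal sketch of that branching, using a hypothetical `FakeITensor` stand-in (TensorRT is not imported here) and a hypothetical `resolve_bias` helper:

```python
import numpy as np


class FakeITensor:
    """Stand-in for tensorrt.ITensor (hypothetical, for illustration only)."""
    pass


def resolve_bias(bias):
    # Sketch of a relaxed check: static biases are converted to numpy so
    # they can be folded into the layer as constants; ITensor biases are
    # returned as-is, flagged for dynamic handling, instead of raising.
    if bias is None:
        return None, False
    if isinstance(bias, FakeITensor):
        return bias, True  # dynamic bias: converter must add it at runtime
    return np.asarray(bias), False  # static bias: fold into the conv layer


static_bias, is_dynamic = resolve_bias([1.0, 2.0])
dyn_bias, dyn_flag = resolve_bias(FakeITensor())
```

The actual converter would replace `FakeITensor` with `tensorrt.ITensor`; the two-value return is just one possible way to signal which path the caller should take.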

Proposed Solution

Allow ITensor biases for aten.convolution ops in the same way that the kernel weights can be ITensor objects. See IConvolutionLayer for further information on TensorRT convolution layers.
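One reason this is feasible is that a convolution with bias is numerically identical to a zero-bias convolution followed by a per-channel elementwise add, so a converter that receives an ITensor bias could build the convolution layer without a bias and append an add. A NumPy sketch of that equivalence (the naive `conv2d` below is illustrative, not the converter's implementation):

```python
import numpy as np


def conv2d(x, w, bias=None):
    # Naive valid-padding 2D convolution (cross-correlation, as in deep learning).
    # x: (C_in, H, W), w: (C_out, C_in, kH, kW), bias: (C_out,) or None
    C_out, _, kH, kW = w.shape
    H_out, W_out = x.shape[1] - kH + 1, x.shape[2] - kW + 1
    out = np.zeros((C_out, H_out, W_out))
    for co in range(C_out):
        for i in range(H_out):
            for j in range(W_out):
                out[co, i, j] = np.sum(x[:, i:i + kH, j:j + kW] * w[co])
    if bias is not None:
        out += bias[:, None, None]  # broadcast bias over spatial dims
    return out


rng = np.random.default_rng(0)
x = rng.standard_normal((3, 5, 5))
w = rng.standard_normal((2, 3, 3, 3))
b = rng.standard_normal(2)

fused = conv2d(x, w, bias=b)                  # bias fused into the conv layer
decomposed = conv2d(x, w) + b[:, None, None]  # zero-bias conv + elementwise add
assert np.allclose(fused, decomposed)
```

In TensorRT terms, the "elementwise add" step would map to an elementwise layer taking the convolution output and the bias ITensor (reshaped for broadcasting) as inputs; the exact wiring is left to the converter implementation.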
