Description
aten.unsqueeze, aten.reshape, aten.permute, aten.transpose
-
Function Schema:
torch.ops.aten.arange.start: ((), {})
torch.ops.aten.rsub.Scalar: ((torch.float32,), {})
torch.ops.aten._to_copy.default: ((torch.int32,), {})
torch.ops.aten.embedding.default: ((torch.float32, torch.int64), {})
torch.ops.aten.embedding.default: ((torch.float32, torch.int32), {})
torch.ops.aten.layer_norm.default: ((torch.float32, None, torch.float32, torch.float32), {})
torch.ops.aten.addmm.default: ((torch.float32, torch.float32, torch.float32), {})
torch.ops.aten._softmax.default: ((torch.float32,), {})
torch.ops.aten.where.self: ((torch.bool, torch.float32, torch.float32), {})
-
Original PyTorch API:
torch.arange
torch.embedding
torch.layer_norm
torch.addmm
torch._softmax
torch.where
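For reference, a minimal sketch of a toy module that exercises these APIs and prints the resulting aten targets via torch.export. The module and all names in it are illustrative (not from this issue), and the exact overloads produced depend on the PyTorch version and decomposition settings:

```python
import torch
import torch.nn.functional as F

# Hypothetical toy module (names are made up) touching the APIs above.
class TinyEmbedModel(torch.nn.Module):
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.emb_weight = torch.nn.Parameter(torch.randn(vocab, dim))
        self.lin_weight = torch.nn.Parameter(torch.randn(dim, dim))
        self.lin_bias = torch.nn.Parameter(torch.randn(dim))

    def forward(self, ids, mask):
        pos = torch.arange(0, ids.shape[1])                  # aten.arange.start
        x = F.embedding(ids + pos, self.emb_weight)          # aten.embedding.default
        x = F.layer_norm(x, (x.shape[-1],))                  # aten.layer_norm.default
        x = torch.addmm(self.lin_bias,
                        x.reshape(-1, x.shape[-1]),
                        self.lin_weight)                     # aten.addmm.default
        x = torch.softmax(x, dim=-1)                         # aten._softmax.default
        return torch.where(mask.reshape(-1, 1), x, 1.0 - x)  # aten.where.self, aten.rsub.Scalar

ids = torch.randint(0, 50, (2, 8))
mask = torch.ones(2, 8, dtype=torch.bool)
ep = torch.export.export(TinyEmbedModel(), (ids, mask))
for node in ep.graph_module.graph.nodes:
    if node.op == "call_function":
        print(node.target)  # torch.ops.aten.* targets like the schemas above
```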
-
Relevant TensorRT Documentation: IElementWiseLayer, IConstantLayer
Add support for the above function schemas as aten converters.
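For one of the schemas, aten.rsub.Scalar (out = scalar - alpha * input), a minimal converter sketch composed from IConstantLayer and IElementWiseLayer. It assumes the torch_tensorrt.fx converter registry and its (network, target, args, kwargs, name) converter signature, which may differ between versions, and it ignores the alpha argument for brevity:

```python
import numpy as np
import tensorrt as trt
import torch
from torch_tensorrt.fx.converter_registry import tensorrt_converter

# Sketch only: the registry import and converter signature are assumptions
# about the fx converter path; alpha is assumed to be 1.
@tensorrt_converter(torch.ops.aten.rsub.Scalar)
def aten_ops_rsub_scalar(network, target, args, kwargs, name):
    input_trt = args[0]      # ITensor for the tensor operand
    scalar = float(args[1])  # Python scalar operand

    # Broadcastable constant holding the scalar (IConstantLayer).
    const = network.add_constant(
        (1,) * len(input_trt.shape),
        trt.Weights(np.array([scalar], dtype=np.float32)),
    )
    const.name = f"{name}_scalar"

    # scalar - input via IElementWiseLayer with ElementWiseOperation.SUB.
    sub = network.add_elementwise(
        const.get_output(0), input_trt, trt.ElementWiseOperation.SUB
    )
    sub.name = f"{name}_rsub"
    return sub.get_output(0)
```

The remaining schemas would map similarly: scalar constants onto IConstantLayer plus IElementWiseLayer, while embedding, addmm, _softmax, and where map naturally onto TensorRT's gather, matrix-multiply, softmax, and select layers.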