[Converter] Add support for assorted operators in the FX aten path #1769

Closed
@gs-olive

Description

aten.unsqueeze, aten.reshape, aten.permute, aten.transpose

  • Function Schemas (see the note on this notation after the list):

    • torch.ops.aten.arange.start: ((), {})
    • torch.ops.aten.rsub.Scalar: ((torch.float32,), {})
    • torch.ops.aten._to_copy.default: ((torch.int32,), {})
    • torch.ops.aten.embedding.default: ((torch.float32, torch.int64), {})
    • torch.ops.aten.embedding.default: ((torch.float32, torch.int32), {})
    • torch.ops.aten.layer_norm.default: ((torch.float32, None, torch.float32, torch.float32), {})
    • torch.ops.aten.addmm.default: ((torch.float32, torch.float32, torch.float32), {})
    • torch.ops.aten._softmax.default: ((torch.float32,), {})
    • torch.ops.aten.where.self: ((torch.bool, torch.float32, torch.float32), {})
  • Original PyTorch API: torch.arange, torch.embedding, torch.layer_norm, torch.addmm, torch._softmax, torch.where

  • Relevant TensorRT Documentation: IElementWiseLayer, IConstantLayer
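
For reference, each schema entry above appears to pair an aten overload with the dtypes of its positional and keyword inputs: for example, `((torch.float32, torch.int64), {})` under `aten.embedding.default` would indicate a `float32` weight tensor and `int64` indices, with no keyword inputs. The full call signature of any overload can be inspected with plain PyTorch (the output shown in the comment is approximate and version-dependent):

```python
import torch

# Every OpOverload carries its TorchScript schema on ._schema.
print(torch.ops.aten.embedding.default._schema)
# e.g. aten::embedding(Tensor weight, Tensor indices, int padding_idx=-1,
#                      bool scale_grad_by_freq=False, bool sparse=False) -> Tensor
```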

Add support for the above function schemas as aten converters.
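
For concreteness, below is a minimal sketch of what one such converter could look like, using `aten.rsub.Scalar`, which maps naturally onto the IConstantLayer/IElementWiseLayer pair referenced above. It assumes the `torch_tensorrt.fx` converter registry decorator and omits the broadcasting/dtype-promotion helpers a real converter in the codebase would use:

```python
import numpy as np
import tensorrt as trt
import torch
from torch_tensorrt.fx.converter_registry import tensorrt_converter


@tensorrt_converter(torch.ops.aten.rsub.Scalar)
def aten_ops_rsub_scalar(network, target, args, kwargs, name):
    # aten.rsub.Scalar(input, other, alpha=1) computes: other - alpha * input
    input_trt = args[0]                      # trt.ITensor from upstream converters
    other = args[1]                          # Python scalar
    alpha = args[2] if len(args) > 2 else 1  # optional scalar multiplier

    # Scalars must become network constants (IConstantLayer) before they can
    # participate in elementwise math; shape (1, 1, ..., 1) broadcasts freely.
    shape = (1,) * len(input_trt.shape)
    other_trt = network.add_constant(
        shape, np.full(shape, other, dtype=np.float32)
    ).get_output(0)

    scaled = input_trt
    if alpha != 1:
        alpha_trt = network.add_constant(
            shape, np.full(shape, alpha, dtype=np.float32)
        ).get_output(0)
        scaled = network.add_elementwise(
            input_trt, alpha_trt, trt.ElementWiseOperation.PROD
        ).get_output(0)

    # Final subtraction via IElementWiseLayer: other - alpha * input.
    layer = network.add_elementwise(
        other_trt, scaled, trt.ElementWiseOperation.SUB
    )
    layer.name = name
    return layer.get_output(0)
```

The remaining schemas would presumably follow the same registration pattern with different TensorRT layers, e.g. `aten.where.self` through `network.add_select` and `aten.embedding.default` through `network.add_gather`.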
