
InstanceNorm always has training attribute set to True #1262

Closed
@yuanyao-nv

Description

I'm exporting a small ONNX model consisting of just an InstanceNorm2d op. The exported model contains a BatchNormalization node with the training attribute set to True.

[screenshot of the exported graph showing training = 1 on the node]

Here's my script:

import torch
import torch.nn as nn


class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.instancenorm = torch.nn.InstanceNorm2d(100)

    def forward(self, tensor_x: torch.Tensor):
        output = self.instancenorm(tensor_x)
        return output


def Dataloader():
    yield torch.randn(20, 100, 35, 45).cuda()


model = Model()
data = next(Dataloader())

export_output = torch.onnx.dynamo_export(
    model.eval().to('cuda'),
    data,
)
export_output.save('instancenorm_dynamo.onnx')
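
For reference, here is a rough sketch of how the attribute can be confirmed on the saved file. It assumes the onnx Python package is installed and that the op may live either in the top-level graph or in one of the local functions the dynamo exporter emits:

import onnx

onnx_model = onnx.load('instancenorm_dynamo.onnx')

# dynamo_export may place the op inside a local function rather than the top-level graph
all_nodes = list(onnx_model.graph.node)
for func in onnx_model.functions:
    all_nodes.extend(func.node)

for node in all_nodes:
    if node.op_type == 'BatchNormalization':
        for attr in node.attribute:
            if attr.name == 'training_mode':
                print(node.name, 'training_mode =', attr.i)  # 1 means training mode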

If I understand correctly, the origin of this issue should be in the torch repo, since onnxscript just passes through whatever value it receives for the training input of aten_native_batch_norm()?
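
To narrow that down on the torch side, one way (a sketch, assuming a PyTorch 2.x build where torch.export is available, and reusing model/data from the script above) is to print the decomposed ATen graph before ONNX export enters the picture and check which value is passed for the training argument of the batch-norm call:

import torch

exported = torch.export.export(model.eval(), (data,))
# Look for the instance_norm / batch_norm call in the printed graph and check
# the constant passed for its training argument.
print(exported.graph)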

Thanks.
