[Bug] [Relay] attribute track_running_stats of InstanceNorm lead to wrong inference results #14926

@jikechao

Description

For the layers InstanceNorm1d and InstanceNorm3d, if the attribute track_running_stats is set to True, TVM gives inference results that differ from PyTorch's.
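For context, PyTorch's documented behavior: with track_running_stats=True, InstanceNorm normalizes with the stored running statistics in eval mode, not with per-instance statistics. The sketch below contrasts the two semantics; the per-instance branch is my assumption about what a converter might emit instead, not confirmed TVM behavior.

```python
import torch

torch.manual_seed(0)

m = torch.nn.InstanceNorm1d(3, track_running_stats=True).eval()
x = torch.randn(1, 3, 5)

# Eval mode with track_running_stats=True: PyTorch normalizes with the
# stored running statistics (affine=False is the InstanceNorm default).
rm = m.running_mean.view(1, 3, 1)
rv = m.running_var.view(1, 3, 1)
expected = (x - rm) / torch.sqrt(rv + m.eps)
assert torch.allclose(m(x), expected, atol=1e-5)

# Per-instance normalization, i.e. what eval mode computes when
# track_running_stats=False; in general this differs from the above.
inst = (x - x.mean(dim=2, keepdim=True)) / torch.sqrt(
    x.var(dim=2, unbiased=False, keepdim=True) + m.eps)
assert not torch.allclose(m(x), inst, atol=1e-3)
```

If the converter always emits the per-instance form, the mismatch below would follow directly.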

Expected behavior

For the same input data, TVM and PyTorch give the same inference results.

Actual behavior

[screenshot omitted: np.testing.assert_allclose failure showing mismatched TVM and PyTorch outputs]

Steps to reproduce

import torch
from tvm import relay
import tvm
import numpy as np

# InstanceNorm1d in eval mode with track_running_stats=True
m = torch.nn.InstanceNorm1d(3, track_running_stats=True).eval()
input_data = [torch.randn([1, 3, 5], dtype=torch.float32)]

torch_outputs = m(*[input_.clone() for input_ in input_data])
trace = torch.jit.trace(m, input_data)
input_shapes = [('input0', torch.Size([1, 3, 5]))]

mod, params = relay.frontend.from_pytorch(trace, input_shapes)

with tvm.transform.PassContext(opt_level=3):
    exe = relay.create_executor('aot', mod=mod, params=params, device=tvm.cpu(0), target='llvm').evaluate()
input_tvm = {'input0': input_data[0].numpy()}
tvm_outputs = exe(**input_tvm).asnumpy()

# Fails: the TVM and PyTorch outputs diverge beyond the tolerance
np.testing.assert_allclose(torch_outputs.numpy(), tvm_outputs, rtol=1e-3, atol=1e-3)
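A possible interim workaround (my assumption, not a confirmed fix): construct the module with track_running_stats=False before tracing, so that PyTorch's eval-mode output uses the same per-instance statistics a converter would naturally compute. The PyTorch side of that equivalence can be checked directly:

```python
import torch

torch.manual_seed(0)

# Hypothetical workaround sketch: without running statistics, eval mode
# normalizes each instance with its own mean and (biased) variance.
m = torch.nn.InstanceNorm1d(3, track_running_stats=False).eval()
x = torch.randn(1, 3, 5)

out = m(x)
manual = (x - x.mean(dim=2, keepdim=True)) / torch.sqrt(
    x.var(dim=2, unbiased=False, keepdim=True) + m.eps)
assert torch.allclose(out, manual, atol=1e-5)
```

Whether the converted TVM module then matches would still need to be verified against this build.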

Triage

  • frontend:torch
  • needs-triage

cc @shingjan

Metadata

Assignees

No one assigned

    Labels

    needs-triage: PRs or issues that need to be investigated by maintainers to find the right assignees to address it
    type: bug

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests
