[Bug Report] Adding broadcasted tensors gives wrong results #16352
opened on Dec 30, 2024
Describe the bug
Adding a (1, 1) tensor holding a large value (still well within float32's range) to a (32, 32) tensor gives incorrect results for float32.
It appears to work for bf16, when both tensors have the same shape, or when the value is relatively small.
To Reproduce
Run the following code:
import torch
import ttnn

def test_add_1m(device):
    torch.manual_seed(0)
    # (1, 1) tensor holding a large value, still well within float32 range
    a = torch.ones(1, 1) * 1_000_000
    # (32, 32) tensor of ones; the add broadcasts a across it
    b = torch.ones(32, 32)
    c = a + b
    ta = ttnn.from_torch(a, device=device, layout=ttnn.TILE_LAYOUT)
    tb = ttnn.from_torch(b, device=device, layout=ttnn.TILE_LAYOUT)
    tc = ttnn.add(ta, tb)
    assert torch.allclose(c, ttnn.to_torch(tc)), f"{c} != {ttnn.to_torch(tc)}"
> assert torch.allclose(c, ttnn.to_torch(tc)), f"{c} != {ttnn.to_torch(tc)}"
E AssertionError: tensor([[1000001., 1000001., 1000001., ..., 1000001., 1000001., 1000001.],
E [1000001., 1000001., 1000001., ..., 1000001., 1000001., 1000001.],
E [1000001., 1000001., 1000001., ..., 1000001., 1000001., 1000001.],
E ...,
E [1000001., 1000001., 1000001., ..., 1000001., 1000001., 1000001.],
E [1000001., 1000001., 1000001., ..., 1000001., 1000001., 1000001.],
E [1000001., 1000001., 1000001., ..., 1000001., 1000001., 1000001.]]) != TorchTensor([[131008., 131008., 131008., ..., 131008., 131008., 131008.],
E [131008., 131008., 131008., ..., 131008., 131008., 131008.],
E [131008., 131008., 131008., ..., 131008., 131008., 131008.],
E ...,
E [131008., 131008., 131008., ..., 131008., 131008., 131008.],
E [131008., 131008., 131008., ..., 131008., 131008., 131008.],
E [131008., 131008., 131008., ..., 131008., 131008., 131008.]])
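For reference, the variants described above that do not appear to reproduce the issue can be exercised with a similar test. This is a minimal sketch, assuming the same ttnn.from_torch / ttnn.add / ttnn.to_torch API and device fixture as the repro; the test name and the choice of constants are placeholders:

import torch
import ttnn

def test_add_1m_passing_variants(device):
    torch.manual_seed(0)

    # Same shapes (no broadcasting): appears to give the correct result.
    a = torch.ones(32, 32) * 1_000_000
    b = torch.ones(32, 32)
    tc = ttnn.add(
        ttnn.from_torch(a, device=device, layout=ttnn.TILE_LAYOUT),
        ttnn.from_torch(b, device=device, layout=ttnn.TILE_LAYOUT),
    )
    assert torch.allclose(a + b, ttnn.to_torch(tc))

    # Broadcast with a relatively small value: also appears correct.
    a = torch.ones(1, 1) * 10.0
    b = torch.ones(32, 32)
    tc = ttnn.add(
        ttnn.from_torch(a, device=device, layout=ttnn.TILE_LAYOUT),
        ttnn.from_torch(b, device=device, layout=ttnn.TILE_LAYOUT),
    )
    assert torch.allclose(a + b, ttnn.to_torch(tc))

    # bf16 inputs (converted on the torch side): also appears correct,
    # within bf16 precision.
    a = (torch.ones(1, 1) * 1_000_000).to(torch.bfloat16)
    b = torch.ones(32, 32).to(torch.bfloat16)
    tc = ttnn.add(
        ttnn.from_torch(a, device=device, layout=ttnn.TILE_LAYOUT),
        ttnn.from_torch(b, device=device, layout=ttnn.TILE_LAYOUT),
    )
    assert torch.allclose(a + b, ttnn.to_torch(tc))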
Environment information:
- OS: Ubuntu 22.04
- Version of software: 6b3b9ec