
Reductions include padding on TILE_LAYOUT inputs. #16272

Open
@LPanosTT

Description

The following test fails:

```python
import torch
import ttnn


def test_broken_reduction(device):
    shape = (1, 10)

    torch_input = torch.zeros(shape, dtype=torch.bfloat16)
    golden = torch.exp(torch_input).sum(dim=1, keepdim=True)

    tt_input = ttnn.from_torch(torch_input, device=device)
    tt_input = ttnn.to_layout(tt_input, ttnn.TILE_LAYOUT)

    tt_exp = ttnn.exp(tt_input)  # padded values become 1.0
    tt_sum = ttnn.sum(tt_exp, dim=1, keepdim=True)  # reduction includes the padded values
    tt_out = ttnn.to_layout(tt_sum, ttnn.ROW_MAJOR_LAYOUT)
    tt_out = ttnn.from_device(tt_out)
    tt_out = ttnn.to_torch(tt_out)  # result is [[32.0]] rather than [[10.0]]

    assert torch.allclose(tt_out, golden)
```

I have a 1x10 tensor of values (all 0 in this case) that I need to exponentiate and then sum. Because ttnn.exp is an element-wise operation, the input must be in TILE_LAYOUT. When the op executes, the padded zeros introduced by ttnn.to_layout are mapped to 1.0 (since exp(0) == 1). The subsequent reduction then includes those 1.0 values in the sum, giving an incorrect result.
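The arithmetic behind the wrong answer can be sketched in plain Python. This is a minimal model, assuming the standard 32-wide tile: the (1, 10) row is padded to 32 lanes with zeros, exp turns every lane (real and padded) into 1.0, and the sum runs over all 32 lanes instead of the 10 logical ones.

```python
import math

TILE_WIDTH = 32          # assumed tile width in the last dimension
real_len = 10            # logical width of the (1, 10) tensor
pad_len = TILE_WIDTH - real_len

# to_layout pads the row with zeros out to the tile boundary
padded_row = [0.0] * real_len + [0.0] * pad_len

# exp runs over the whole padded tile, so the 22 padding zeros become 1.0
after_exp = [math.exp(x) for x in padded_row]

# the reduction sums all 32 lanes, not just the 10 logical ones
bad_sum = sum(after_exp)              # 32.0, what ttnn.sum reports
good_sum = sum(after_exp[:real_len])  # 10.0, the expected result
```

This reproduces the observed [[32.0]] vs. [[10.0]] discrepancy: any element-wise op that does not map 0 to 0 (exp, cos, reciprocal, ...) turns the padding into nonzero garbage that the reduction then folds in.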
