[Linalg][Canonicalization] Add Tensor -> Scalar Pattern #11

Conversation

chrsmcgrr
Collaborator

There are many other patterns and checks within Linalg that define a
scalar to be a single value that is not a single element tensor type.

This blocks certain transforms (LinalgSpecialization) and checks.

Take for example the following:

```
module {
  func.func public @main() -> tensor<32x256xf32> {
    %cst = arith.constant dense<0.58510226> : tensor<1xf32>
    %0 = tensor.empty() : tensor<32x256xf32>
    %1 = linalg.generic {indexing_maps = [#map, #map1], iterator_types = ["parallel", "parallel"]} ins(%cst : tensor<1xf32>) outs(%0 : tensor<32x256xf32>) {
    ^bb0(%in: f32, %out: f32):
      linalg.yield %in : f32
    } -> tensor<32x256xf32>
    func.return %1 : tensor<32x256xf32>
  }
}
```

The generic in question performs a broadcast from a scalar to a tensor.
However, the transforms and checks that would identify this operation as a
FillOp do not trigger, because they define a scalar input as a plain
value rather than a single-element RankedTensor.
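
The mismatch can be illustrated with a short Python sketch. The two predicates below are toy stand-ins (not MLIR's actual API) for how many Linalg checks classify an operand versus what the broadcast input above actually is:

```python
def is_scalar(type_str: str) -> bool:
    # Mimics how several Linalg checks classify an operand: any tensor
    # type is rejected, even if it holds exactly one element.
    return not type_str.startswith("tensor")

def is_single_element_tensor(shape: list) -> bool:
    # The kind of operand that blocks specialization: a ranked tensor
    # whose dimensions multiply out to one element, e.g. tensor<1xf32>.
    num_elements = 1
    for dim in shape:
        num_elements *= dim
    return num_elements == 1

# The broadcast input above is tensor<1xf32>: one element, yet it is
# not a "scalar" under the existing definition.
print(is_scalar("tensor<1xf32>"))     # False
print(is_single_element_tensor([1]))  # True
```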

The following change attempts to reconcile this discrepancy by
converting any operand of a generic that is known to be a
single-element tensor into a true scalar, resulting in the following:

```
module {
  func.func public @main() -> tensor<32x256xf32> {
    %cst = arith.constant 0.58510226 : f32
    %0 = tensor.empty() : tensor<32x256xf32>
    %1 = linalg.generic {indexing_maps = [#map, #map1], iterator_types = ["parallel", "parallel"]} ins(%cst : f32) outs(%0 : tensor<32x256xf32>) {
    ^bb0(%in: f32, %out: f32):
      linalg.yield %in : f32
    } -> tensor<32x256xf32>
    func.return %1 : tensor<32x256xf32>
  }
}
```
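
A minimal Python model of the rewrite (toy data structures, not MLIR's C++ pattern API): a constant operand whose type is a one-element tensor is replaced by the extracted scalar value, which is what lets downstream checks such as LinalgSpecialization recognize the generic as a fill/broadcast.

```python
from dataclasses import dataclass, field

@dataclass
class Operand:
    value: float
    type: str          # e.g. "tensor<1xf32>" or "f32"
    shape: list = field(default_factory=list)

def fold_single_element_tensor(operand: Operand) -> Operand:
    # Count the elements of the (ranked) tensor operand.
    num_elements = 1
    for dim in operand.shape:
        num_elements *= dim
    # A one-element constant tensor becomes a true scalar; everything
    # else is left untouched, mirroring a conservative canonicalization.
    if operand.type.startswith("tensor") and num_elements == 1:
        return Operand(operand.value, "f32", [])
    return operand

before = Operand(0.58510226, "tensor<1xf32>", [1])
after = fold_single_element_tensor(before)
print(after.type)  # f32
```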

Collaborator Author

chrsmcgrr commented May 19, 2025

This stack of pull requests is managed by Graphite. Learn more about stacking.

@chrsmcgrr chrsmcgrr changed the title [Linalg][Canonicalizaion] Add Tensor -> Scalar Pattern [Linalg][Canonicalization] Add Tensor -> Scalar Pattern May 19, 2025
@chrsmcgrr
Collaborator Author

Opting for another solution.

@chrsmcgrr chrsmcgrr closed this May 22, 2025