
[mlir][bufferization][NFC] Rename to_memref to to_buffer #137180


Merged 1 commit on May 14, 2025
14 changes: 7 additions & 7 deletions mlir/docs/Bufferization.md
@@ -202,13 +202,13 @@ e.g.:
%2 = "my_dialect.yet_another_op"(%0) : (tensor<?xf32>) -> (tensor<?xf32>)
```

## Tensor / MemRef Boundary
## Tensor / Buffer Boundary

The bufferization dialect provides a few helper ops to connect tensor IR (that
should be bufferized) with existing buffers (that may be allocated/provided by
a different runtime/library/etc.).

`bufferization.to_memref %t` returns the future buffer of a tensor SSA value.
`bufferization.to_buffer %t` returns the future buffer of a tensor SSA value.
`bufferization.to_tensor %m` returns a tensor SSA value for a given MemRef
buffer. `bufferization.materialize_in_destination` indicates that a tensor value
should materialize in a certain buffer.
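
As a sketch of how these boundary ops combine (the function name, shapes, and `my_dialect.some_op` are illustrative, and the exact printed syntax of these ops may vary across MLIR versions):

```mlir
// Bridge an externally allocated buffer into tensor IR and materialize the
// result in a second external buffer.
func.func @boundary(%ext_in: memref<8xf32>, %ext_out: memref<8xf32>) {
  %t = bufferization.to_tensor %ext_in : memref<8xf32> to tensor<8xf32>
  %r = "my_dialect.some_op"(%t) : (tensor<8xf32>) -> tensor<8xf32>
  bufferization.materialize_in_destination %r in writable %ext_out
      : (tensor<8xf32>, memref<8xf32>) -> ()
  return
}
```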
@@ -268,7 +268,7 @@ By default, One-Shot Bufferize fails when it encounters an op with tensor
semantics (i.e., tensor result or tensor operand) that is not bufferizable
(i.e., does not implement `BufferizableOpInterface`). This can be avoided with
`allow-unknown-ops`. In that case, One-Shot Bufferize inserts
`to_memref`/`to_tensor` ops around the bufferization boundary.
`to_buffer`/`to_tensor` ops around the bufferization boundary.

One-Shot Bufferize can be configured to bufferize only ops from a set of
dialects with `dialect-filter`.
@@ -291,7 +291,7 @@ memref. The layout map of the memref type can be controlled with

One-Shot Bufferize bufferizes ops from top to bottom. This works well when all
ops are bufferizable. However, when encountering a non-bufferizable tensor with
`allow-unknown-ops`, One-Shot Bufferize must insert `to_memref` ops at the
`allow-unknown-ops`, One-Shot Bufferize must insert `to_buffer` ops at the
bufferization boundary and decide on a memref type. By default, One-Shot
Bufferize chooses the most dynamic memref type w.r.t. layout maps. E.g.:

@@ -300,12 +300,12 @@ Bufferize choose the most dynamic memref type wrt. layout maps. E.g.:
%1 = tensor.extract %0[%idx1, %idx2] : tensor<?xf32>
```

When bufferizing the above IR, One-Shot Bufferize inserts a `to_memref` op with
When bufferizing the above IR, One-Shot Bufferize inserts a `to_buffer` op with
dynamic offset and strides:

```mlir
%0 = "my_dialect.unbufferizable_op"(%t) : (tensor<?x?xf32>) -> (tensor<?x?xf32>)
%0_m = bufferization.to_memref %0 : memref<?x?xf32, strided<[?, ?], offset: ?>>
%0_m = bufferization.to_buffer %0 : memref<?x?xf32, strided<[?, ?], offset: ?>>
%1 = memref.load %0_m[%idx1, %idx2] : memref<?x?xf32, strided<[?, ?], offset: ?>>
```

@@ -335,7 +335,7 @@ generation of layout maps when no precise layout can be inferred:
* `identity-layout-map` uses static identity layout maps. This option can be
useful for legacy code that cannot handle memref types with layout maps.
Note that this setting can lead to additional buffer copies when folding a
`to_tensor`/`to_memref` pair with memref types that are not cast-compatible.
`to_tensor`/`to_buffer` pair with memref types that are not cast-compatible.
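
For illustration, such a copy arises when the pair's memref types differ only in layout (a hypothetical sketch; exact printed syntax may vary):

```mlir
// The producer was bufferized with a strided (non-identity) layout...
%0 = bufferization.to_tensor %m : memref<?xf32, strided<[?], offset: ?>> to tensor<?xf32>
// ...but identity-layout-map forces an identity-layout result type here.
%1 = bufferization.to_buffer %0 : tensor<?xf32> to memref<?xf32>
// The two memref types are not cast-compatible, so folding the pair
// requires a new allocation and a memref.copy instead of a memref.cast.
```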

Note: The `unknown-type-conversion` option does not affect layout maps of
function signatures. There is a separate `function-signature-type-conversion`
@@ -302,7 +302,7 @@ struct BufferizationOptions {
Value to) const;

/// Specifies whether non-bufferizable ops are allowed in the input. If so,
/// bufferization.to_memref and bufferization.to_tensor ops are inserted at
/// bufferization.to_buffer and bufferization.to_tensor ops are inserted at
/// the boundaries.
bool allowUnknownOps = false;

@@ -587,7 +587,7 @@ allocateTensorForShapedValue(OpBuilder &b, Location loc, Value shapedValue,
bool copy = true);

/// Lookup the buffer for the given value. If the value was not bufferized
/// yet, wrap it in a ToMemrefOp. Otherwise, it is the result of a ToTensorOp,
/// yet, wrap it in a ToBufferOp. Otherwise, it is the result of a ToTensorOp,
/// from which the memref operand is returned.
FailureOr<Value> getBuffer(RewriterBase &rewriter, Value value,
const BufferizationOptions &options);
8 changes: 4 additions & 4 deletions mlir/include/mlir/Dialect/Bufferization/IR/Bufferization.h
@@ -56,10 +56,10 @@ FailureOr<Value> castOrReallocMemRefValue(OpBuilder &b, Value value,
MemRefType type,
const BufferizationOptions &options);

/// Try to fold to_memref(to_tensor(x)). If x's type and the result type of the
/// to_memref op are different, a memref.cast is needed.
LogicalResult foldToMemrefToTensorPair(RewriterBase &rewriter,
ToMemrefOp toMemref,
/// Try to fold to_buffer(to_tensor(x)). If x's type and the result type of the
/// to_buffer op are different, a memref.cast is needed.
LogicalResult foldToBufferToTensorPair(RewriterBase &rewriter,
ToBufferOp toBuffer,
const BufferizationOptions &options);

/// Add the canonicalization patterns for bufferization.dealloc to the given
12 changes: 6 additions & 6 deletions mlir/include/mlir/Dialect/Bufferization/IR/BufferizationOps.td
@@ -394,7 +394,7 @@ def Bufferization_ToTensorOp : Bufferization_Op<"to_tensor", [
An operation that creates a tensor from a `memref`. The result value is a
tensor whose shape and element type match the memref operand.

The opposite of this op is `to_memref`. Together, these two ops are
The opposite of this op is `to_buffer`. Together, these two ops are
useful for source/target materializations when doing type conversions
involving tensors and memrefs.
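
For example, a boundary round trip may look as follows (a sketch; the exact printed syntax varies across MLIR versions):

```mlir
%t = bufferization.to_tensor %m : memref<4xf32> to tensor<4xf32>
%b = bufferization.to_buffer %t : tensor<4xf32> to memref<4xf32>
// After bufferization completes, this to_tensor/to_buffer pair folds away
// and %b is replaced by %m.
```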

@@ -459,7 +459,7 @@ def Bufferization_ToTensorOp : Bufferization_Op<"to_tensor", [

LogicalResult bufferize(RewriterBase &rewriter,
const BufferizationOptions &options) const {
// to_tensor/to_memref pairs fold away after bufferization.
// to_tensor/to_buffer pairs fold away after bufferization.
return success();
}

@@ -490,10 +490,10 @@ def Bufferization_ToTensorOp : Bufferization_Op<"to_tensor", [


//===----------------------------------------------------------------------===//
// ToMemrefOp
// ToBufferOp
//===----------------------------------------------------------------------===//

def Bufferization_ToMemrefOp : Bufferization_Op<"to_memref", [
def Bufferization_ToBufferOp : Bufferization_Op<"to_buffer", [
BufferizableOpInterface,
SameOperandsAndResultShape,
SameOperandsAndResultElementType,
@@ -507,7 +507,7 @@ def Bufferization_ToMemrefOp : Bufferization_Op<"to_memref", [

```mlir
// Result type is memref<4x?xf32, #layout, 0>
%m = bufferization.to_memref %t : tensor<4x?xf32> to memref<4x?xf32, #layout, 0>
%m = bufferization.to_buffer %t : tensor<4x?xf32> to memref<4x?xf32, #layout, 0>
```

This operation is a specialized variant of the built-in
@@ -527,7 +527,7 @@ def Bufferization_ToMemrefOp : Bufferization_Op<"to_memref", [
// BufferizableOpInterface implementation
//===------------------------------------------------------------------===//

// Note: ToMemrefOp / ToTensorOp are temporary ops that are inserted at the
// Note: ToBufferOp / ToTensorOp are temporary ops that are inserted at the
// bufferization boundary. When One-Shot bufferization is complete, there
// should be no such ops left over. If `allowUnknownOps` (or after running a
// partial bufferization pass), such ops may be part of the resulting IR,
@@ -50,7 +50,7 @@ LogicalResult bufferizeOp(Operation *op, const BufferizationOptions &options,
/// Bufferize the signature of `block` and its callers (i.e., ops that have the
/// given block as a successor). All block argument types are changed to memref
/// types. All corresponding operands of all callers are wrapped in
/// bufferization.to_memref ops. All uses of bufferized tensor block arguments
/// bufferization.to_buffer ops. All uses of bufferized tensor block arguments
/// are wrapped in bufferization.to_tensor ops.
///
/// It is expected that all callers implement the `BranchOpInterface`.
@@ -47,7 +47,7 @@ def OwnershipBasedBufferDeallocationPass
Otherwise, the pass that bufferizes the remaining tensors is responsible for
adding the corresponding deallocation operations. Note that this pass does not
consider any values of tensor type and assumes that MemRef values defined by
`bufferization.to_memref` do not return ownership and do not have to be
`bufferization.to_buffer` do not return ownership and do not have to be
deallocated. `bufferization.to_tensor` operations are handled similarly to
`bufferization.clone` operations with the exception that the result value is
not handled because it's a tensor (not a MemRef).
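
A minimal sketch of the ownership assumption described above (shapes and the generic `"use"` op are hypothetical):

```mlir
// %m is defined by to_buffer, so the deallocation pass treats it as not
// owned: no bufferization.dealloc is inserted for %m in this block.
%m = bufferization.to_buffer %t : tensor<16xf32> to memref<16xf32>
"use"(%m) : (memref<16xf32>) -> ()
```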
@@ -321,7 +321,7 @@ def OneShotBufferizePass : Pass<"one-shot-bufferize", "ModuleOp"> {

One-Shot Bufferize will by default reject IR that contains non-bufferizable
ops, i.e., ops that do not implement `BufferizableOpInterface`. Such IR can
be allowed with `allow-unknown-ops=1`. In that case, to_memref and to_tensor
be allowed with `allow-unknown-ops=1`. In that case, to_buffer and to_tensor
ops will be generated at the bufferization boundary. This is useful for
compatibility with existing partial bufferization passes: These can
bufferize the remaining IR after running One-Shot Bufferize.
@@ -341,7 +341,7 @@ def OneShotBufferizePass : Pass<"one-shot-bufferize", "ModuleOp"> {

One-Shot Bufferize will by default assume memref types with fully dynamic
layout maps when a precise layout cannot be inferred. E.g., this is the case
when wrapping a non-bufferizable op in to_memref/to_tensor ops. This
when wrapping a non-bufferizable op in to_buffer/to_tensor ops. This
behavior can be overridden with `unknown-type-conversion`. Valid values are
`fully-dynamic-layout-map` and `identity-layout-map`.
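
For illustration, the two settings yield roughly the following wrapped result types for an unknown op (a sketch; the exact printed form may differ by MLIR version):

```mlir
// unknown-type-conversion=fully-dynamic-layout-map (the default):
%m0 = bufferization.to_buffer %t : tensor<?xf32>
    to memref<?xf32, strided<[?], offset: ?>>
// unknown-type-conversion=identity-layout-map:
%m1 = bufferization.to_buffer %t : tensor<?xf32> to memref<?xf32>
```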

16 changes: 8 additions & 8 deletions mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
@@ -266,9 +266,9 @@ def SparseTensor_ToPositionsOp : SparseTensor_Op<"positions",
let summary = "Extracts the `level`-th positions array of the `tensor`";
let description = [{
Returns the positions array of the tensor's storage at the given
level. This is similar to the `bufferization.to_memref` operation
level. This is similar to the `bufferization.to_buffer` operation
in the sense that it provides a bridge between a tensor world view
and a bufferized world view. Unlike the `bufferization.to_memref`
and a bufferized world view. Unlike the `bufferization.to_buffer`
operation, however, this sparse operation actually lowers into code
that extracts the positions array from the sparse storage itself
(either by calling a support library or through direct code).
@@ -295,9 +295,9 @@ def SparseTensor_ToCoordinatesOp : SparseTensor_Op<"coordinates",
let summary = "Extracts the `level`-th coordinates array of the `tensor`";
let description = [{
Returns the coordinates array of the tensor's storage at the given
level. This is similar to the `bufferization.to_memref` operation
level. This is similar to the `bufferization.to_buffer` operation
in the sense that it provides a bridge between a tensor world view
and a bufferized world view. Unlike the `bufferization.to_memref`
and a bufferized world view. Unlike the `bufferization.to_buffer`
operation, however, this sparse operation actually lowers into code
that extracts the coordinates array from the sparse storage itself
(either by calling a support library or through direct code).
@@ -326,9 +326,9 @@ def SparseTensor_ToCoordinatesBufferOp : SparseTensor_Op<"coordinates_buffer",
Returns the linear coordinates array for a sparse tensor with
a trailing COO region with at least two levels. It is an error
if the tensor doesn't contain such a COO region. This is similar
to the `bufferization.to_memref` operation in the sense that it
to the `bufferization.to_buffer` operation in the sense that it
provides a bridge between a tensor world view and a bufferized
world view. Unlike the `bufferization.to_memref` operation,
world view. Unlike the `bufferization.to_buffer` operation,
however, this operation actually lowers into code that extracts
the linear coordinates array from the sparse storage scheme that
stores the coordinates for the COO region as an array of structures.
@@ -359,9 +359,9 @@ def SparseTensor_ToValuesOp : SparseTensor_Op<"values",
let description = [{
Returns the values array of the sparse storage format for the given
sparse tensor, independent of the actual dimension. This is similar to
the `bufferization.to_memref` operation in the sense that it provides a bridge
the `bufferization.to_buffer` operation in the sense that it provides a bridge
between a tensor world view and a bufferized world view. Unlike the
`bufferization.to_memref` operation, however, this sparse operation actually
`bufferization.to_buffer` operation, however, this sparse operation actually
lowers into code that extracts the values array from the sparse storage
scheme (either by calling a support library or through direct code).
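
A sketch of the op in use (the `#CSR` encoding, shapes, and `%sparse` value are illustrative):

```mlir
#CSR = #sparse_tensor.encoding<{ map = (i, j) -> (i : dense, j : compressed) }>
// Extracts the values array directly from the sparse storage scheme.
%values = sparse_tensor.values %sparse : tensor<64x64xf64, #CSR> to memref<?xf64>
```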

2 changes: 1 addition & 1 deletion mlir/lib/Conversion/MeshToMPI/MeshToMPI.cpp
@@ -576,7 +576,7 @@ struct ConvertUpdateHaloOp : public OpConversionPattern<UpdateHaloOp> {
auto tensorType = MemRefType::get(
dstShape, cast<ShapedType>(array.getType()).getElementType());
array =
rewriter.create<bufferization::ToMemrefOp>(loc, tensorType, array);
rewriter.create<bufferization::ToBufferOp>(loc, tensorType, array);
}
auto rank = cast<ShapedType>(array.getType()).getRank();
auto opSplitAxes = adaptor.getSplitAxes().getAxes();
16 changes: 8 additions & 8 deletions mlir/lib/Dialect/Bufferization/IR/BufferizableOpInterface.cpp
@@ -624,8 +624,8 @@ bool AnalysisState::canOmitTensorCopy(OpOperand &opOperand) const {
}

bool AnalysisState::isInPlace(OpOperand &opOperand) const {
// ToMemrefOps are always in-place.
if (isa<ToMemrefOp>(opOperand.getOwner()))
// ToBufferOps are always in-place.
if (isa<ToBufferOp>(opOperand.getOwner()))
return true;

// In the absence of analysis information, OpOperands that bufferize to a
@@ -650,13 +650,13 @@ bool AnalysisState::hasUndefinedContents(OpOperand *opOperand) const {
return false;
}

// bufferization.to_memref is not allowed to change the rank.
static void ensureToMemrefOpIsValid(Value tensor, Type memrefType) {
// bufferization.to_buffer is not allowed to change the rank.
static void ensureToBufferOpIsValid(Value tensor, Type memrefType) {
#ifndef NDEBUG
auto rankedTensorType = llvm::dyn_cast<RankedTensorType>(tensor.getType());
assert((!rankedTensorType || llvm::cast<MemRefType>(memrefType).getRank() ==
rankedTensorType.getRank()) &&
"to_memref would be invalid: mismatching ranks");
"to_buffer would be invalid: mismatching ranks");
#endif
}

@@ -671,15 +671,15 @@ FailureOr<Value> bufferization::getBuffer(RewriterBase &rewriter, Value value,
if (auto toTensorOp = value.getDefiningOp<bufferization::ToTensorOp>())
return toTensorOp.getMemref();

// Insert to_memref op.
// Insert to_buffer op.
OpBuilder::InsertionGuard g(rewriter);
setInsertionPointAfter(rewriter, value);
FailureOr<BaseMemRefType> memrefType = getBufferType(value, options);
if (failed(memrefType))
return failure();
ensureToMemrefOpIsValid(value, *memrefType);
ensureToBufferOpIsValid(value, *memrefType);
return rewriter
.create<bufferization::ToMemrefOp>(value.getLoc(), *memrefType, value)
.create<bufferization::ToBufferOp>(value.getLoc(), *memrefType, value)
.getResult();
}
