
Commit 14e7654

[mlir][bufferization][NFC] Rename to_memref to to_buffer
As part of the work on transitioning the bufferization dialect, its ops, and the associated logic to operate on the newly added type interfaces (see 00eaff3), rename bufferization.to_memref to bufferization.to_buffer to highlight the generic nature of the op: the bufferization process produces buffers, whereas memref is a builtin type rather than a generic term. The current API is preserved (to_buffer still produces a memref), however, as the new type interfaces are not used yet.
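For readers skimming the diff below, a minimal sketch of the renamed op in use; the function, value names, and types here are illustrative and not part of this commit:

```mlir
func.func @example(%t: tensor<4x?xf32>) -> memref<4x?xf32> {
  // Previously spelled bufferization.to_memref; the semantics are unchanged
  // and the result is still a builtin memref type.
  %m = bufferization.to_buffer %t : tensor<4x?xf32> to memref<4x?xf32>
  return %m : memref<4x?xf32>
}
```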
Parent commit: 86d8e8d

85 files changed (+511, -516 lines)


mlir/docs/Bufferization.md
Lines changed: 7 additions & 7 deletions

@@ -202,13 +202,13 @@ e.g.:
 %2 = "my_dialect.yet_another_op"(%0) : (tensor<?xf32>) -> (tensor<?xf32>)
 ```

-## Tensor / MemRef Boundary
+## Tensor / Buffer Boundary

 The bufferization dialect provides a few helper ops to connect tensor IR (that
 should be bufferized) with existing buffers (that may be allocated/provided by
 a different runtime/library/etc.).

-`bufferization.to_memref %t` returns the future buffer of a tensor SSA value.
+`bufferization.to_buffer %t` returns the future buffer of a tensor SSA value.
 `bufferization.to_tensor %m` returns a tensor SSA value for a given MemRef
 buffer. `bufferization.materialize_in_destination` indicates that a tensor value
 should materialize in a certain buffer.
@@ -268,7 +268,7 @@ By default, One-Shot Bufferize fails when it encounters an op with tensor
 semantics (i.e., tensor result or tensor operand) that is not bufferizable
 (i.e., does not implement `BufferizableOpInterface`). This can be avoided with
 `allow-unknown-ops`. In that case, One-Shot Bufferize inserts
-`to_memref`/`to_tensor` ops around the bufferization boundary.
+`to_buffer`/`to_tensor` ops around the bufferization boundary.

 One-Shot Bufferize can be configured to bufferize only ops from a set of
 dialects with `dialect-filter`.
@@ -291,7 +291,7 @@ memref. The layout map of the memref type can be controlled with

 One-Shot Bufferize bufferizes ops from top to bottom. This works well when all
 ops are bufferizable. However, when encountering a non-bufferizable tensor with
-`allow-unknown-ops`, One-Shot Bufferize must insert `to_memref` ops at the
+`allow-unknown-ops`, One-Shot Bufferize must insert `to_buffer` ops at the
 bufferization boundary and decide on a memref type. By default, One-Shot
 Bufferize choose the most dynamic memref type wrt. layout maps. E.g.:

@@ -300,12 +300,12 @@ Bufferize choose the most dynamic memref type wrt. layout maps. E.g.:
 %1 = tensor.extract %0[%idx1, %idx2] : tensor<?xf32>
 ```

-When bufferizing the above IR, One-Shot Bufferize inserts a `to_memref` ops with
+When bufferizing the above IR, One-Shot Bufferize inserts a `to_buffer` ops with
 dynamic offset and strides:

 ```mlir
 %0 = "my_dialect.unbufferizable_op(%t) : (tensor<?x?xf32>) -> (tensor<?x?xf32>)
-%0_m = bufferization.to_memref %0 : memref<?x?xf32, strided<[?, ?], offset: ?>>
+%0_m = bufferization.to_buffer %0 : memref<?x?xf32, strided<[?, ?], offset: ?>>
 %1 = memref.load %0_m[%idx1, %idx2] : memref<?x?xf32, strided<[?, ?], offset: ?>>
 ```

@@ -335,7 +335,7 @@ generation of layout maps when no precise layout can be inferred:
 * `identity-layout-map` uses static identity layout maps. This option can be
   useful for legacy code that cannot handle memref types with layout maps.
   Note that this setting can lead to additional buffer copies when folding a
-  `to_tensor`/`to_memref` pair with memref types that are not cast-compatible.
+  `to_tensor`/`to_buffer` pair with memref types that are not cast-compatible.

 Note: The `unknown-type-conversion` option does not affect layout maps of
 function signatures. There is a separate `function-signature-type-conversion`

mlir/include/mlir/Dialect/Bufferization/IR/BufferizableOpInterface.h
Lines changed: 2 additions & 2 deletions

@@ -302,7 +302,7 @@ struct BufferizationOptions {
 Value to) const;

 /// Specifies whether not bufferizable ops are allowed in the input. If so,
-/// bufferization.to_memref and bufferization.to_tensor ops are inserted at
+/// bufferization.to_buffer and bufferization.to_tensor ops are inserted at
 /// the boundaries.
 bool allowUnknownOps = false;

@@ -587,7 +587,7 @@ allocateTensorForShapedValue(OpBuilder &b, Location loc, Value shapedValue,
 bool copy = true);

 /// Lookup the buffer for the given value. If the value was not bufferized
-/// yet, wrap it in a ToMemrefOp. Otherwise, it is the result of a ToTensorOp,
+/// yet, wrap it in a ToBufferOp. Otherwise, it is the result of a ToTensorOp,
 /// from which the memref operand is returned.
 FailureOr<Value> getBuffer(RewriterBase &rewriter, Value value,
 const BufferizationOptions &options);

mlir/include/mlir/Dialect/Bufferization/IR/Bufferization.h
Lines changed: 4 additions & 4 deletions

@@ -56,10 +56,10 @@ FailureOr<Value> castOrReallocMemRefValue(OpBuilder &b, Value value,
 MemRefType type,
 const BufferizationOptions &options);

-/// Try to fold to_memref(to_tensor(x)). If x's type and the result type of the
-/// to_memref op are different, a memref.cast is needed.
-LogicalResult foldToMemrefToTensorPair(RewriterBase &rewriter,
-ToMemrefOp toMemref,
+/// Try to fold to_buffer(to_tensor(x)). If x's type and the result type of the
+/// to_buffer op are different, a memref.cast is needed.
+LogicalResult foldToBufferToTensorPair(RewriterBase &rewriter,
+ToBufferOp toBuffer,
 const BufferizationOptions &options);

 /// Add the canonicalization patterns for bufferization.dealloc to the given

mlir/include/mlir/Dialect/Bufferization/IR/BufferizationOps.td
Lines changed: 6 additions & 6 deletions

@@ -394,7 +394,7 @@ def Bufferization_ToTensorOp : Bufferization_Op<"to_tensor", [
 An operation that creates a tensor from a `memref`. The result value is a
 tensor whose shape and element type match the memref operand.

-The opposite of this op is `to_memref`. Together, these two ops are
+The opposite of this op is `to_buffer`. Together, these two ops are
 useful for source/target materializations when doing type conversions
 involving tensors and memrefs.

@@ -459,7 +459,7 @@ def Bufferization_ToTensorOp : Bufferization_Op<"to_tensor", [

 LogicalResult bufferize(RewriterBase &rewriter,
 const BufferizationOptions &options) const {
-// to_tensor/to_memref pairs fold away after bufferization.
+// to_tensor/to_buffer pairs fold away after bufferization.
 return success();
 }

@@ -490,10 +490,10 @@ def Bufferization_ToTensorOp : Bufferization_Op<"to_tensor", [


 //===----------------------------------------------------------------------===//
-// ToMemrefOp
+// ToBufferOp
 //===----------------------------------------------------------------------===//

-def Bufferization_ToMemrefOp : Bufferization_Op<"to_memref", [
+def Bufferization_ToBufferOp : Bufferization_Op<"to_buffer", [
 BufferizableOpInterface,
 SameOperandsAndResultShape,
 SameOperandsAndResultElementType,
@@ -507,7 +507,7 @@ def Bufferization_ToMemrefOp : Bufferization_Op<"to_memref", [

 ```mlir
 // Result type is memref<4x?xf32, #layout, 0>
-%m = bufferization.to_memref %t : tensor<4x?xf32> to memref<4x?xf32, #layout, 0>
+%m = bufferization.to_buffer %t : tensor<4x?xf32> to memref<4x?xf32, #layout, 0>
 ```

 This operation is a specialized variant of the built-in
@@ -527,7 +527,7 @@ def Bufferization_ToMemrefOp : Bufferization_Op<"to_memref", [
 // BufferizableOpInterface implementation
 //===------------------------------------------------------------------===//

-// Note: ToMemrefOp / ToTensorOp are temporary ops that are inserted at the
+// Note: ToBufferOp / ToTensorOp are temporary ops that are inserted at the
 // bufferization boundary. When One-Shot bufferization is complete, there
 // should be no such ops left over. If `allowUnknownOps` (or after running a
 // partial bufferization pass), such ops may be part of the resulting IR,

mlir/include/mlir/Dialect/Bufferization/Transforms/Bufferize.h
Lines changed: 1 addition & 1 deletion

@@ -50,7 +50,7 @@ LogicalResult bufferizeOp(Operation *op, const BufferizationOptions &options,
 /// Bufferize the signature of `block` and its callers (i.e., ops that have the
 /// given block as a successor). All block argument types are changed to memref
 /// types. All corresponding operands of all callers are wrapped in
-/// bufferization.to_memref ops. All uses of bufferized tensor block arguments
+/// bufferization.to_buffer ops. All uses of bufferized tensor block arguments
 /// are wrapped in bufferization.to_tensor ops.
 ///
 /// It is expected that all callers implement the `BranchOpInterface`.

mlir/include/mlir/Dialect/Bufferization/Transforms/Passes.td
Lines changed: 3 additions & 3 deletions

@@ -47,7 +47,7 @@ def OwnershipBasedBufferDeallocationPass
 Otherwise, the pass that bufferizes the remaining tensors is responsible to
 add the corresponding deallocation operations. Note that this pass does not
 consider any values of tensor type and assumes that MemRef values defined by
-`bufferization.to_memref` do not return ownership and do not have to be
+`bufferization.to_buffer` do not return ownership and do not have to be
 deallocated. `bufferization.to_tensor` operations are handled similarly to
 `bufferization.clone` operations with the exception that the result value is
 not handled because it's a tensor (not a MemRef).
@@ -321,7 +321,7 @@ def OneShotBufferizePass : Pass<"one-shot-bufferize", "ModuleOp"> {

 One-Shot Bufferize will by default reject IR that contains non-bufferizable
 op, i.e., ops that do not implemement BufferizableOpInterface. Such IR can
-be allowed with `allow-unknown-ops=1`. In that case, to_memref and to_tensor
+be allowed with `allow-unknown-ops=1`. In that case, to_buffer and to_tensor
 ops will be generated at the bufferization boundary. This is useful for
 compatibility with existing partial bufferization passes: These can
 bufferize the remaining IR after running One-Shot Bufferize.
@@ -341,7 +341,7 @@ def OneShotBufferizePass : Pass<"one-shot-bufferize", "ModuleOp"> {

 One-Shot Bufferize will by default assume memref types with fully dynamic
 layout maps when a precise layout cannot be inferred. E.g., this is the case
-when wrapping a non-bufferizable op in to_memref/to_tensor ops. This
+when wrapping a non-bufferizable op in to_buffer/to_tensor ops. This
 behavior can be overridden with `unknown-type-conversion`. Valid values are
 `fully-dynamic-layout-map` and `identity-layout-map`.

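To make the pass options above concrete, a rough sketch of input that One-Shot Bufferize rejects by default but accepts with `allow-unknown-ops=1`; the function and the `my_dialect.unbufferizable_op` op are hypothetical, and the exact boundary IR produced may differ:

```mlir
// Something like: mlir-opt -one-shot-bufferize="allow-unknown-ops=1" input.mlir
// The non-bufferizable op stays on tensors and is bridged with
// to_buffer/to_tensor ops at the bufferization boundary.
func.func @unknown_op_example(%t: tensor<8xf32>, %idx: index) -> f32 {
  %0 = "my_dialect.unbufferizable_op"(%t) : (tensor<8xf32>) -> (tensor<8xf32>)
  %1 = tensor.extract %0[%idx] : tensor<8xf32>
  return %1 : f32
}
```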

mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
Lines changed: 8 additions & 8 deletions

@@ -266,9 +266,9 @@ def SparseTensor_ToPositionsOp : SparseTensor_Op<"positions",
 let summary = "Extracts the `level`-th positions array of the `tensor`";
 let description = [{
 Returns the positions array of the tensor's storage at the given
-level. This is similar to the `bufferization.to_memref` operation
+level. This is similar to the `bufferization.to_buffer` operation
 in the sense that it provides a bridge between a tensor world view
-and a bufferized world view. Unlike the `bufferization.to_memref`
+and a bufferized world view. Unlike the `bufferization.to_buffer`
 operation, however, this sparse operation actually lowers into code
 that extracts the positions array from the sparse storage itself
 (either by calling a support library or through direct code).
@@ -295,9 +295,9 @@ def SparseTensor_ToCoordinatesOp : SparseTensor_Op<"coordinates",
 let summary = "Extracts the `level`-th coordinates array of the `tensor`";
 let description = [{
 Returns the coordinates array of the tensor's storage at the given
-level. This is similar to the `bufferization.to_memref` operation
+level. This is similar to the `bufferization.to_buffer` operation
 in the sense that it provides a bridge between a tensor world view
-and a bufferized world view. Unlike the `bufferization.to_memref`
+and a bufferized world view. Unlike the `bufferization.to_buffer`
 operation, however, this sparse operation actually lowers into code
 that extracts the coordinates array from the sparse storage itself
 (either by calling a support library or through direct code).
@@ -326,9 +326,9 @@ def SparseTensor_ToCoordinatesBufferOp : SparseTensor_Op<"coordinates_buffer",
 Returns the linear coordinates array for a sparse tensor with
 a trailing COO region with at least two levels. It is an error
 if the tensor doesn't contain such a COO region. This is similar
-to the `bufferization.to_memref` operation in the sense that it
+to the `bufferization.to_buffer` operation in the sense that it
 provides a bridge between a tensor world view and a bufferized
-world view. Unlike the `bufferization.to_memref` operation,
+world view. Unlike the `bufferization.to_buffer` operation,
 however, this operation actually lowers into code that extracts
 the linear coordinates array from the sparse storage scheme that
 stores the coordinates for the COO region as an array of structures.
@@ -359,9 +359,9 @@ def SparseTensor_ToValuesOp : SparseTensor_Op<"values",
 let description = [{
 Returns the values array of the sparse storage format for the given
 sparse tensor, independent of the actual dimension. This is similar to
-the `bufferization.to_memref` operation in the sense that it provides a bridge
+the `bufferization.to_buffer` operation in the sense that it provides a bridge
 between a tensor world view and a bufferized world view. Unlike the
-`bufferization.to_memref` operation, however, this sparse operation actually
+`bufferization.to_buffer` operation, however, this sparse operation actually
 lowers into code that extracts the values array from the sparse storage
 scheme (either by calling a support library or through direct code).

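For comparison with the `bufferization.to_buffer` behavior referenced in these op descriptions, a hedged sketch of one of the sparse ops; the `#CSR` encoding alias is assumed to be defined elsewhere and the snippet is abbreviated from the op documentation:

```mlir
// Extracts the level-1 positions array of a CSR tensor as a memref view of
// the underlying sparse storage, rather than materializing a new buffer.
%positions = sparse_tensor.positions %csr { level = 1 : index }
    : tensor<4x4xf64, #CSR> to memref<?xindex>
```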

mlir/lib/Conversion/MeshToMPI/MeshToMPI.cpp
Lines changed: 1 addition & 1 deletion

@@ -576,7 +576,7 @@ struct ConvertUpdateHaloOp : public OpConversionPattern<UpdateHaloOp> {
 auto tensorType = MemRefType::get(
 dstShape, cast<ShapedType>(array.getType()).getElementType());
 array =
-rewriter.create<bufferization::ToMemrefOp>(loc, tensorType, array);
+rewriter.create<bufferization::ToBufferOp>(loc, tensorType, array);
 }
 auto rank = cast<ShapedType>(array.getType()).getRank();
 auto opSplitAxes = adaptor.getSplitAxes().getAxes();

mlir/lib/Dialect/Bufferization/IR/BufferizableOpInterface.cpp
Lines changed: 8 additions & 8 deletions

@@ -624,8 +624,8 @@ bool AnalysisState::canOmitTensorCopy(OpOperand &opOperand) const {
 }

 bool AnalysisState::isInPlace(OpOperand &opOperand) const {
-// ToMemrefOps are always in-place.
-if (isa<ToMemrefOp>(opOperand.getOwner()))
+// ToBufferOps are always in-place.
+if (isa<ToBufferOp>(opOperand.getOwner()))
 return true;

 // In the absence of analysis information, OpOperands that bufferize to a
@@ -650,13 +650,13 @@ bool AnalysisState::hasUndefinedContents(OpOperand *opOperand) const {
 return false;
 }

-// bufferization.to_memref is not allowed to change the rank.
-static void ensureToMemrefOpIsValid(Value tensor, Type memrefType) {
+// bufferization.to_buffer is not allowed to change the rank.
+static void ensureToBufferOpIsValid(Value tensor, Type memrefType) {
 #ifndef NDEBUG
 auto rankedTensorType = llvm::dyn_cast<RankedTensorType>(tensor.getType());
 assert((!rankedTensorType || llvm::cast<MemRefType>(memrefType).getRank() ==
 rankedTensorType.getRank()) &&
-"to_memref would be invalid: mismatching ranks");
+"to_buffer would be invalid: mismatching ranks");
 #endif
 }

@@ -671,15 +671,15 @@ FailureOr<Value> bufferization::getBuffer(RewriterBase &rewriter, Value value,
 if (auto toTensorOp = value.getDefiningOp<bufferization::ToTensorOp>())
 return toTensorOp.getMemref();

-// Insert to_memref op.
+// Insert to_buffer op.
 OpBuilder::InsertionGuard g(rewriter);
 setInsertionPointAfter(rewriter, value);
 FailureOr<BaseMemRefType> memrefType = getBufferType(value, options);
 if (failed(memrefType))
 return failure();
-ensureToMemrefOpIsValid(value, *memrefType);
+ensureToBufferOpIsValid(value, *memrefType);
 return rewriter
-.create<bufferization::ToMemrefOp>(value.getLoc(), *memrefType, value)
+.create<bufferization::ToBufferOp>(value.getLoc(), *memrefType, value)
 .getResult();
 }

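A hedged sketch of the effect of `getBuffer` at the IR level (value names and types are illustrative): if the value is already produced by a `to_tensor`, its memref operand is reused; otherwise a `to_buffer` op is materialized right after the value, roughly:

```mlir
// %v has no buffer yet (it is defined by some unknown tensor-producing op).
%v = "test.producer"() : () -> tensor<16xf32>
// getBuffer(%v) inserts a to_buffer immediately after %v and returns its result.
%v_buffer = bufferization.to_buffer %v : tensor<16xf32> to memref<16xf32>
```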
