
[mlir][sparse] update doc and examples of the [dis]assemble operations #88213

Merged
merged 4 commits into llvm:main from bik
Apr 10, 2024

Conversation

aartbik
Contributor

@aartbik aartbik commented Apr 9, 2024

The doc and examples of the [dis]assemble operations did not reflect all the recent changes to the order of the operands. Also clarified some of the text.

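For quick reference, the revised operand order places the per-level position and coordinate arrays before the values array, for both operations. The sketch below is distilled from the updated examples in the diff that follows; the `#COO` encoding declaration and the `tensor.empty` output buffers are assumptions added here for self-containedness, since the doc examples reference them without defining them.

```mlir
// Assumed sorted-COO encoding for the 3x4 example matrix.
#COO = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton)
}>

// Assemble: per-level arrays (positions, coordinates) first, then values.
%pos    = arith.constant dense<[0, 3]>                : tensor<2xindex>
%index  = arith.constant dense<[[0,0], [1,2], [1,3]]> : tensor<3x2xindex>
%values = arith.constant dense<[ 1.1,   2.2,   3.3 ]> : tensor<3xf64>
%s = sparse_tensor.assemble (%pos, %index), %values
   : (tensor<2xindex>, tensor<3x2xindex>), tensor<3xf64> to tensor<3x4xf64, #COO>

// Disassemble mirrors that order in its results: per-level arrays, values,
// per-level lengths, and finally the values length.
%op = tensor.empty() : tensor<2xindex>
%oi = tensor.empty() : tensor<3x2xindex>
%od = tensor.empty() : tensor<3xf64>
%p, %c, %v, %p_len, %c_len, %v_len =
  sparse_tensor.disassemble %s : tensor<3x4xf64, #COO>
     out_lvls(%op, %oi : tensor<2xindex>, tensor<3x2xindex>)
     out_vals(%od : tensor<3xf64>) ->
       (tensor<2xindex>, tensor<3x2xindex>), tensor<3xf64>, (index, index), index
```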
@llvmbot
Member

llvmbot commented Apr 9, 2024

@llvm/pr-subscribers-mlir-sparse

Author: Aart Bik (aartbik)

Changes

The doc and examples of the [dis]assemble operations did not reflect all the recent changes to the order of the operands. Also clarified some of the text.


Full diff: https://github.com/llvm/llvm-project/pull/88213.diff

4 Files Affected:

  • (modified) mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td (+42-45)
  • (modified) mlir/test/Dialect/SparseTensor/invalid.mlir (+3-3)
  • (modified) mlir/test/Dialect/SparseTensor/roundtrip.mlir (+7-7)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_pack.mlir (+8-2)
diff --git a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
index 5df8a176459b7c..b7baf2d81db1e0 100644
--- a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
+++ b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
@@ -61,34 +61,32 @@ def SparseTensor_AssembleOp : SparseTensor_Op<"assemble", [Pure]>,
   let summary = "Returns a sparse tensor assembled from the given values and levels";
 
   let description = [{
-    Assembles the values and per-level coordinate or postion arrays into a sparse tensor.
-    The order and types of provided levels must be consistent with the actual storage
-    layout of the returned sparse tensor described below.
+    Assembles the per-level position and coordinate arrays together with
+    the values arrays into a sparse tensor. The order and types of the
+    provided levels must be consistent with the actual storage layout of
+    the returned sparse tensor described below.
 
-    - `values : tensor<? x V>`
-      supplies the value for each stored element in the sparse tensor.
     - `levels: [tensor<? x iType>, ...]`
-      each supplies the sparse tensor coordinates scheme in the sparse tensor for
-      the corresponding level as specifed by `sparse_tensor::StorageLayout`.
-
-    This operation can be used to assemble a sparse tensor from external
-    sources; e.g., when passing two numpy arrays from Python.
-
-    Disclaimer: This is the user's responsibility to provide input that can be
-    correctly interpreted by the sparsifier, which does not perform
-    any sanity test during runtime to verify data integrity.
+      supplies the sparse tensor position and coordinate arrays
+      of the sparse tensor for the corresponding level as specifed by
+      `sparse_tensor::StorageLayout`.
+    - `values : tensor<? x V>`
+      supplies the values array for the stored elements in the sparse tensor.
 
-    TODO: The returned tensor is allowed (in principle) to have non-identity
-    dimOrdering/higherOrdering mappings.  However, the current implementation
-    does not yet support them.
+    This operation can be used to assemble a sparse tensor from an
+    external source; e.g., by passing numpy arrays from Python. It
+    is the user's responsibility to provide input that can be correctly
+    interpreted by the sparsifier, which does not perform any sanity
+    test to verify data integrity.
 
     Example:
 
     ```mlir
-    %values      = arith.constant dense<[ 1.1,   2.2,   3.3 ]> : tensor<3xf64>
-    %coordinates = arith.constant dense<[[0,0], [1,2], [1,3]]> : tensor<3x2xindex>
-    %st = sparse_tensor.assemble %values, %coordinates
-        : tensor<3xf64>, tensor<3x2xindex> to tensor<3x4xf64, #COO>
+    %pos    = arith.constant dense<[0, 3]>                : tensor<2xindex>
+    %index  = arith.constant dense<[[0,0], [1,2], [1,3]]> : tensor<3x2xindex>
+    %values = arith.constant dense<[ 1.1,   2.2,   3.3 ]> : tensor<3xf64>
+    %s = sparse_tensor.assemble (%pos, %index), %values
+       : (tensor<2xindex>, tensor<3x2xindex>), tensor<3xf64> to tensor<3x4xf64, #COO>
     // yields COO format |1.1, 0.0, 0.0, 0.0|
     //     of 3x4 matrix |0.0, 0.0, 2.2, 3.3|
     //                   |0.0, 0.0, 0.0, 0.0|
@@ -96,8 +94,8 @@ def SparseTensor_AssembleOp : SparseTensor_Op<"assemble", [Pure]>,
   }];
 
   let assemblyFormat =
-    "` ` `(` $levels `)` `,` $values attr-dict"
-    " `:` `(` type($levels) `)` `,` type($values) `to` type($result)";
+    "` ` `(` $levels       `)` `,` $values attr-dict `:`"
+    "    `(` type($levels) `)` `,` type($values) `to` type($result)";
 
   let hasVerifier = 1;
 }
@@ -110,21 +108,20 @@ def SparseTensor_DisassembleOp : SparseTensor_Op<"disassemble", [Pure, SameVaria
                   TensorOf<[AnyType]>:$ret_values,
                   Variadic<AnyIndexingScalarLike>:$lvl_lens,
                   AnyIndexingScalarLike:$val_len)> {
-  let summary = "Returns the (values, coordinates) pair disassembled from the input tensor";
+  let summary = "Copies the values and levels of the given sparse tensor";
 
   let description = [{
     The disassemble operation is the inverse of `sparse_tensor::assemble`.
-    It returns the values and per-level position and coordinate array to the
-    user from the sparse tensor along with the actual length of the memory used
-    in each returned buffer. This operation can be used for returning an
-    disassembled MLIR sparse tensor to frontend; e.g., returning two numpy arrays
-    to Python.
-
-    Disclaimer: This is the user's responsibility to allocate large enough buffers
-    to hold the sparse tensor. The sparsifier simply copies each fields
-    of the sparse tensor into the user-supplied buffer without bound checking.
+    It copies the values and per-level position and coordinate arrays of
+    the given sparse tensor into the user-supplied buffers along with the
+    actual length of the memory used in each returned tensor.
 
-    TODO: the current implementation does not yet support non-identity mappings.
+    This operation can be used for returning a disassembled MLIR sparse tensor;
+    e.g., copying the sparse tensor contents into pre-allocated numpy arrays
+    back to Python. It is the user's responsibility to allocate large enough
+    buffers of the appropriate types to hold the sparse tensor contents.
+    The sparsifier simply copies all fields of the sparse tensor into the
+    user-supplied buffers without any sanity test to verify data integrity.
 
     Example:
 
@@ -132,26 +129,26 @@ def SparseTensor_DisassembleOp : SparseTensor_Op<"disassemble", [Pure, SameVaria
     // input COO format |1.1, 0.0, 0.0, 0.0|
     //    of 3x4 matrix |0.0, 0.0, 2.2, 3.3|
     //                  |0.0, 0.0, 0.0, 0.0|
-    %v, %p, %c, %v_len, %p_len, %c_len =
-        sparse_tensor.disassemble %sp : tensor<3x4xf64, #COO>
-          out_lvls(%op, %oi) : tensor<2xindex>, tensor<3x2xindex>,
-          out_vals(%od) : tensor<3xf64> ->
-          tensor<3xf64>, (tensor<2xindex>, tensor<3x2xindex>), index, (index, index)
-    // %v = arith.constant dense<[ 1.1,   2.2,   3.3 ]> : tensor<3xf64>
+    %p, %c, %v, %p_len, %c_len, %v_len =
+      sparse_tensor.disassemble %s : tensor<3x4xf64, #COO>
+         out_lvls(%op, %oi : tensor<2xindex>, tensor<3x2xindex>)
+         out_vals(%od : tensor<3xf64>) ->
+           (tensor<2xindex>, tensor<3x2xindex>), tensor<3xf64>, (index, index), index
     // %p = arith.constant dense<[ 0,              3 ]> : tensor<2xindex>
     // %c = arith.constant dense<[[0,0], [1,2], [1,3]]> : tensor<3x2xindex>
-    // %v_len = 3
+    // %v = arith.constant dense<[ 1.1,   2.2,   3.3 ]> : tensor<3xf64>
     // %p_len = 2
     // %c_len = 6 (3x2)
+    // %v_len = 3
     ```
   }];
 
   let assemblyFormat =
-    "$tensor `:` type($tensor) "
+    "$tensor attr-dict `:` type($tensor)"
     "`out_lvls` `(` $out_levels `:` type($out_levels) `)` "
-    "`out_vals` `(` $out_values `:` type($out_values) `)` attr-dict"
-    "`->` `(` type($ret_levels) `)` `,` type($ret_values) `,` "
-    "`(` type($lvl_lens) `)` `,` type($val_len)";
+    "`out_vals` `(` $out_values `:` type($out_values) `)` `->`"
+    "`(` type($ret_levels) `)` `,` type($ret_values) `,` "
+    "`(` type($lvl_lens)   `)` `,` type($val_len)";
 
   let hasVerifier = 1;
 }
diff --git a/mlir/test/Dialect/SparseTensor/invalid.mlir b/mlir/test/Dialect/SparseTensor/invalid.mlir
index 18851f29d8eaa3..7f5c05190fc9a2 100644
--- a/mlir/test/Dialect/SparseTensor/invalid.mlir
+++ b/mlir/test/Dialect/SparseTensor/invalid.mlir
@@ -60,7 +60,7 @@ func.func @invalid_pack_mis_position(%values: tensor<6xf64>, %coordinates: tenso
 
 func.func @invalid_unpack_type(%sp: tensor<100xf32, #SparseVector>, %values: tensor<6xf64>, %pos: tensor<2xi32>, %coordinates: tensor<6x1xi32>) {
   // expected-error@+1 {{input/output element-types don't match}}
-  %rv, %rp, %rc, %vl, %pl, %cl = sparse_tensor.disassemble %sp : tensor<100xf32, #SparseVector>
+  %rp, %rc, %rv, %pl, %cl, %vl = sparse_tensor.disassemble %sp : tensor<100xf32, #SparseVector>
                   out_lvls(%pos, %coordinates : tensor<2xi32>, tensor<6x1xi32>)
                   out_vals(%values : tensor<6xf64>)
                   -> (tensor<2xi32>, tensor<6x1xi32>), tensor<6xf64>, (index, index), index
@@ -73,7 +73,7 @@ func.func @invalid_unpack_type(%sp: tensor<100xf32, #SparseVector>, %values: ten
 
 func.func @invalid_unpack_type(%sp: tensor<100x2xf64, #SparseVector>, %values: tensor<6xf64>, %pos: tensor<2xi32>, %coordinates: tensor<6x3xi32>) {
   // expected-error@+1 {{input/output trailing COO level-ranks don't match}}
-  %rv, %rp, %rc, %vl, %pl, %cl = sparse_tensor.disassemble %sp : tensor<100x2xf64, #SparseVector>
+  %rp, %rc, %rv, %pl, %cl, %vl = sparse_tensor.disassemble %sp : tensor<100x2xf64, #SparseVector>
                   out_lvls(%pos, %coordinates : tensor<2xi32>, tensor<6x3xi32> )
                   out_vals(%values : tensor<6xf64>)
                   -> (tensor<2xi32>, tensor<6x3xi32>), tensor<6xf64>, (index, index), index
@@ -86,7 +86,7 @@ func.func @invalid_unpack_type(%sp: tensor<100x2xf64, #SparseVector>, %values: t
 
 func.func @invalid_unpack_mis_position(%sp: tensor<2x100xf64, #CSR>, %values: tensor<6xf64>, %coordinates: tensor<6xi32>) {
   // expected-error@+1 {{inconsistent number of fields between input/output}}
-  %rv, %rc, %vl, %pl = sparse_tensor.disassemble %sp : tensor<2x100xf64, #CSR>
+  %rc, %rv, %cl, %vl = sparse_tensor.disassemble %sp : tensor<2x100xf64, #CSR>
              out_lvls(%coordinates : tensor<6xi32>)
              out_vals(%values : tensor<6xf64>)
              -> (tensor<6xi32>), tensor<6xf64>, (index), index
diff --git a/mlir/test/Dialect/SparseTensor/roundtrip.mlir b/mlir/test/Dialect/SparseTensor/roundtrip.mlir
index a47a3d5119f96d..12f69c1d37b9cd 100644
--- a/mlir/test/Dialect/SparseTensor/roundtrip.mlir
+++ b/mlir/test/Dialect/SparseTensor/roundtrip.mlir
@@ -33,21 +33,21 @@ func.func @sparse_pack(%pos: tensor<2xi32>, %index: tensor<6x1xi32>, %data: tens
 #SparseVector = #sparse_tensor.encoding<{map = (d0) -> (d0 : compressed), crdWidth=32}>
 // CHECK-LABEL: func @sparse_unpack(
 //  CHECK-SAME: %[[T:.*]]: tensor<100xf64, #
-//  CHECK-SAME: %[[OD:.*]]: tensor<6xf64>
-//  CHECK-SAME: %[[OP:.*]]: tensor<2xindex>
-//  CHECK-SAME: %[[OI:.*]]: tensor<6x1xi32>
+//  CHECK-SAME: %[[OP:.*]]: tensor<2xindex>,
+//  CHECK-SAME: %[[OI:.*]]: tensor<6x1xi32>,
+//  CHECK-SAME: %[[OD:.*]]: tensor<6xf64>)
 //       CHECK: %[[P:.*]]:2, %[[D:.*]], %[[PL:.*]]:2, %[[DL:.*]] = sparse_tensor.disassemble %[[T]]
 //       CHECK: return %[[P]]#0, %[[P]]#1, %[[D]]
 func.func @sparse_unpack(%sp : tensor<100xf64, #SparseVector>,
-                         %od : tensor<6xf64>,
                          %op : tensor<2xindex>,
-                         %oi : tensor<6x1xi32>)
+                         %oi : tensor<6x1xi32>,
+                         %od : tensor<6xf64>)
                        -> (tensor<2xindex>, tensor<6x1xi32>, tensor<6xf64>) {
-  %rp, %ri, %rd, %vl, %pl, %cl = sparse_tensor.disassemble %sp : tensor<100xf64, #SparseVector>
+  %rp, %ri, %d, %rpl, %ril, %dl = sparse_tensor.disassemble %sp : tensor<100xf64, #SparseVector>
                   out_lvls(%op, %oi : tensor<2xindex>, tensor<6x1xi32>)
                   out_vals(%od : tensor<6xf64>)
                   -> (tensor<2xindex>, tensor<6x1xi32>), tensor<6xf64>, (index, index), index
-  return %rp, %ri, %rd : tensor<2xindex>, tensor<6x1xi32>, tensor<6xf64>
+  return %rp, %ri, %d : tensor<2xindex>, tensor<6x1xi32>, tensor<6xf64>
 }
 
 // -----
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_pack.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_pack.mlir
index 7ecccad212cdbe..5415625ff05d6d 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_pack.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_pack.mlir
@@ -231,7 +231,7 @@ module {
     %od = tensor.empty() : tensor<3xf64>
     %op = tensor.empty() : tensor<2xi32>
     %oi = tensor.empty() : tensor<3x2xi32>
-    %p, %i, %d, %dl, %pl, %il = sparse_tensor.disassemble %s5 : tensor<10x10xf64, #SortedCOOI32>
+    %p, %i, %d, %pl, %il, %dl = sparse_tensor.disassemble %s5 : tensor<10x10xf64, #SortedCOOI32>
                  out_lvls(%op, %oi : tensor<2xi32>, tensor<3x2xi32>)
                  out_vals(%od : tensor<3xf64>)
                  -> (tensor<2xi32>, tensor<3x2xi32>), tensor<3xf64>, (i32, i64), index
@@ -244,10 +244,13 @@ module {
     %vi = vector.transfer_read %i[%c0, %c0], %i0 : tensor<3x2xi32>, vector<3x2xi32>
     vector.print %vi : vector<3x2xi32>
 
+    // CHECK-NEXT: 3
+    vector.print %dl : index
+
     %d_csr = tensor.empty() : tensor<4xf64>
     %p_csr = tensor.empty() : tensor<3xi32>
     %i_csr = tensor.empty() : tensor<3xi32>
-    %rp_csr, %ri_csr, %rd_csr, %ld_csr, %lp_csr, %li_csr = sparse_tensor.disassemble %csr : tensor<2x2xf64, #CSR>
+    %rp_csr, %ri_csr, %rd_csr, %lp_csr, %li_csr, %ld_csr = sparse_tensor.disassemble %csr : tensor<2x2xf64, #CSR>
                  out_lvls(%p_csr, %i_csr : tensor<3xi32>, tensor<3xi32>)
                  out_vals(%d_csr : tensor<4xf64>)
                  -> (tensor<3xi32>, tensor<3xi32>), tensor<4xf64>, (i32, i64), index
@@ -256,6 +259,9 @@ module {
     %vd_csr = vector.transfer_read %rd_csr[%c0], %f0 : tensor<4xf64>, vector<3xf64>
     vector.print %vd_csr : vector<3xf64>
 
+    // CHECK-NEXT: 3
+    vector.print %ld_csr : index
+
     %bod = tensor.empty() : tensor<6xf64>
     %bop = tensor.empty() : tensor<4xindex>
     %boi = tensor.empty() : tensor<6x2xindex>

@llvmbot
Member

llvmbot commented Apr 9, 2024

@llvm/pr-subscribers-mlir

@aartbik aartbik merged commit f388a3a into llvm:main Apr 10, 2024
4 checks passed
@aartbik aartbik deleted the bik branch April 10, 2024 16:42
Labels: mlir:sparse (Sparse compiler in MLIR), mlir