Conversation

srcarroll
Contributor

@srcarroll srcarroll commented Feb 11, 2024

A transform.structured.flatten_elementwise op is implemented for flattening the iteration space and (applicable) operands/results to a single dimension.

@MaheshRavishankar
Contributor

This seems to be a duplicate of

collapseOpIterationDims(LinalgType op,

@srcarroll
Contributor Author

Ah, interesting. I wasn't aware of this. I wouldn't say this whole PR is a duplicate though; the purpose is to implement a transform op. I can refactor my changes to reuse collapseOpIterationDims, but right now it only supports collapsing linalg.generic and linalg.copy. I could refactor that function to cover more if you think that's a good idea.

@MaheshRavishankar
Contributor

The transformation should work on any LinalgOp. Anything that works on linalg.generic will directly work on LinalgOp; you just need to change the type of the operation from GenericOp to LinalgOp and it should just work.

@srcarroll
Contributor Author

srcarroll commented Feb 14, 2024

Yes, I understand. What I'm saying is that collapseOpIterationDims is just a helper function, not a transform op, as in transform.structured.flatten; this PR defines such a transform op. I do agree that the helper function can be refactored and reused by my op implementation, but I'm just pointing out that it does indeed need a refactor for me to use it. And I'm happy to do so, to be clear.

@MaheshRavishankar
Contributor

> Yes I understand. What I'm saying is that this is just a helper function, not a transform op, as in transform.structured.flatten. This PR defines such a transform op. I do agree that this can be refactored and reused by my op implementation. But I'm just pointing out that it does indeed need a refactor for me to use it. And I'm happy to do so, to be clear

Sounds good. I am not sure how much refactoring is needed. This entry point has been used downstream without too many issues, but happy to see what you want to do. If it does change, this would be a breaking change.

@srcarroll srcarroll force-pushed the linalg-elementwise-flatten-transform branch from 9d0b35a to 780394c on February 14, 2024 05:23
@srcarroll
Contributor Author

srcarroll commented Feb 14, 2024

I made a first attempt at a refactor and it wasn't so bad. I tried to make the fewest changes possible, but it's still a WIP, and I would appreciate feedback on the design aspects.

Contributor

@MaheshRavishankar left a comment

Thanks! Just a few comments, but this looks along the right path.

// Get the iterator types for the operand.
SmallVector<utils::IteratorType> iteratorTypes =
    getCollapsedOpIteratorTypes(op.getIteratorTypesArray(), collapsingInfo);
Operation *collapsedOp = clone(
Contributor

Why is the operation being cloned?

Contributor

Follow-up: I see why you are cloning it. This is an interesting way of doing it. Does it work?

Contributor Author

It does.


// TODO: Find a more general way to determine if op requires explicit
// indexing_maps and iterator_types
if (isa<linalg::GenericOp>(op)) {
Contributor

I don't think you need to do this. All LinalgOps have getIteratorTypesArray (or should have). Also, if you don't clone the op, you don't need to set the indexing maps etc. explicitly.

Contributor Author

Yeah, I know all LinalgOps have these attrs and the corresponding getters. But most of the structured ops don't have an explicit attr for indexing maps and iterator types, so if I unconditionally try to set them, I will end up with named ops carrying additional attrs that aren't at all associated with the implicit ones. I'm sure there's a way to do this correctly; I just don't know it.

I'm not sure how any of this would work without cloning the op (other than having a switch statement checking every single linalg op that exists). Of course, the alternative is to just always convert to a linalg.generic op, but I don't like that. I much prefer keeping the named op, and that's why I chose to clone.

Contributor Author

Hmm, there is this kMemoizedIndexingMapsAttrName that's for named ops, so I could check if the op has this and then replace it, and otherwise replace the explicit indexing maps. But I don't see a similar attr name for iterator types.

Contributor Author

It would be nice if there were general setIndexingMaps and setIteratorTypes methods for arbitrary LinalgOps.

Contributor

One alternative is that you could add an interface method to LinalgOp that allows you to clone with given indexing_maps and iterator_types and just uses the region from the original operation. Each specific op could then implement its own version (including linalg.generic).

Contributor Author

Not a bad idea. I'd have to see what it looks like to understand the implications fully, but I sense that it might be too demanding to require all ops to implement that. Would this work for the ops generated from core_named_ops.py (can't remember the exact file name right now)?

Contributor

To start with, you could just have the method return a LogicalResult, have the default implementation return failure(), and only add the implementation for ops that you want to support. Those can be filled in over time.

On second thought, maybe instead of going full interface method, just add a templated method in this file for cloning? The default can handle named ops, and the generic op can get its own specialization.

Contributor Author

Thanks for the feedback. I won't be able to get back to this until this weekend or next, but I'll take a look at your suggestion and keep you updated.

// Get the iterator types for the operand.
SmallVector<utils::IteratorType> iteratorTypes =
    getCollapsedOpIteratorTypes(op.getIteratorTypesArray(), collapsingInfo);
if (op->hasAttr("indexing_maps")) {
Contributor

This seems iffy to me. This is looking into internal implementation details of the LinalgOp representation in C++ and is hard to maintain. Let me give you an example: I have no idea what the indexing maps or iterator types are stored as; I just use the utility methods on ops to get this information. Side-stepping that can introduce silent bugs if something changes in the implementation of the operation.

I understand you are trying to generalize these things. One way to do that would be to define a method

template <typename OpTy>
Operation *cloneWithIndexingMapsIteratorTypesAndOperands(
    RewriterBase &rewriter, OpTy origOp, TypeRange resultTypes,
    ArrayRef<AffineMap> indexingMaps,
    ArrayRef<utils::IteratorType> iteratorTypes, ValueRange inputOperands,
    ValueRange outputOperands) {
  return nullptr;
}

template <>
Operation *cloneWithIndexingMapsIteratorTypesAndOperands<LinalgOp>(
    RewriterBase &rewriter, LinalgOp origOp, TypeRange resultTypes,
    ArrayRef<AffineMap> indexingMaps,
    ArrayRef<utils::IteratorType> iteratorTypes, ValueRange inputOperands,
    ValueRange outputOperands) {
  // Named ops carry their indexing maps and iterator types implicitly, so a
  // plain clone with the new operand and result types is enough.
  return clone(rewriter, origOp, resultTypes, inputOperands, outputOperands);
}

template <>
Operation *cloneWithIndexingMapsIteratorTypesAndOperands<GenericOp>(
    RewriterBase &rewriter, GenericOp origOp, TypeRange resultTypes,
    ArrayRef<AffineMap> indexingMaps,
    ArrayRef<utils::IteratorType> iteratorTypes, ValueRange inputOperands,
    ValueRange outputOperands) {
  // linalg.generic stores the indexing maps and iterator types explicitly,
  // so build a new op with the collapsed versions and move the original
  // region over.
  Operation *collapsedOp = rewriter.create<linalg::GenericOp>(
      origOp.getLoc(), resultTypes, inputOperands, outputOperands,
      indexingMaps, iteratorTypes,
      [](OpBuilder &builder, Location loc, ValueRange args) {});
  Block *origOpBlock = &origOp->getRegion(0).front();
  Block *collapsedOpBlock = &collapsedOp->getRegion(0).front();
  rewriter.mergeBlocks(origOpBlock, collapsedOpBlock,
                       collapsedOpBlock->getArguments());
  return collapsedOp;
}

Then you can

if (auto genericOp = dyn_cast<GenericOp>(op)) {
  cloneWithIndexingMapsIteratorTypesAndOperands(rewriter, genericOp, ....);
} else {
  cloneWithIndexingMapsIteratorTypesAndOperands(rewriter, cast<LinalgOp>(op), ...);
}

I think that gives you what you want and doesn't leak internal implementation details of the op.

Contributor Author

Yeah, I totally agree this recent change is garbage; shouldn't have even pushed it :).

I'm going to go with your suggestion. However, just so I understand correctly: this is functionally equivalent to what I had before (checking if generic), but you are suggesting it for cleanliness and maintainability?

Contributor

Yup. We don't need to look at whether an attribute is present or not. So a bit cleaner?

Contributor Author

@srcarroll Feb 27, 2024

Agreed. Do you think these cloneWithIndexingMapsIteratorTypesAndOperands functions should be static functions specific to this file, or part of some lib?

Contributor Author

I pushed a modified version of your suggestion. First, I templated the input and output op to enforce that cloning produces the same named op. Second, I didn't want to have both the collapsed operands and collapsingInfo as arguments to a function, because that leaves room for inconsistency, and the collapsingInfo is needed for the generic's indexing map. So I did more refactoring with that in mind.

Contributor

@MaheshRavishankar left a comment

Nice! Thanks a lot!

@srcarroll srcarroll marked this pull request as ready for review February 28, 2024 07:40
@srcarroll srcarroll merged commit b6f4dd9 into llvm:main Feb 28, 2024
@srcarroll srcarroll deleted the linalg-elementwise-flatten-transform branch June 5, 2024 02:59
mylai-mtk pushed a commit to mylai-mtk/llvm-project that referenced this pull request Jul 12, 2024
…llvm#81431)

A `transform.structured.flatten_elementwise` op is implemented for
flattening the iteration space and (applicable) operands/results to a
single dimension.