Decompose aten.fmod into aten.mul,sub,div etc. #3689

Merged
merged 3 commits into llvm:main on Sep 9, 2024

Conversation

@srinathava (Contributor) commented on Sep 5, 2024

As titled: create a new decomposition of aten.fmod.Tensor into aten.div, aten.trunc, aten.mul, and aten.sub. Note that aten.trunc is used only for floating-point operands; it is further decomposed into aten.where etc. by other existing decompositions.
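For context, the decomposition relies on the identity fmod(a, b) = a - trunc(a / b) * b, where trunc rounds toward zero, so the result keeps the sign of a. Below is a minimal, self-contained C++ sketch of that identity using plain <cmath> functions; it is illustrative only, not the actual torch-mlir decomposition code:

#include <cassert>
#include <cmath>

// fmod(a, b) == a - trunc(a / b) * b for floating-point inputs.
// std::trunc rounds toward zero, so the result takes the sign of a.
double decomposedFmod(double a, double b) {
  double n = std::trunc(a / b); // aten.div followed by aten.trunc
  return a - n * b;             // aten.mul followed by aten.sub
}

// Integer case: C++ integer division already truncates toward zero,
// which mirrors the signed-integer path (no trunc op needed).
int decomposedFmodInt(int a, int b) { return a - (a / b) * b; }

int main() {
  // All values here are exactly representable, so == comparison is safe.
  assert(decomposedFmod(5.5, 2.0) == std::fmod(5.5, 2.0));   // 1.5
  assert(decomposedFmod(-5.5, 2.0) == std::fmod(-5.5, 2.0)); // -1.5
  assert(decomposedFmodInt(-7, 3) == -7 % 3);                // -1
  return 0;
}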

This decomposition makes the TOSA backend pass for a simple model containing aten.fmod, but it makes the stablehlo backend fail. For now, we disallow this decomposition for stablehlo.

@srinathava marked this pull request as ready for review on September 8, 2024 at 15:16
@sjain-stanford (Member) left a comment:

LGTM

@sjain-stanford merged commit 0a788e0 into llvm:main on Sep 9, 2024
3 checks passed
@vivekkhandelwal1 (Collaborator) left a comment:

@srinathava Since this commit adds the decomposition for the AtenFmodTensorOp, the existing lowering for the same op should be removed from here:

if (auto fmod = dyn_cast<AtenFmodTensorOp>(op)) {
  Type newResultType =
      cast<RankedTensorType>(converter->convertType(fmod.getType()))
          .getElementType();
  Value self = convertScalarToDtype(b, loc, payloadArgs[0], newResultType);
  Value other = convertScalarToDtype(b, loc, payloadArgs[1], newResultType);
  Value result;
  if (isa<mlir::FloatType>(newResultType)) {
    Value n = b.create<arith::DivFOp>(loc, self, other);
    n = b.create<math::TruncOp>(loc, n);
    Value n_y = b.create<arith::MulFOp>(loc, n, other);
    result = b.create<arith::SubFOp>(loc, self, n_y);
  } else if (isa<mlir::IntegerType>(newResultType)) {
    Value n = b.create<arith::DivSIOp>(loc, self, other);
    Value n_y = b.create<arith::MulIOp>(loc, n, other);
    result = b.create<arith::SubIOp>(loc, self, n_y);
  } else {
    fmod.emitError("Unsupported type encountered for AtenFmodTensorOp.");
  }
  return result;
}

@srinathava (Contributor, Author) replied:

@vivekkhandelwal1, thanks for pointing that out. I'll send out a follow-up PR for the cleanup shortly.

sjain-stanford pushed a commit that referenced this pull request Sep 13, 2024
Follow-up cleanup for [this PR](#3689), which introduced a decomposition for `aten.fmod.Tensor`. This means that the lowering for this operator in linalg is no longer needed.

Thanks to @vivekkhandelwal1 for pointing this out.

---------

Co-authored-by: Srinath Avadhanula <srinath.avadhanula@getcruise.com>