
[mlir][LLVM] Delete getFixedVectorType and getScalableVectorType #135051


Merged · 1 commit · Apr 10, 2025

Conversation

matthias-springer
Member

matthias-springer commented Apr 9, 2025

The LLVM dialect no longer has its own vector types. It uses mlir::VectorType everywhere. Remove LLVM::getFixedVectorType/getScalableVectorType and use VectorType::get instead. This commit addresses a comment on the PR that deleted the LLVM vector types.

@llvmbot
Member

llvmbot commented Apr 9, 2025

@llvm/pr-subscribers-mlir-gpu

@llvm/pr-subscribers-mlir-nvgpu

Author: Matthias Springer (matthias-springer)

Changes

The LLVM dialect no longer has its own vector types. It uses mlir::VectorType everywhere. Remove LLVM::getFixedVectorType/getScalableVectorType and use VectorType::get instead. This commit addresses a comment on the PR that deleted the LLVM vector types.

Depends on #134981.


Full diff: https://github.com/llvm/llvm-project/pull/135051.diff

7 Files Affected:

  • (modified) mlir/docs/Dialects/LLVM.md (-4)
  • (modified) mlir/include/mlir/Dialect/LLVMIR/LLVMTypes.h (-8)
  • (modified) mlir/lib/Conversion/NVGPUToNVVM/NVGPUToNVVM.cpp (+16-17)
  • (modified) mlir/lib/Dialect/LLVMIR/IR/LLVMTypes.cpp (-12)
  • (modified) mlir/lib/Dialect/LLVMIR/IR/NVVMDialect.cpp (+14-9)
  • (modified) mlir/lib/Dialect/NVGPU/Utils/MMAUtils.cpp (+10-14)
  • (modified) mlir/lib/Target/LLVMIR/TypeFromLLVM.cpp (+5-4)
diff --git a/mlir/docs/Dialects/LLVM.md b/mlir/docs/Dialects/LLVM.md
index 468f69c419071..4b5d518ca4eab 100644
--- a/mlir/docs/Dialects/LLVM.md
+++ b/mlir/docs/Dialects/LLVM.md
@@ -336,10 +336,6 @@ compatible with the LLVM dialect:
     vector type compatible with the LLVM dialect;
 -   `llvm::ElementCount LLVM::getVectorNumElements(Type)` - returns the number
     of elements in any vector type compatible with the LLVM dialect;
--   `Type LLVM::getFixedVectorType(Type, unsigned)` - gets a fixed vector type
-    with the given element type and size; the resulting type is either a
-    built-in or an LLVM dialect vector type depending on which one supports the
-    given element type.
 
 #### Examples of Compatible Vector Types
 
diff --git a/mlir/include/mlir/Dialect/LLVMIR/LLVMTypes.h b/mlir/include/mlir/Dialect/LLVMIR/LLVMTypes.h
index a2a76c49a2bda..17561f79d135a 100644
--- a/mlir/include/mlir/Dialect/LLVMIR/LLVMTypes.h
+++ b/mlir/include/mlir/Dialect/LLVMIR/LLVMTypes.h
@@ -126,14 +126,6 @@ Type getVectorType(Type elementType, unsigned numElements,
 /// and length.
 Type getVectorType(Type elementType, const llvm::ElementCount &numElements);
 
-/// Creates an LLVM dialect-compatible type with the given element type and
-/// length.
-Type getFixedVectorType(Type elementType, unsigned numElements);
-
-/// Creates an LLVM dialect-compatible type with the given element type and
-/// length.
-Type getScalableVectorType(Type elementType, unsigned numElements);
-
 /// Returns the size of the given primitive LLVM dialect-compatible type
 /// (including vectors) in bits, for example, the size of i16 is 16 and
 /// the size of vector<4xi16> is 64. Returns 0 for non-primitive
diff --git a/mlir/lib/Conversion/NVGPUToNVVM/NVGPUToNVVM.cpp b/mlir/lib/Conversion/NVGPUToNVVM/NVGPUToNVVM.cpp
index 51507c6507b69..69fa62c8196e4 100644
--- a/mlir/lib/Conversion/NVGPUToNVVM/NVGPUToNVVM.cpp
+++ b/mlir/lib/Conversion/NVGPUToNVVM/NVGPUToNVVM.cpp
@@ -61,13 +61,13 @@ static Value truncToI32(ImplicitLocOpBuilder &b, Value value) {
 static Type inferIntrinsicResultType(Type vectorResultType) {
   MLIRContext *ctx = vectorResultType.getContext();
   auto a = cast<LLVM::LLVMArrayType>(vectorResultType);
-  auto f16x2Ty = LLVM::getFixedVectorType(Float16Type::get(ctx), 2);
+  auto f16x2Ty = VectorType::get(2, Float16Type::get(ctx));
   auto i32Ty = IntegerType::get(ctx, 32);
-  auto i32x2Ty = LLVM::getFixedVectorType(i32Ty, 2);
+  auto i32x2Ty = VectorType::get(2, i32Ty);
   Type f64Ty = Float64Type::get(ctx);
-  Type f64x2Ty = LLVM::getFixedVectorType(f64Ty, 2);
+  Type f64x2Ty = VectorType::get(2, f64Ty);
   Type f32Ty = Float32Type::get(ctx);
-  Type f32x2Ty = LLVM::getFixedVectorType(f32Ty, 2);
+  Type f32x2Ty = VectorType::get(2, f32Ty);
   if (a.getElementType() == f16x2Ty) {
     return LLVM::LLVMStructType::getLiteral(
         ctx, SmallVector<Type>(a.getNumElements(), f16x2Ty));
@@ -85,7 +85,7 @@ static Type inferIntrinsicResultType(Type vectorResultType) {
         ctx,
         SmallVector<Type>(static_cast<size_t>(a.getNumElements()) * 2, f32Ty));
   }
-  if (a.getElementType() == LLVM::getFixedVectorType(f32Ty, 1)) {
+  if (a.getElementType() == VectorType::get(1, f32Ty)) {
     return LLVM::LLVMStructType::getLiteral(
         ctx, SmallVector<Type>(static_cast<size_t>(a.getNumElements()), f32Ty));
   }
@@ -106,11 +106,11 @@ static Value convertIntrinsicResult(Location loc, Type intrinsicResultType,
   Type i32Ty = rewriter.getI32Type();
   Type f32Ty = rewriter.getF32Type();
   Type f64Ty = rewriter.getF64Type();
-  Type f16x2Ty = LLVM::getFixedVectorType(rewriter.getF16Type(), 2);
-  Type i32x2Ty = LLVM::getFixedVectorType(i32Ty, 2);
-  Type f64x2Ty = LLVM::getFixedVectorType(f64Ty, 2);
-  Type f32x2Ty = LLVM::getFixedVectorType(f32Ty, 2);
-  Type f32x1Ty = LLVM::getFixedVectorType(f32Ty, 1);
+  Type f16x2Ty = VectorType::get(2, rewriter.getF16Type());
+  Type i32x2Ty = VectorType::get(2, i32Ty);
+  Type f64x2Ty = VectorType::get(2, f64Ty);
+  Type f32x2Ty = VectorType::get(2, f32Ty);
+  Type f32x1Ty = VectorType::get(1, f32Ty);
 
   auto makeConst = [&](int32_t index) -> Value {
     return rewriter.create<LLVM::ConstantOp>(loc, IntegerType::get(ctx, 32),
@@ -181,9 +181,9 @@ static SmallVector<Value> unpackOperandVector(ImplicitLocOpBuilder &b,
   Type f64Ty = b.getF64Type();
   Type f32Ty = b.getF32Type();
   Type i64Ty = b.getI64Type();
-  Type i8x4Ty = LLVM::getFixedVectorType(b.getI8Type(), 4);
-  Type i4x8Ty = LLVM::getFixedVectorType(b.getIntegerType(4), 8);
-  Type f32x1Ty = LLVM::getFixedVectorType(f32Ty, 1);
+  Type i8x4Ty = VectorType::get(4, b.getI8Type());
+  Type i4x8Ty = VectorType::get(8, b.getIntegerType(4));
+  Type f32x1Ty = VectorType::get(1, f32Ty);
   auto arrayTy = cast<LLVM::LLVMArrayType>(operand.getType());
 
   for (unsigned i = 0, e = arrayTy.getNumElements(); i < e; ++i) {
@@ -268,8 +268,8 @@ struct MmaLdMatrixOpToNVVM : public ConvertOpToLLVMPattern<nvgpu::LdMatrixOp> {
     if (!vectorResultType) {
       return failure();
     }
-    Type innerVectorType = LLVM::getFixedVectorType(
-        vectorResultType.getElementType(), vectorResultType.getDimSize(1));
+    Type innerVectorType = VectorType::get(vectorResultType.getDimSize(1),
+                                           vectorResultType.getElementType());
 
     int64_t num32BitRegs = vectorResultType.getDimSize(0);
 
@@ -627,8 +627,7 @@ struct NVGPUMmaSparseSyncLowering
 
     // Bitcast the sparse metadata from vector<2xf16> to an i32.
     Value sparseMetadata = adaptor.getSparseMetadata();
-    if (sparseMetadata.getType() !=
-        LLVM::getFixedVectorType(rewriter.getI16Type(), 2))
+    if (sparseMetadata.getType() != VectorType::get(2, rewriter.getI16Type()))
       return op->emitOpError() << "Expected metadata type to be LLVM "
                                   "VectorType of 2 i16 elements";
     sparseMetadata =
diff --git a/mlir/lib/Dialect/LLVMIR/IR/LLVMTypes.cpp b/mlir/lib/Dialect/LLVMIR/IR/LLVMTypes.cpp
index b3c2a29309528..29cf38c1fefea 100644
--- a/mlir/lib/Dialect/LLVMIR/IR/LLVMTypes.cpp
+++ b/mlir/lib/Dialect/LLVMIR/IR/LLVMTypes.cpp
@@ -851,18 +851,6 @@ Type mlir::LLVM::getVectorType(Type elementType,
                        /*isScalable=*/false);
 }
 
-Type mlir::LLVM::getFixedVectorType(Type elementType, unsigned numElements) {
-  assert(VectorType::isValidElementType(elementType) &&
-         "incompatible element type");
-  return VectorType::get(numElements, elementType);
-}
-
-Type mlir::LLVM::getScalableVectorType(Type elementType, unsigned numElements) {
-  // LLVM vectors are always 1-D, hence only 1 bool is required to mark it as
-  // scalable/non-scalable.
-  return VectorType::get(numElements, elementType, /*scalableDims=*/true);
-}
-
 llvm::TypeSize mlir::LLVM::getPrimitiveTypeSizeInBits(Type type) {
   assert(isCompatibleType(type) &&
          "expected a type compatible with the LLVM dialect");
diff --git a/mlir/lib/Dialect/LLVMIR/IR/NVVMDialect.cpp b/mlir/lib/Dialect/LLVMIR/IR/NVVMDialect.cpp
index 09bff6101edd3..b9d6952f67671 100644
--- a/mlir/lib/Dialect/LLVMIR/IR/NVVMDialect.cpp
+++ b/mlir/lib/Dialect/LLVMIR/IR/NVVMDialect.cpp
@@ -144,7 +144,7 @@ LogicalResult BulkStoreOp::verify() {
 std::optional<mlir::NVVM::MMATypes>
 MmaOp::inferOperandMMAType(Type operandElType, bool isAccumulator) {
   auto half2Type =
-      LLVM::getFixedVectorType(Float16Type::get(operandElType.getContext()), 2);
+      VectorType::get(2, Float16Type::get(operandElType.getContext()));
   if (operandElType.isF64())
     return NVVM::MMATypes::f64;
   if (operandElType.isF16() || operandElType == half2Type)
@@ -243,7 +243,8 @@ void MmaOp::print(OpAsmPrinter &p) {
   p.printOptionalAttrDict(this->getOperation()->getAttrs(), ignoreAttrNames);
 
   // Print the types of the operands and result.
-  p << " : " << "(";
+  p << " : "
+    << "(";
   llvm::interleaveComma(SmallVector<Type, 3>{frags[0].regs[0].getType(),
                                              frags[1].regs[0].getType(),
                                              frags[2].regs[0].getType()},
@@ -404,7 +405,7 @@ LogicalResult MmaOp::verify() {
   MLIRContext *context = getContext();
   auto f16Ty = Float16Type::get(context);
   auto i32Ty = IntegerType::get(context, 32);
-  auto f16x2Ty = LLVM::getFixedVectorType(f16Ty, 2);
+  auto f16x2Ty = VectorType::get(2, f16Ty);
   auto f32Ty = Float32Type::get(context);
   auto f16x2x4StructTy = LLVM::LLVMStructType::getLiteral(
       context, {f16x2Ty, f16x2Ty, f16x2Ty, f16x2Ty});
@@ -506,7 +507,7 @@ LogicalResult MmaOp::verify() {
       expectedA.emplace_back(1, f64Ty);
       expectedB.emplace_back(1, f64Ty);
       expectedC.emplace_back(2, f64Ty);
-      // expectedC.emplace_back(1, LLVM::getFixedVectorType(f64Ty, 2));
+      // expectedC.emplace_back(1, VectorType::get(2, f64Ty));
       expectedResult.emplace_back(LLVM::LLVMStructType::getLiteral(
           context, SmallVector<Type>(2, f64Ty)));
       allowedShapes.push_back({8, 8, 4});
@@ -992,7 +993,9 @@ std::string NVVM::WgmmaMmaAsyncOp::getPtx() {
   ss << "},";
   // Need to map read/write registers correctly.
   regCnt = (regCnt * 2);
-  ss << " $" << (regCnt) << "," << " $" << (regCnt + 1) << "," << " p";
+  ss << " $" << (regCnt) << ","
+     << " $" << (regCnt + 1) << ","
+     << " p";
   if (getTypeD() != WGMMATypes::s32) {
     ss << ", $" << (regCnt + 3) << ",  $" << (regCnt + 4);
   }
@@ -1219,7 +1222,7 @@ llvm::Intrinsic::ID CpAsyncBulkTensorPrefetchOp::getIntrinsicID(int tensorDims,
             : CP_ASYNC_BULK_TENSOR_REDUCE_MODE(op, dim, tile)
 
 #define GET_CP_ASYNC_BULK_TENSOR_ID(op, dims, is_im2col)                       \
-  [&]() -> auto {                                                              \
+  [&]() -> auto{                                                               \
     switch (dims) {                                                            \
     case 1:                                                                    \
       return CP_ASYNC_BULK_TENSOR_REDUCE_MODE(op, 1, tile);                    \
@@ -1234,7 +1237,8 @@ llvm::Intrinsic::ID CpAsyncBulkTensorPrefetchOp::getIntrinsicID(int tensorDims,
     default:                                                                   \
       llvm_unreachable("Invalid TensorDim in CpAsyncBulkTensorReduceOp.");     \
     }                                                                          \
-  }()
+  }                                                                            \
+  ()
 
 llvm::Intrinsic::ID CpAsyncBulkTensorReduceOp::getIntrinsicID(
     int tensorDims, NVVM::TMAReduxKind kind, bool isIm2Col) {
@@ -1364,13 +1368,14 @@ Tcgen05CommitOp::getIntrinsicIDAndArgs(Operation &op,
           : TCGEN05_CP_IMPL(shape_mc, src_fmt, _cg1)
 
 #define GET_TCGEN05_CP_ID(shape_mc, src_fmt, is_2cta)                          \
-  [&]() -> auto {                                                              \
+  [&]() -> auto{                                                               \
     if (src_fmt == Tcgen05CpSrcFormat::B6x16_P32)                              \
       return TCGEN05_CP_2CTA(shape_mc, _b6x16_p32, is_2cta);                   \
     if (src_fmt == Tcgen05CpSrcFormat::B4x16_P64)                              \
       return TCGEN05_CP_2CTA(shape_mc, _b4x16_p64, is_2cta);                   \
     return TCGEN05_CP_2CTA(shape_mc, , is_2cta);                               \
-  }()
+  }                                                                            \
+  ()
 
 llvm::Intrinsic::ID Tcgen05CpOp::getIntrinsicID(Operation &op) {
   auto curOp = cast<NVVM::Tcgen05CpOp>(op);
diff --git a/mlir/lib/Dialect/NVGPU/Utils/MMAUtils.cpp b/mlir/lib/Dialect/NVGPU/Utils/MMAUtils.cpp
index 39cca7d363e0d..e80360aa08ed5 100644
--- a/mlir/lib/Dialect/NVGPU/Utils/MMAUtils.cpp
+++ b/mlir/lib/Dialect/NVGPU/Utils/MMAUtils.cpp
@@ -103,16 +103,15 @@ nvgpu::getMmaSyncRegisterType(const WarpMatrixInfo &type) {
 
   Type elType = type.vectorType.getElementType();
   if (elType.isF16()) {
-    return FragmentElementInfo{
-        LLVM::getFixedVectorType(Float16Type::get(ctx), 2), 2, 32,
-        inferNumRegistersPerMatrixFragment(type)};
+    return FragmentElementInfo{VectorType::get(2, Float16Type::get(ctx)), 2, 32,
+                               inferNumRegistersPerMatrixFragment(type)};
   }
 
   // f64 operand
   Type f64Ty = Float64Type::get(ctx);
   if (elType.isF64()) {
     return isAccum
-               ? FragmentElementInfo{LLVM::getFixedVectorType(f64Ty, 2), 2, 128,
+               ? FragmentElementInfo{VectorType::get(2, f64Ty), 2, 128,
                                      inferNumRegistersPerMatrixFragment(type)}
                : FragmentElementInfo{f64Ty, 1, 64,
                                      inferNumRegistersPerMatrixFragment(type)};
@@ -120,30 +119,27 @@ nvgpu::getMmaSyncRegisterType(const WarpMatrixInfo &type) {
 
   // int8 operand
   if (elType.isInteger(8)) {
-    return FragmentElementInfo{
-        LLVM::getFixedVectorType(IntegerType::get(ctx, 8), 4), 4, 32,
-        inferNumRegistersPerMatrixFragment(type)};
+    return FragmentElementInfo{VectorType::get(4, IntegerType::get(ctx, 8)), 4,
+                               32, inferNumRegistersPerMatrixFragment(type)};
   }
 
   // int4 operand
   if (elType.isInteger(4)) {
-    return FragmentElementInfo{
-        LLVM::getFixedVectorType(IntegerType::get(ctx, 4), 8), 8, 32,
-        inferNumRegistersPerMatrixFragment(type)};
+    return FragmentElementInfo{VectorType::get(8, IntegerType::get(ctx, 4)), 8,
+                               32, inferNumRegistersPerMatrixFragment(type)};
   }
 
   // Integer 32bit acc operands
   if (elType.isInteger(32)) {
-    return FragmentElementInfo{
-        LLVM::getFixedVectorType(IntegerType::get(ctx, 32), 2), 2, 64,
-        inferNumRegistersPerMatrixFragment(type)};
+    return FragmentElementInfo{VectorType::get(2, IntegerType::get(ctx, 32)), 2,
+                               64, inferNumRegistersPerMatrixFragment(type)};
   }
 
   // Floating point 32bit operands
   if (elType.isF32()) {
     Type f32Ty = Float32Type::get(ctx);
     return isAccum
-               ? FragmentElementInfo{LLVM::getFixedVectorType(f32Ty, 2), 2, 64,
+               ? FragmentElementInfo{VectorType::get(2, f32Ty), 2, 64,
                                      inferNumRegistersPerMatrixFragment(type)}
                : FragmentElementInfo{f32Ty, 1, 32,
                                      inferNumRegistersPerMatrixFragment(type)};
diff --git a/mlir/lib/Target/LLVMIR/TypeFromLLVM.cpp b/mlir/lib/Target/LLVMIR/TypeFromLLVM.cpp
index bc9765fff2953..c46aa3e80d51a 100644
--- a/mlir/lib/Target/LLVMIR/TypeFromLLVM.cpp
+++ b/mlir/lib/Target/LLVMIR/TypeFromLLVM.cpp
@@ -124,14 +124,15 @@ class TypeFromLLVMIRTranslatorImpl {
 
   /// Translates the given fixed-vector type.
   Type translate(llvm::FixedVectorType *type) {
-    return LLVM::getFixedVectorType(translateType(type->getElementType()),
-                                    type->getNumElements());
+    return VectorType::get(type->getNumElements(),
+                           translateType(type->getElementType()));
   }
 
   /// Translates the given scalable-vector type.
   Type translate(llvm::ScalableVectorType *type) {
-    return LLVM::getScalableVectorType(translateType(type->getElementType()),
-                                       type->getMinNumElements());
+    return VectorType::get(type->getMinNumElements(),
+                           translateType(type->getElementType()),
+                           /*scalable=*/true);
   }
 
   /// Translates the given target extension type.

@llvmbot
Member

llvmbot commented Apr 9, 2025

@llvm/pr-subscribers-mlir-llvm

Author: Matthias Springer (matthias-springer)
   }
 
   /// Translates the given target extension type.

github-actions bot commented Apr 9, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.

Base automatically changed from users/matthias-springer/drop_get_element_type to main April 9, 2025 19:35
@matthias-springer matthias-springer force-pushed the users/matthias-springer/delete_get_vec_type branch from 120778f to 0982387 Compare April 9, 2025 19:39
@matthias-springer matthias-springer force-pushed the users/matthias-springer/delete_get_vec_type branch from 0982387 to ce792f0 Compare April 9, 2025 19:40
Contributor

@Dinistro Dinistro left a comment

LGTM, thanks for the cleanup.

@matthias-springer matthias-springer merged commit 85742f7 into main Apr 10, 2025
11 checks passed
@matthias-springer matthias-springer deleted the users/matthias-springer/delete_get_vec_type branch April 10, 2025 08:36
matthias-springer added a commit that referenced this pull request Apr 10, 2025
@Sterling-Augustine
Contributor

I have tracked down a crash in the SLP vectorizer to this PR. Unfortunately, it is deep inside some highly complex private code when optimizing with ThinLTO. I'm working on a reduced test case; until then, here is the stack trace of the failure.

1.	Running pass "function<eager-inv>(float2int,lower-constant-intrinsics,chr,loop(loop-rotate<header-duplication;no-prepare-for-lto>,loop-deletion),loop-distribute,inject-tli-mappings,loop-vectorize<no-interleave-forced-only;no-vectorize-forced-only;>,infer-alignment,loop-load-elim,instcombine<max-iterations=1;no-verify-fixpoint>,simplifycfg<bonus-inst-threshold=1;forward-switch-cond;switch-range-to-icmp;switch-to-lookup;no-keep-loops;hoist-common-insts;no-hoist-loads-stores-with-cond-faulting;sink-common-insts;speculate-blocks;simplify-cond-branch;no-speculate-unpredictables>,slp-vectorizer,vector-combine,instcombine<max-iterations=1;no-verify-fixpoint>,loop-unroll<O3>,transform-warning,sroa<preserve-cfg>,infer-alignment,instcombine<max-iterations=1;no-verify-fixpoint>,loop-mssa(licm<allowspeculation>),alignment-from-assumptions,loop-sink,instsimplify,div-rem-pairs,tailcallelim,simplifycfg<bonus-inst-threshold=1;no-forward-switch-cond;switch-range-to-icmp;no-switch-to-lookup;keep-loops;no-hoist-common-insts;hoist-loads-stores-with-cond-faulting;no-sink-common-insts;speculate-blocks;simplify-cond-branch;speculate-unpredictables>)" on module "blaze-out/k8-opt/bin/spanner/storage/internal/_objs/block_format/block_data.pic.o"
2.	Running pass "slp-vectorizer" on function "_ZN7spanner7storage15RLVectorBuilderImE8ByteSizeENS0_9SizeBoundE"
 #0 0x00005646f63e1c9a llvm::sys::RunSignalHandlers() /proc/self/cwd/third_party/llvm/llvm-project/llvm/lib/Support/Signals.cpp:106:18
 #1 0x00005646f63a4ff6 (anonymous namespace)::CrashRecoveryContextImpl::HandleCrash(int, unsigned long) /proc/self/cwd/third_party/llvm/llvm-project/llvm/lib/Support/CrashRecoveryContext.cpp:73:5
 #2 0x00005646f63a4ff6 CrashRecoverySignalHandler(int) /proc/self/cwd/third_party/llvm/llvm-project/llvm/lib/Support/CrashRecoveryContext.cpp:390:51
 #3 0x00007f6f6c107e80 __restore_rt (/usr/grte/v5/lib64/libpthread.so.0+0x14e80)
 #4 0x00005646f5688a98 llvm::slpvectorizer::BoUpSLP::getOperandEntry(llvm::slpvectorizer::BoUpSLP::TreeEntry const*, unsigned int) const /proc/self/cwd/third_party/llvm/llvm-project/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:12356:16
 #5 0x00005646f566c33d llvm::slpvectorizer::BoUpSLP::getOperandEntry(llvm::slpvectorizer::BoUpSLP::TreeEntry*, unsigned int) /proc/self/cwd/third_party/llvm/llvm-project/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:3129:9
 #6 0x00005646f566c33d llvm::slpvectorizer::BoUpSLP::reorderBottomToTop(bool) /proc/self/cwd/third_party/llvm/llvm-project/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:7292:37
 #7 0x00005646f56bc3fa llvm::SLPVectorizerPass::tryToVectorizeList(llvm::ArrayRef<llvm::Value*>, llvm::slpvectorizer::BoUpSLP&, bool) /proc/self/cwd/third_party/llvm/llvm-project/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:21061:9
 #8 0x00005646f56c1693 llvm::function_ref<bool (llvm::ArrayRef<llvm::Value*>, bool)>::operator()(llvm::ArrayRef<llvm::Value*>, bool) const /proc/self/cwd/third_party/llvm/llvm-project/llvm/include/llvm/ADT/STLFunctionalExtras.h:0:12
 #9 0x00005646f56c1693 bool tryToVectorizeSequence<llvm::Value>(llvm::SmallVectorImpl<llvm::Value*>&, llvm::function_ref<bool (llvm::Value*, llvm::Value*)>, llvm::function_ref<bool (llvm::Value*, llvm::Value*)>, llvm::function_ref<bool (llvm::ArrayRef<llvm::Value*>, bool)>, bool, llvm::slpvectorizer::BoUpSLP&) /proc/self/cwd/third_party/llvm/llvm-project/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:23305:11
#10 0x00005646f56b6d10 llvm::SLPVectorizerPass::vectorizeChainsInBlock(llvm::BasicBlock*, llvm::slpvectorizer::BoUpSLP&) /proc/self/cwd/third_party/llvm/llvm-project/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:23720:30
#11 0x00005646f56b4ea7 llvm::SLPVectorizerPass::runImpl(llvm::Function&, llvm::ScalarEvolution*, llvm::TargetTransformInfo*, llvm::TargetLibraryInfo*, llvm::AAResults*, llvm::LoopInfo*, llvm::DominatorTree*, llvm::AssumptionCache*, llvm::DemandedBits*, llvm::OptimizationRemarkEmitter*) /proc/self/cwd/third_party/llvm/llvm-project/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:20352:16
#12 0x00005646f56b4992 llvm::SLPVectorizerPass::run(llvm::Function&, llvm::AnalysisManager<llvm::Function>&) /proc/self/cwd/third_party/llvm/llvm-project/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:20280:18
#13 0x00005646f490fc32 llvm::detail::PassModel<llvm::Function, llvm::SLPVectorizerPass, llvm::AnalysisManager<llvm::Function>>::run(llvm::Function&, llvm::AnalysisManager<llvm::Function>&) /proc/self/cwd/third_party/llvm/llvm-project/llvm/include/llvm/IR/PassManagerInternal.h:91:5
#14 0x00005646f6224f59 llvm::PassManager<llvm::Function, llvm::AnalysisManager<llvm::Function>>::run(llvm::Function&, llvm::AnalysisManager<llvm::Function>&) /proc/self/cwd/third_party/llvm/llvm-project/llvm/include/llvm/IR/PassManagerImpl.h:85:8
#15 0x00005646f14c8212 llvm::detail::PassModel<llvm::Function, llvm::PassManager<llvm::Function, llvm::AnalysisManager<llvm::Function>>, llvm::AnalysisManager<llvm::Function>>::run(llvm::Function&, llvm::AnalysisManager<llvm::Function>&) /proc/self/cwd/third_party/llvm/llvm-project/llvm/include/llvm/IR/PassManagerInternal.h:91:5
#16 0x00005646f6227a10 llvm::ModuleToFunctionPassAdaptor::run(llvm::Module&, llvm::AnalysisManager<llvm::Module>&) /proc/self/cwd/third_party/llvm/llvm-project/llvm/lib/IR/PassManager.cpp:129:23
#17 0x00005646f14c45b2 llvm::detail::PassModel<llvm::Module, llvm::ModuleToFunctionPassAdaptor, llvm::AnalysisManager<llvm::Module>>::run(llvm::Module&, llvm::AnalysisManager<llvm::Module>&) /proc/self/cwd/third_party/llvm/llvm-project/llvm/include/llvm/IR/PassManagerInternal.h:91:5
#18 0x00005646f6224219 llvm::PassManager<llvm::Module, llvm::AnalysisManager<llvm::Module>>::run(llvm::Module&, llvm::AnalysisManager<llvm::Module>&) /proc/self/cwd/third_party/llvm/llvm-project/llvm/include/llvm/IR/PassManagerImpl.h:85:8
#19 0x00005646f1768700 llvm::SmallPtrSetImplBase::~SmallPtrSetImplBase() /proc/self/cwd/third_party/llvm/llvm-project/llvm/include/llvm/ADT/SmallPtrSet.h:86:9
#20 0x00005646f1768700 llvm::PreservedAnalyses::~PreservedAnalyses() /proc/self/cwd/third_party/llvm/llvm-project/llvm/include/llvm/IR/Analysis.h:111:7
#21 0x00005646f1768700 runNewPMPasses(llvm::lto::Config const&, llvm::Module&, llvm::TargetMachine*, unsigned int, bool, llvm::ModuleSummaryIndex*, llvm::ModuleSummaryIndex const*) /proc/self/cwd/third_party/llvm/llvm-project/llvm/lib/LTO/LTOBackend.cpp:356:3
#22 0x00005646f1768700 llvm::lto::opt(llvm::lto::Config const&, llvm::TargetMachine*, unsigned int, llvm::Module&, bool, llvm::ModuleSummaryIndex*, llvm::ModuleSummaryIndex const*, std::__u::vector<unsigned char, std::__u::allocator<unsigned char>> const&) /proc/self/cwd/third_party/llvm/llvm-project/llvm/lib/LTO/LTOBackend.cpp:395:5
#23 0x00005646f176ad87 llvm::lto::thinBackend(llvm::lto::Config const&, unsigned int, std::__u::function<llvm::Expected<std::__u::unique_ptr<llvm::CachedFileStream, std::__u::default_delete<llvm::CachedFileStream>>> (unsigned int, llvm::Twine const&)>, llvm::Module&, llvm::ModuleSummaryIndex const&, llvm::FunctionImporter::ImportMapTy const&, llvm::DenseMap<unsigned long, llvm::GlobalValueSummary*, llvm::DenseMapInfo<unsigned long, void>, llvm::detail::DenseMapPair<unsigned long, llvm::GlobalValueSummary*>> const&, llvm::MapVector<llvm::StringRef, llvm::BitcodeModule, llvm::DenseMap<llvm::StringRef, unsigned int, llvm::DenseMapInfo<llvm::StringRef, void>, llvm::detail::DenseMapPair<llvm::StringRef, unsigned int>>, llvm::SmallVector<std::__u::pair<llvm::StringRef, llvm::BitcodeModule>, 0u>>*, bool, std::__u::function<llvm::Expected<std::__u::unique_ptr<llvm::CachedFileStream, std::__u::default_delete<llvm::CachedFileStream>>> (unsigned int, llvm::Twine const&)>, std::__u::vector<unsigned char, std::__u::allocator<unsigned char>> const&)::$_1::operator()(llvm::Module&, llvm::TargetMachine*, std::__u::unique_ptr<llvm::ToolOutputFile, std::__u::default_delete<llvm::ToolOutputFile>>) const /proc/self/cwd/third_party/llvm/llvm-project/llvm/lib/LTO/LTOBackend.cpp:633:13
#24 0x00005646f176ac24 llvm::lto::thinBackend(llvm::lto::Config const&, unsigned int, std::__u::function<llvm::Expected<std::__u::unique_ptr<llvm::CachedFileStream, std::__u::default_delete<llvm::CachedFileStream>>> (unsigned int, llvm::Twine const&)>, llvm::Module&, llvm::ModuleSummaryIndex const&, llvm::FunctionImporter::ImportMapTy const&, llvm::DenseMap<unsigned long, llvm::GlobalValueSummary*, llvm::DenseMapInfo<unsigned long, void>, llvm::detail::DenseMapPair<unsigned long, llvm::GlobalValueSummary*>> const&, llvm::MapVector<llvm::StringRef, llvm::BitcodeModule, llvm::DenseMap<llvm::StringRef, unsigned int, llvm::DenseMapInfo<llvm::StringRef, void>, llvm::detail::DenseMapPair<llvm::StringRef, unsigned int>>, llvm::SmallVector<std::__u::pair<llvm::StringRef, llvm::BitcodeModule>, 0u>>*, bool, std::__u::function<llvm::Expected<std::__u::unique_ptr<llvm::CachedFileStream, std::__u::default_delete<llvm::CachedFileStream>>> (unsigned int, llvm::Twine const&)>, std::__u::vector<unsigned char, std::__u::allocator<unsigned char>> const&) /proc/self/cwd/third_party/llvm/llvm-project/llvm/lib/LTO/LTOBackend.cpp:0:10
#25 0x00005646f14b9339 runThinLTOBackend(clang::CompilerInstance&, llvm::ModuleSummaryIndex*, llvm::Module*, std::__u::unique_ptr<llvm::raw_pwrite_stream, std::__u::default_delete<llvm::raw_pwrite_stream>>, std::__u::basic_string<char, std::__u::char_traits<char>, std::__u::allocator<char>>, std::__u::basic_string<char, std::__u::char_traits<char>, std::__u::allocator<char>>, clang::BackendAction) /proc/self/cwd/third_party/llvm/llvm-project/clang/lib/CodeGen/BackendUtil.cpp:1375:11
#26 0x00005646f14b9339 clang::emitBackendOutput(clang::CompilerInstance&, clang::CodeGenOptions&, llvm::StringRef, llvm::Module*, clang::BackendAction, llvm::IntrusiveRefCntPtr<llvm::vfs::FileSystem>, std::__u::unique_ptr<llvm::raw_pwrite_stream, std::__u::default_delete<llvm::raw_pwrite_stream>>, clang::BackendConsumer*) /proc/self/cwd/third_party/llvm/llvm-project/clang/lib/CodeGen/BackendUtil.cpp:1415:9
#27 0x00005646f122e598 llvm::IntrusiveRefCntPtr<llvm::vfs::FileSystem>::release() /proc/self/cwd/third_party/llvm/llvm-project/llvm/include/llvm/ADT/IntrusiveRefCntPtr.h:232:9
#28 0x00005646f122e598 llvm::IntrusiveRefCntPtr<llvm::vfs::FileSystem>::~IntrusiveRefCntPtr() /proc/self/cwd/third_party/llvm/llvm-project/llvm/include/llvm/ADT/IntrusiveRefCntPtr.h:196:27
#29 0x00005646f122e598 clang::CodeGenAction::ExecuteAction() /proc/self/cwd/third_party/llvm/llvm-project/clang/lib/CodeGen/CodeGenAction.cpp:1187:3
#30 0x00005646f1cecd01 clang::FrontendAction::Execute() /proc/self/cwd/third_party/llvm/llvm-project/clang/lib/Frontend/FrontendAction.cpp:1231:10
#31 0x00005646f1c66704 llvm::Error::getPtr() const /proc/self/cwd/third_party/llvm/llvm-project/llvm/include/llvm/Support/Error.h:281:12
#32 0x00005646f1c66704 llvm::Error::operator bool() /proc/self/cwd/third_party/llvm/llvm-project/llvm/include/llvm/Support/Error.h:241:16
#33 0x00005646f1c66704 clang::CompilerInstance::ExecuteAction(clang::FrontendAction&) /proc/self/cwd/third_party/llvm/llvm-project/clang/lib/Frontend/CompilerInstance.cpp:1055:23
#34 0x00005646f122833f clang::ExecuteCompilerInvocation(clang::CompilerInstance*) /proc/self/cwd/third_party/llvm/llvm-project/clang/lib/FrontendTool/ExecuteCompilerInvocation.cpp:300:25
#35 0x00005646f121de91 cc1_main(llvm::ArrayRef<char const*>, char const*, void*) /proc/self/cwd/third_party/llvm/llvm-project/clang/tools/driver/cc1_main.cpp:294:15
#36 0x00005646f121afa9 ExecuteCC1Tool(llvm::SmallVectorImpl<char const*>&, llvm::ToolContext const&) /proc/self/cwd/third_party/llvm/llvm-project/clang/tools/driver/driver.cpp:218:12
#37 0x00005646f1dfed3e clang::driver::CC1Command::Execute(llvm::ArrayRef<std::__u::optional<llvm::StringRef>>, std::__u::basic_string<char, std::__u::char_traits<char>, std::__u::allocator<char>>*, bool*) const::$_0::operator()() const /proc/self/cwd/third_party/llvm/llvm-project/clang/lib/Driver/Job.cpp:435:30
#38 0x00005646f1dfed3e void llvm::function_ref<void ()>::callback_fn<clang::driver::CC1Command::Execute(llvm::ArrayRef<std::__u::optional<llvm::StringRef>>, std::__u::basic_string<char, std::__u::char_traits<char>, std::__u::allocator<char>>*, bool*) const::$_0>(long) /proc/self/cwd/third_party/llvm/llvm-project/llvm/include/llvm/ADT/STLFunctionalExtras.h:46:12
#39 0x00005646f63a4dcf llvm::function_ref<void ()>::operator()() const /proc/self/cwd/third_party/llvm/llvm-project/llvm/include/llvm/ADT/STLFunctionalExtras.h:0:12
#40 0x00005646f63a4dcf llvm::CrashRecoveryContext::RunSafely(llvm::function_ref<void ()>) /proc/self/cwd/third_party/llvm/llvm-project/llvm/lib/Support/CrashRecoveryContext.cpp:426:3
#41 0x00005646f1dfe41a clang::driver::CC1Command::Execute(llvm::ArrayRef<std::__u::optional<llvm::StringRef>>, std::__u::basic_string<char, std::__u::char_traits<char>, std::__u::allocator<char>>*, bool*) const /proc/self/cwd/third_party/llvm/llvm-project/clang/lib/Driver/Job.cpp:435:7
#42 0x00005646f1dc18dd clang::driver::Compilation::ExecuteCommand(clang::driver::Command const&, clang::driver::Command const*&, bool) const /proc/self/cwd/third_party/llvm/llvm-project/clang/lib/Driver/Compilation.cpp:196:15
#43 0x00005646f1dc1acf clang::driver::Compilation::ExecuteJobs(clang::driver::JobList const&, llvm::SmallVectorImpl<std::__u::pair<int, clang::driver::Command const*>>&, bool) const /proc/self/cwd/third_party/llvm/llvm-project/clang/lib/Driver/Compilation.cpp:251:13
#44 0x00005646f1dde910 llvm::SmallVectorBase<unsigned int>::empty() const /proc/self/cwd/third_party/llvm/llvm-project/llvm/include/llvm/ADT/SmallVector.h:81:46
#45 0x00005646f1dde910 clang::driver::Driver::ExecuteCompilation(clang::driver::Compilation&, llvm::SmallVectorImpl<std::__u::pair<int, clang::driver::Command const*>>&) /proc/self/cwd/third_party/llvm/llvm-project/clang/lib/Driver/Driver.cpp:2223:23
#46 0x00005646f121a22f clang_main(int, char**, llvm::ToolContext const&) /proc/self/cwd/third_party/llvm/llvm-project/clang/tools/driver/driver.cpp:402:21
#47 0x00005646f12189b4 main /proc/self/cwd/blaze-out/k8-opt/bin/third_party/llvm/llvm-project/clang/clang-driver.cpp:17:10
#48 0x00007f6f6bf983d4 __libc_start_main (/usr/grte/v5/lib64/libc.so.6+0x613d4)
#49 0x00005646f12188ea _start /usr/grte/v5/debug-src/src/csu/../sysdeps/x86_64/start.S:122:0
clang: error: clang frontend command failed with exit code 139 (use -v to see invocation)

@Sterling-Augustine
Contributor

Progress is slow going, but the failing intrinsic is llvm.umax, and it fails because the call has no operands at all, which is why getOperandEntry asserts.

@joker-eph
Collaborator

@Sterling-Augustine while the title mentions "Vector"-related things, this is an MLIR-only change, and I can't see how it could crash clang. I suspect your test is flaky or your bisection didn't go as expected.

var-const pushed a commit to ldionne/llvm-project that referenced this pull request Apr 17, 2025
…lvm#135051)

The LLVM dialect no longer has its own vector types. It uses
`mlir::VectorType` everywhere. Remove
`LLVM::getFixedVectorType/getScalableVectorType` and use
`VectorType::get` instead. This commit addresses a
[comment](llvm#133286 (comment))
on the PR that deleted the LLVM vector types.
var-const pushed a commit to ldionne/llvm-project that referenced this pull request Apr 17, 2025
bcardosolopes pushed a commit to llvm/clangir that referenced this pull request Apr 30, 2025
…et (#1585)

Use VectorType::get instead of the `getFixedVectorType` function because
it's already removed PR llvm/llvm-project#135051