[mlir][NFC] update mlir/Dialect create APIs (32/n) #150657

Merged
merged 1 commit into llvm:main from makslevental/update-create-32n on Jul 25, 2025

Conversation

makslevental
Contributor

See #147168 for more info.

@llvmbot
Member

llvmbot commented Jul 25, 2025

@llvm/pr-subscribers-mlir-vector

@llvm/pr-subscribers-mlir-linalg

Author: Maksim Levental (makslevental)

Changes

See #147168 for more info.
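
For reference, the mechanical change in this series swaps the builder-member form `rewriter.create<OpTy>(loc, ...)` for the static form `OpTy::create(rewriter, loc, ...)`. A minimal before/after sketch, modeled on the first Linalg hunk below (names such as `padValue` and `emptyTensor` come from that hunk and are not defined here):

```cpp
// Old builder-member form.
Value replacement =
    rewriter
        .create<linalg::FillOp>(fillOp.getLoc(), ValueRange{padValue},
                                ValueRange{emptyTensor})
        .getResult(0);

// New static-create form: the builder is passed as the first argument.
Value replacement =
    linalg::FillOp::create(rewriter, fillOp.getLoc(), ValueRange{padValue},
                           ValueRange{emptyTensor})
        .getResult(0);
```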


Patch is 26.21 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/150657.diff

20 Files Affected:

  • (modified) mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp (+2-4)
  • (modified) mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp (+1-2)
  • (modified) mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp (+1-2)
  • (modified) mlir/lib/Dialect/Linalg/Transforms/DropUnitDims.cpp (+1-2)
  • (modified) mlir/lib/Dialect/Linalg/Transforms/ElementwiseOpFusion.cpp (+2-4)
  • (modified) mlir/lib/Dialect/Linalg/Transforms/PackAndUnpackPatterns.cpp (+1-2)
  • (modified) mlir/lib/Dialect/Linalg/Transforms/Padding.cpp (+2-4)
  • (modified) mlir/lib/Dialect/Linalg/Transforms/Transforms.cpp (+1-2)
  • (modified) mlir/lib/Dialect/Linalg/Transforms/TransposeConv2D.cpp (+1-2)
  • (modified) mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp (+1-2)
  • (modified) mlir/lib/Dialect/Linalg/Transforms/WinogradConv2D.cpp (+8-16)
  • (modified) mlir/lib/Dialect/Vector/IR/VectorOps.cpp (+1-2)
  • (modified) mlir/lib/Dialect/Vector/Transforms/LowerVectorGather.cpp (+3-4)
  • (modified) mlir/lib/Dialect/Vector/Transforms/LowerVectorTransfer.cpp (+4-4)
  • (modified) mlir/lib/Dialect/Vector/Transforms/VectorDistribute.cpp (+18-21)
  • (modified) mlir/lib/Dialect/Vector/Transforms/VectorDropLeadUnitDim.cpp (+3-3)
  • (modified) mlir/lib/Dialect/Vector/Transforms/VectorEmulateNarrowType.cpp (+12-14)
  • (modified) mlir/lib/Dialect/Vector/Transforms/VectorTransferOpTransforms.cpp (+2-2)
  • (modified) mlir/lib/Dialect/Vector/Transforms/VectorTransferSplitRewritePatterns.cpp (+17-18)
  • (modified) mlir/lib/Dialect/Vector/Transforms/VectorTransforms.cpp (+1-2)
diff --git a/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp b/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
index 4fee81aa2ef67..b154c69d28148 100644
--- a/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
+++ b/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
@@ -791,8 +791,7 @@ struct FoldFillWithPad final : public OpRewritePattern<tensor::PadOp> {
         tensor::EmptyOp::create(rewriter, padOp.getLoc(), reifiedShape.front(),
                                 padOp.getResultType().getElementType());
     Value replacement =
-        rewriter
-            .create<FillOp>(fillOp.getLoc(), ValueRange{padValue},
+        FillOp::create(rewriter, fillOp.getLoc(), ValueRange{padValue},
                             ValueRange{emptyTensor})
             .getResult(0);
     if (replacement.getType() != padOp.getResultType()) {
@@ -2154,8 +2153,7 @@ struct SwapTransposeWithBroadcast : OpRewritePattern<linalg::TransposeOp> {
 
     // Create broadcast(transpose(input)).
     Value transposeResult =
-        rewriter
-            .create<TransposeOp>(loc, broadcastOp.getInput(), transposeInit,
+        TransposeOp::create(rewriter, loc, broadcastOp.getInput(), transposeInit,
                                  resultPerms)
             ->getResult(0);
     rewriter.replaceOpWithNewOp<BroadcastOp>(
diff --git a/mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp b/mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp
index bb0861340ad92..6625267f07d68 100644
--- a/mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp
+++ b/mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp
@@ -4133,8 +4133,7 @@ DiagnosedSilenceableFailure doit(RewriterBase &rewriter, OpTy target,
   Value extracted = tensor::ExtractSliceOp::create(
       rewriter, target.getLoc(), target.getDest(), target.getMixedOffsets(),
       target.getMixedSizes(), target.getMixedStrides());
-  Value copied = rewriter
-                     .create<linalg::CopyOp>(target.getLoc(),
+  Value copied = linalg::CopyOp::create(rewriter, target.getLoc(),
                                              target.getSource(), extracted)
                      .getResult(0);
   // Reset the insertion point.
diff --git a/mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp b/mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
index 91a297f7b9db7..6dc5bf3a15da4 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
@@ -1143,8 +1143,7 @@ pushDownUnPackOpThroughGenericOp(RewriterBase &rewriter, GenericOp genericOp,
 
   // Insert an unPackOp right after the packed generic.
   Value unPackOpRes =
-      rewriter
-          .create<linalg::UnPackOp>(genericOp.getLoc(), newResult,
+      linalg::UnPackOp::create(rewriter, genericOp.getLoc(), newResult,
                                     destPack.getSource(), innerDimsPos,
                                     mixedTiles, outerDimsPerm)
           .getResult();
diff --git a/mlir/lib/Dialect/Linalg/Transforms/DropUnitDims.cpp b/mlir/lib/Dialect/Linalg/Transforms/DropUnitDims.cpp
index 745a40dbc4eea..d3af23b62215d 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/DropUnitDims.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/DropUnitDims.cpp
@@ -267,8 +267,7 @@ expandValue(RewriterBase &rewriter, Location loc, Value result, Value origDest,
   assert(rankReductionStrategy ==
              ControlDropUnitDims::RankReductionStrategy::ReassociativeReshape &&
          "unknown rank reduction strategy");
-  return rewriter
-      .create<tensor::ExpandShapeOp>(loc, origResultType, result, reassociation)
+  return tensor::ExpandShapeOp::create(rewriter, loc, origResultType, result, reassociation)
       .getResult();
 }
 
diff --git a/mlir/lib/Dialect/Linalg/Transforms/ElementwiseOpFusion.cpp b/mlir/lib/Dialect/Linalg/Transforms/ElementwiseOpFusion.cpp
index 4a66b8b9619f4..92342abcc5af3 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/ElementwiseOpFusion.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/ElementwiseOpFusion.cpp
@@ -1572,12 +1572,10 @@ static Value getCollapsedOpOperand(Location loc, LinalgOp op,
 
   // Insert a reshape to collapse the dimensions.
   if (isa<MemRefType>(operand.getType())) {
-    return builder
-        .create<memref::CollapseShapeOp>(loc, operand, operandReassociation)
+    return memref::CollapseShapeOp::create(builder, loc, operand, operandReassociation)
         .getResult();
   }
-  return builder
-      .create<tensor::CollapseShapeOp>(loc, operand, operandReassociation)
+  return tensor::CollapseShapeOp::create(builder, loc, operand, operandReassociation)
       .getResult();
 }
 
diff --git a/mlir/lib/Dialect/Linalg/Transforms/PackAndUnpackPatterns.cpp b/mlir/lib/Dialect/Linalg/Transforms/PackAndUnpackPatterns.cpp
index a45a4e314e511..091266e49db4a 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/PackAndUnpackPatterns.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/PackAndUnpackPatterns.cpp
@@ -81,8 +81,7 @@ struct SimplifyPackToExpandShape : public OpRewritePattern<PackOp> {
                ArrayRef<ReassociationIndices> reassociation) const {
     if (operand.getType() == newOperandType)
       return operand;
-    return rewriter
-        .create<tensor::ExpandShapeOp>(loc, newOperandType, operand,
+    return tensor::ExpandShapeOp::create(rewriter, loc, newOperandType, operand,
                                        reassociation)
         .getResult();
   }
diff --git a/mlir/lib/Dialect/Linalg/Transforms/Padding.cpp b/mlir/lib/Dialect/Linalg/Transforms/Padding.cpp
index b5c5aea56a998..e4182b1451751 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/Padding.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/Padding.cpp
@@ -333,16 +333,14 @@ linalg::rewriteAsPaddedOp(RewriterBase &rewriter, LinalgOp opToPad,
   for (auto it :
        llvm::zip(paddedSubtensorResults, opToPad.getDpsInitsMutable())) {
     if (options.copyBackOp == LinalgPaddingOptions::CopyBackOp::LinalgCopy) {
-      replacements.push_back(rewriter
-                                 .create<linalg::CopyOp>(loc, std::get<0>(it),
+      replacements.push_back(linalg::CopyOp::create(rewriter, loc, std::get<0>(it),
                                                          std::get<1>(it).get())
                                  .getResult(0));
     } else if (options.copyBackOp ==
                LinalgPaddingOptions::CopyBackOp::
                    BufferizationMaterializeInDestination) {
       replacements.push_back(
-          rewriter
-              .create<bufferization::MaterializeInDestinationOp>(
+          bufferization::MaterializeInDestinationOp::create(rewriter,
                   loc, std::get<0>(it), std::get<1>(it).get())
               ->getResult(0));
     } else {
diff --git a/mlir/lib/Dialect/Linalg/Transforms/Transforms.cpp b/mlir/lib/Dialect/Linalg/Transforms/Transforms.cpp
index 1f1e617738981..475b0f94779c5 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/Transforms.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/Transforms.cpp
@@ -947,8 +947,7 @@ DecomposePadOpPattern::matchAndRewrite(tensor::PadOp padOp,
   auto getIdxValue = [&](OpFoldResult ofr) {
     if (auto val = llvm::dyn_cast_if_present<Value>(ofr))
       return val;
-    return rewriter
-        .create<arith::ConstantIndexOp>(
+    return arith::ConstantIndexOp::create(rewriter,
             padOp.getLoc(), cast<IntegerAttr>(cast<Attribute>(ofr)).getInt())
         .getResult();
   };
diff --git a/mlir/lib/Dialect/Linalg/Transforms/TransposeConv2D.cpp b/mlir/lib/Dialect/Linalg/Transforms/TransposeConv2D.cpp
index 99fb8c796cf06..20fb22334dd38 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/TransposeConv2D.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/TransposeConv2D.cpp
@@ -70,8 +70,7 @@ FailureOr<Operation *> transposeConv2DHelper(RewriterBase &rewriter,
     input = tensor::EmptyOp::create(rewriter, loc, newFilterShape, elementTy)
                 .getResult();
   } else {
-    input = rewriter
-                .create<memref::AllocOp>(
+    input = memref::AllocOp::create(rewriter,
                     loc, MemRefType::get(newFilterShape, elementTy))
                 .getResult();
   }
diff --git a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
index ae627da5445a8..4733d617f0dd4 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
@@ -3714,8 +3714,7 @@ struct Conv1DGenerator
     }
     }
 
-    return rewriter
-        .create<vector::TransferWriteOp>(loc, res, resShaped, resPadding)
+    return vector::TransferWriteOp::create(rewriter, loc, res, resShaped, resPadding)
         .getOperation();
   }
 
diff --git a/mlir/lib/Dialect/Linalg/Transforms/WinogradConv2D.cpp b/mlir/lib/Dialect/Linalg/Transforms/WinogradConv2D.cpp
index 669fefcd86de1..da8ff88ccebfe 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/WinogradConv2D.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/WinogradConv2D.cpp
@@ -399,8 +399,7 @@ Value filterTransform(RewriterBase &rewriter, Location loc, Value filter,
       retRows = GMatrix.rows;
       auto matmulType = RankedTensorType::get({retRows, filterW}, elementType);
       auto empty =
-          builder
-              .create<tensor::EmptyOp>(loc, matmulType.getShape(), elementType)
+          tensor::EmptyOp::create(builder, loc, matmulType.getShape(), elementType)
               .getResult();
       auto init =
           linalg::FillOp::create(builder, loc, zero, empty).getResult(0);
@@ -423,8 +422,7 @@ Value filterTransform(RewriterBase &rewriter, Location loc, Value filter,
       auto matmulType =
           RankedTensorType::get({retRows, GTMatrix.cols}, elementType);
       auto empty =
-          builder
-              .create<tensor::EmptyOp>(loc, matmulType.getShape(), elementType)
+          tensor::EmptyOp::create(builder, loc, matmulType.getShape(), elementType)
               .getResult();
       auto init =
           linalg::FillOp::create(builder, loc, zero, empty).getResult(0);
@@ -548,8 +546,7 @@ Value inputTransform(RewriterBase &rewriter, Location loc, Value input,
       retRows = BTMatrix.rows;
       auto matmulType = RankedTensorType::get({retRows, alphaW}, elementType);
       auto empty =
-          builder
-              .create<tensor::EmptyOp>(loc, matmulType.getShape(), elementType)
+          tensor::EmptyOp::create(builder, loc, matmulType.getShape(), elementType)
               .getResult();
       auto init =
           linalg::FillOp::create(builder, loc, zero, empty).getResult(0);
@@ -573,8 +570,7 @@ Value inputTransform(RewriterBase &rewriter, Location loc, Value input,
       retCols = BMatrix.cols;
       auto matmulType = RankedTensorType::get({retRows, retCols}, elementType);
       auto empty =
-          builder
-              .create<tensor::EmptyOp>(loc, matmulType.getShape(), elementType)
+          tensor::EmptyOp::create(builder, loc, matmulType.getShape(), elementType)
               .getResult();
       auto init =
           linalg::FillOp::create(builder, loc, zero, empty).getResult(0);
@@ -661,8 +657,7 @@ static Value matrixMultiply(RewriterBase &rewriter, Location loc,
       {inputShape[0] * inputShape[1],
        inputShape[2] * inputShape[3] * inputShape[4], filterShape[3]},
       outputElementType);
-  Value empty = rewriter
-                    .create<tensor::EmptyOp>(loc, matmulType.getShape(),
+  Value empty = tensor::EmptyOp::create(rewriter, loc, matmulType.getShape(),
                                              outputElementType)
                     .getResult();
   Value zero = arith::ConstantOp::create(
@@ -782,8 +777,7 @@ Value outputTransform(RewriterBase &rewriter, Location loc, Value value,
       auto matmulType = RankedTensorType::get({retRows, valueW}, elementType);
       Value init = outInitVal;
       if (rightTransform || scalarFactor != 1) {
-        auto empty = builder
-                         .create<tensor::EmptyOp>(loc, matmulType.getShape(),
+        auto empty = tensor::EmptyOp::create(builder, loc, matmulType.getShape(),
                                                   elementType)
                          .getResult();
         init = linalg::FillOp::create(builder, loc, zero, empty).getResult(0);
@@ -802,8 +796,7 @@ Value outputTransform(RewriterBase &rewriter, Location loc, Value value,
           RankedTensorType::get({retRows, AMatrix.cols}, elementType);
       Value init = outInitVal;
       if (scalarFactor != 1) {
-        auto empty = builder
-                         .create<tensor::EmptyOp>(loc, matmulType.getShape(),
+        auto empty = tensor::EmptyOp::create(builder, loc, matmulType.getShape(),
                                                   elementType)
                          .getResult();
         init = linalg::FillOp::create(builder, loc, zero, empty).getResult(0);
@@ -827,8 +820,7 @@ Value outputTransform(RewriterBase &rewriter, Location loc, Value value,
           AffineMap::get(2, 0, context), identityAffineMap, identityAffineMap};
 
       matmulRetValue =
-          rewriter
-              .create<linalg::GenericOp>(
+          linalg::GenericOp::create(rewriter,
                   loc, matmulType,
                   ValueRange{scalarFactorValue, matmulRetValue},
                   ValueRange{outInitVal}, affineMaps,
diff --git a/mlir/lib/Dialect/Vector/IR/VectorOps.cpp b/mlir/lib/Dialect/Vector/IR/VectorOps.cpp
index 4e9f93b9cae6f..1a3f972a43fce 100644
--- a/mlir/lib/Dialect/Vector/IR/VectorOps.cpp
+++ b/mlir/lib/Dialect/Vector/IR/VectorOps.cpp
@@ -372,8 +372,7 @@ SmallVector<Value> vector::getAsValues(OpBuilder &builder, Location loc,
   llvm::transform(foldResults, std::back_inserter(values),
                   [&](OpFoldResult foldResult) {
                     if (auto attr = dyn_cast<Attribute>(foldResult))
-                      return builder
-                          .create<arith::ConstantIndexOp>(
+                      return arith::ConstantIndexOp::create(builder,
                               loc, cast<IntegerAttr>(attr).getInt())
                           .getResult();
 
diff --git a/mlir/lib/Dialect/Vector/Transforms/LowerVectorGather.cpp b/mlir/lib/Dialect/Vector/Transforms/LowerVectorGather.cpp
index 2484670c39caa..e062f55f87679 100644
--- a/mlir/lib/Dialect/Vector/Transforms/LowerVectorGather.cpp
+++ b/mlir/lib/Dialect/Vector/Transforms/LowerVectorGather.cpp
@@ -248,11 +248,10 @@ struct Gather1DToConditionalLoads : OpRewritePattern<vector::GatherOp> {
         scf::YieldOp::create(b, loc, result);
       };
 
-      result =
-          rewriter
-              .create<scf::IfOp>(loc, condition, /*thenBuilder=*/loadBuilder,
+      result = scf::IfOp::create(rewriter, loc, condition,
+                                 /*thenBuilder=*/loadBuilder,
                                  /*elseBuilder=*/passThruBuilder)
-              .getResult(0);
+                   .getResult(0);
     }
 
     rewriter.replaceOp(op, result);
diff --git a/mlir/lib/Dialect/Vector/Transforms/LowerVectorTransfer.cpp b/mlir/lib/Dialect/Vector/Transforms/LowerVectorTransfer.cpp
index e9109322ed3d8..4baeb1145d25b 100644
--- a/mlir/lib/Dialect/Vector/Transforms/LowerVectorTransfer.cpp
+++ b/mlir/lib/Dialect/Vector/Transforms/LowerVectorTransfer.cpp
@@ -142,8 +142,8 @@ struct TransferReadPermutationLowering
 
     // Transpose result of transfer_read.
     SmallVector<int64_t> transposePerm(permutation.begin(), permutation.end());
-    return rewriter
-        .create<vector::TransposeOp>(op.getLoc(), newRead, transposePerm)
+    return vector::TransposeOp::create(rewriter, op.getLoc(), newRead,
+                                       transposePerm)
         .getResult();
   }
 };
@@ -371,8 +371,8 @@ struct TransferOpReduceRank
         rewriter, op.getLoc(), newReadType, op.getBase(), op.getIndices(),
         AffineMapAttr::get(newMap), op.getPadding(), op.getMask(),
         newInBoundsAttr);
-    return rewriter
-        .create<vector::BroadcastOp>(op.getLoc(), originalVecType, newRead)
+    return vector::BroadcastOp::create(rewriter, op.getLoc(), originalVecType,
+                                       newRead)
         .getVector();
   }
 };
diff --git a/mlir/lib/Dialect/Vector/Transforms/VectorDistribute.cpp b/mlir/lib/Dialect/Vector/Transforms/VectorDistribute.cpp
index 58e94ea00189f..bb0f339a26e43 100644
--- a/mlir/lib/Dialect/Vector/Transforms/VectorDistribute.cpp
+++ b/mlir/lib/Dialect/Vector/Transforms/VectorDistribute.cpp
@@ -451,10 +451,9 @@ struct WarpOpTransferWrite : public WarpDistributionPattern {
     }
     SmallVector<Value> delinearized;
     if (map.getNumResults() > 1) {
-      delinearized = rewriter
-                         .create<mlir::affine::AffineDelinearizeIndexOp>(
-                             newWarpOp.getLoc(), newWarpOp.getLaneid(),
-                             delinearizedIdSizes)
+      delinearized = mlir::affine::AffineDelinearizeIndexOp::create(
+                         rewriter, newWarpOp.getLoc(), newWarpOp.getLaneid(),
+                         delinearizedIdSizes)
                          .getResults();
     } else {
       // If there is only one map result, we can elide the delinearization
@@ -1538,19 +1537,18 @@ struct WarpOpInsertScalar : public WarpDistributionPattern {
         arith::CmpIOp::create(rewriter, loc, arith::CmpIPredicate::eq,
                               newWarpOp.getLaneid(), insertingLane);
     Value newResult =
-        rewriter
-            .create<scf::IfOp>(
-                loc, isInsertingLane,
-                /*thenBuilder=*/
-                [&](OpBuilder &builder, Location loc) {
-                  Value newInsert = vector::InsertOp::create(
-                      builder, loc, newSource, distributedVec, newPos);
-                  scf::YieldOp::create(builder, loc, newInsert);
-                },
-                /*elseBuilder=*/
-                [&](OpBuilder &builder, Location loc) {
-                  scf::YieldOp::create(builder, loc, distributedVec);
-                })
+        scf::IfOp::create(
+            rewriter, loc, isInsertingLane,
+            /*thenBuilder=*/
+            [&](OpBuilder &builder, Location loc) {
+              Value newInsert = vector::InsertOp::create(
+                  builder, loc, newSource, distributedVec, newPos);
+              scf::YieldOp::create(builder, loc, newInsert);
+            },
+            /*elseBuilder=*/
+            [&](OpBuilder &builder, Location loc) {
+              scf::YieldOp::create(builder, loc, distributedVec);
+            })
             .getResult(0);
     rewriter.replaceAllUsesWith(newWarpOp->getResult(operandNumber), newResult);
     return success();
@@ -1661,10 +1659,9 @@ struct WarpOpInsert : public WarpDistributionPattern {
       auto nonInsertingBuilder = [&](OpBuilder &builder, Location loc) {
         scf::YieldOp::create(builder, loc, distributedDest);
       };
-      newResult = rewriter
-                      .create<scf::IfOp>(loc, isInsertingLane,
-                                         /*thenBuilder=*/insertingBuilder,
-                                         /*elseBuilder=*/nonInsertingBuilder)
+      newResult = scf::IfOp::create(rewriter, loc, isInsertingLane,
+                                    /*thenBuilder=*/insertingBuilder,
+                                    /*elseBuilder=*/nonInsertingBuilder)
                       .getResult(0);
     }
 
diff --git a/mlir/lib/Dialect/Vector/Transforms/VectorDropLeadUnitDim.cpp b/mlir/lib/Dialect/Vector/Transforms/VectorDropLeadUnitDim.cpp
index 73388a5da3e4f..9889d7f221fe6 100644
--- a/mlir/lib/Dialect/Vector/Transforms/VectorDropLeadUnitDim.cpp
+++ b/mlir/lib/Dialect/Vector/Transforms/VectorDropLeadUnitDim.cpp
@@ -466,9 +466,9 @@ mlir::vector::castAwayContractionLeadingOneDim(vector::ContractionOp contractOp,
     newOp = mlir::vector::maskOperation(rewriter, newOp, newMask);
   }
 
-  return rewriter
-      .create<vector::BroadcastOp>(loc, contractOp->getResultTypes()[0],
-                                   newOp->getResults()[0])
+  return vector::BroadcastOp::create(rewriter, loc,
+                                     contractOp->getResultTypes()[0],
+                                     newOp->getResults()[0])
       .getResult();
 }
 
diff --git a/mlir/lib/Dialect/Vector/Transforms/VectorEmulateNarrowType.cpp b/mlir/lib/Dialect/Vector/Transforms/VectorEmulat...
[truncated]

@makslevental makslevental requested a review from chencha3 July 25, 2025 17:18

github-actions bot commented Jul 25, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.

@makslevental makslevental force-pushed the makslevental/update-create-32n branch from e91af0b to 828bd57 Compare July 25, 2025 17:21
@makslevental makslevental requested a review from kuhar July 25, 2025 17:47
@makslevental makslevental merged commit fcbcfe4 into llvm:main Jul 25, 2025
9 checks passed
@makslevental makslevental deleted the makslevental/update-create-32n branch July 25, 2025 18:50
jpienaar added a commit that referenced this pull request Jul 26, 2025
jpienaar added a commit that referenced this pull request Jul 26, 2025
The update is most likely not what someone wants when looking at the
blame for one of these lines.

Taken from git history:

```
9e7834c Maksim Levental [mlir][NFC] update `mlir/lib` create APIs (35/n) (#150708)
284a5c2 Maksim Levental [mlir][NFC] update `mlir/examples` create APIs (31/n) (#150652)
c090ed5 Maksim Levental [mlir][NFC] update `mlir/Dialect` create APIs (33/n) (#150659)
fcbcfe4 Maksim Levental [mlir][NFC] update `mlir/Dialect` create APIs (32/n) (#150657)
258daf5 Maksim Levental [mlir][NFC] update `mlir` create APIs (34/n) (#150660)
c610b24 Maksim Levental [mlir][NFC] update `mlir/Dialect` create APIs (27/n) (#150638)
b58ad36 Maksim Levental [mlir][NFC] update `mlir/Dialect` create APIs (30/n) (#150643)
258d04c Maksim Levental [mlir][NFC] update `mlir/Dialect` create APIs (28/n) (#150641)
a6bf40d Maksim Levental [mlir][NFC] update `mlir/Dialect` create APIs (29/n) (#150642)
dcfc853 Maksim Levental [mlir][NFC] update `flang/lib` create APIs (12/n) (#149914)
3f74334 Maksim Levental [mlir][NFC] update `flang` create APIs (13/n) (#149913)
a636b7b Maksim Levental [mlir][NFC] update `mlir/Dialect` create APIs (18/n) (#149925)
75aa706 Maksim Levental [mlir][NFC] update `mlir/Dialect` create APIs (17/n) (#149924)
2f53125 Maksim Levental [mlir][NFC] update `mlir/Dialect` create APIs (15/n) (#149921)
967626b Maksim Levental [mlir][NFC] update `mlir/Dialect` create APIs (14/n) (#149920)
588845d Maksim Levental [mlir][NFC] update `mlir/Dialect` create APIs (20/n) (#149927)
b043492 Maksim Levental [mlir][NFC] update `Conversion` create APIs (4/n) (#149879)
8fff238 Maksim Levental [mlir][NFC] update `mlir/Dialect` create APIs (23/n) (#149930)
38976a0 Maksim Levental [mlir][NFC] update `Conversion` create APIs (7/n) (#149889)
eaa67a3 Maksim Levental [mlir][NFC] update `Conversion` create APIs (5/n) (#149887)
b0312be Maksim Levental [mlir][NFC] update `mlir/Dialect` create APIs (19/n) (#149926)
2736fbd Maksim Levental [mlir][NFC] update `mlir/lib` create APIs (26/n) (#149933)
4ae9fdc Maksim Levental [mlir][NFC] update `Conversion` create APIs (6/n) (#149888)
f904cdd Maksim Levental [mlir][NFC] update `mlir/Dialect` create APIs (24/n) (#149931)
972ac59 Maksim Levental [mlir][NFC] update `mlir/Dialect` create APIs (21/n) (#149928)
7b78796 Maksim Levental [mlir][NFC] update `mlir/Dialect` create APIs (25/n) (#149932)
c3823af Maksim Levental [mlir][NFC] update `mlir/Dialect` create APIs (22/n) (#149929)
dce6679 Maksim Levental [mlir][NFC] update `mlir/Dialect` create APIs (16/n) (#149922)
9844ba6 Maksim Levental [mlir][NFC] update `flang/Optimizer/Builder` create APIs (9/n) (#149917)
5547c6c Maksim Levental [mlir][NFC] update `flang/Optimizer/Builder/Runtime` create APIs (10/n) (#149916)
a3a007a Maksim Levental [mlir][NFC] update `flang/Lower` create APIs (8/n) (#149912)
46f6df0 Maksim Levental [mlir][NFC] update `flang/Optimizer/Transforms` create APIs (11/n)  (#149915)
b7e332d Maksim Levental [mlir][NFC] update `include` create APIs (3/n) (#149687)
6056f94 Maksim Levental [mlir][NFC] update LLVM create APIs (2/n) (#149667)
906295b Maksim Levental [mlir] update affine+arith create APIs (1/n) (#149656)
```
mahesh-attarde pushed a commit to mahesh-attarde/llvm-project that referenced this pull request Jul 28, 2025
mahesh-attarde pushed a commit to mahesh-attarde/llvm-project that referenced this pull request Jul 28, 2025
ajaden-codes pushed a commit to Jaddyen/llvm-project that referenced this pull request Jul 28, 2025