[mlir][IR][NFC] Move free-standing functions to MemRefType
#123465
Conversation
@llvm/pr-subscribers-mlir-memref @llvm/pr-subscribers-mlir-gpu

Author: Matthias Springer (matthias-springer)

Changes: Turn free-standing `MemRefType`-related helper functions in `BuiltinTypes.h` into member functions.

Patch is 56.49 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/123465.diff

38 Files Affected:
diff --git a/mlir/include/mlir/Dialect/XeGPU/IR/XeGPUOps.td b/mlir/include/mlir/Dialect/XeGPU/IR/XeGPUOps.td
index 5910aa3f7f2dae..f5cf3dad75d9c2 100644
--- a/mlir/include/mlir/Dialect/XeGPU/IR/XeGPUOps.td
+++ b/mlir/include/mlir/Dialect/XeGPU/IR/XeGPUOps.td
@@ -198,7 +198,7 @@ def XeGPU_CreateNdDescOp: XeGPU_Op<"create_nd_tdesc", [Pure, ViewLikeOpInterface
auto memrefType = llvm::dyn_cast<MemRefType>(getSourceType());
assert(memrefType && "Incorrect use of getStaticStrides");
- auto [strides, offset] = getStridesAndOffset(memrefType);
+ auto [strides, offset] = memrefType.getStridesAndOffset();
// reuse the storage of ConstStridesAttr since strides from
// memref is not persistant
setConstStrides(strides);
diff --git a/mlir/include/mlir/IR/BuiltinTypes.h b/mlir/include/mlir/IR/BuiltinTypes.h
index 19c5361124aacb..df1e02732617d2 100644
--- a/mlir/include/mlir/IR/BuiltinTypes.h
+++ b/mlir/include/mlir/IR/BuiltinTypes.h
@@ -409,33 +409,6 @@ inline bool TensorType::classof(Type type) {
// Type Utilities
//===----------------------------------------------------------------------===//
-/// Returns the strides of the MemRef if the layout map is in strided form.
-/// MemRefs with a layout map in strided form include:
-/// 1. empty or identity layout map, in which case the stride information is
-/// the canonical form computed from sizes;
-/// 2. a StridedLayoutAttr layout;
-/// 3. any other layout that be converted into a single affine map layout of
-/// the form `K + k0 * d0 + ... kn * dn`, where K and ki's are constants or
-/// symbols.
-///
-/// A stride specification is a list of integer values that are either static
-/// or dynamic (encoded with ShapedType::kDynamic). Strides encode
-/// the distance in the number of elements between successive entries along a
-/// particular dimension.
-LogicalResult getStridesAndOffset(MemRefType t,
- SmallVectorImpl<int64_t> &strides,
- int64_t &offset);
-
-/// Wrapper around getStridesAndOffset(MemRefType, SmallVectorImpl<int64_t>,
-/// int64_t) that will assert if the logical result is not succeeded.
-std::pair<SmallVector<int64_t>, int64_t> getStridesAndOffset(MemRefType t);
-
-/// Return a version of `t` with identity layout if it can be determined
-/// statically that the layout is the canonical contiguous strided layout.
-/// Otherwise pass `t`'s layout into `simplifyAffineMap` and return a copy of
-/// `t` with simplified layout.
-MemRefType canonicalizeStridedLayout(MemRefType t);
-
/// Given MemRef `sizes` that are either static or dynamic, returns the
/// canonical "contiguous" strides AffineExpr. Strides are multiplicative and
/// once a dynamic dimension is encountered, all canonical strides become
@@ -458,24 +431,6 @@ AffineExpr makeCanonicalStridedLayoutExpr(ArrayRef<int64_t> sizes,
/// where `exprs` is {d0, d1, .., d_(sizes.size()-1)}
AffineExpr makeCanonicalStridedLayoutExpr(ArrayRef<int64_t> sizes,
MLIRContext *context);
-
-/// Return "true" if the layout for `t` is compatible with strided semantics.
-bool isStrided(MemRefType t);
-
-/// Return "true" if the last dimension of the given type has a static unit
-/// stride. Also return "true" for types with no strides.
-bool isLastMemrefDimUnitStride(MemRefType type);
-
-/// Return "true" if the last N dimensions of the given type are contiguous.
-///
-/// Examples:
-/// - memref<5x4x3x2xi8, strided<[24, 6, 2, 1]> is contiguous when
-/// considering both _all_ and _only_ the trailing 3 dims,
-/// - memref<5x4x3x2xi8, strided<[48, 6, 2, 1]> is _only_ contiguous when
-/// considering the trailing 3 dims.
-///
-bool trailingNDimsContiguous(MemRefType type, int64_t n);
-
} // namespace mlir
#endif // MLIR_IR_BUILTINTYPES_H
diff --git a/mlir/include/mlir/IR/BuiltinTypes.td b/mlir/include/mlir/IR/BuiltinTypes.td
index 4f09d2e41e7ceb..e5a2ae81da0c9a 100644
--- a/mlir/include/mlir/IR/BuiltinTypes.td
+++ b/mlir/include/mlir/IR/BuiltinTypes.td
@@ -808,10 +808,52 @@ def Builtin_MemRef : Builtin_Type<"MemRef", "memref", [
/// Arguments that are passed into the builder must outlive the builder.
class Builder;
+ /// Return "true" if the last N dimensions are contiguous.
+ ///
+ /// Examples:
+ /// - memref<5x4x3x2xi8, strided<[24, 6, 2, 1]> is contiguous when
+ /// considering both _all_ and _only_ the trailing 3 dims,
+ /// - memref<5x4x3x2xi8, strided<[48, 6, 2, 1]> is _only_ contiguous when
+ /// considering the trailing 3 dims.
+ ///
+ bool areTrailingDimsContiguous(int64_t n);
+
+ /// Return a version of this type with identity layout if it can be
+ /// determined statically that the layout is the canonical contiguous
+ /// strided layout. Otherwise pass the layout into `simplifyAffineMap`
+ /// and return a copy of this type with simplified layout.
+ MemRefType canonicalizeStridedLayout();
+
/// [deprecated] Returns the memory space in old raw integer representation.
/// New `Attribute getMemorySpace()` method should be used instead.
unsigned getMemorySpaceAsInt() const;
+ /// Returns the strides of the MemRef if the layout map is in strided form.
+ /// MemRefs with a layout map in strided form include:
+ /// 1. empty or identity layout map, in which case the stride information
+ /// is the canonical form computed from sizes;
+ /// 2. a StridedLayoutAttr layout;
+ /// 3. any other layout that be converted into a single affine map layout
+ /// of the form `K + k0 * d0 + ... kn * dn`, where K and ki's are
+ /// constants or symbols.
+ ///
+ /// A stride specification is a list of integer values that are either
+ /// static or dynamic (encoded with ShapedType::kDynamic). Strides encode
+ /// the distance in the number of elements between successive entries along
+ /// a particular dimension.
+ LogicalResult getStridesAndOffset(SmallVectorImpl<int64_t> &strides,
+ int64_t &offset);
+
+ /// Wrapper around getStridesAndOffset(SmallVectorImpl<int64_t>, int64_t)
+ /// that will assert if the logical result is not succeeded.
+ std::pair<SmallVector<int64_t>, int64_t> getStridesAndOffset();
+
+ /// Return "true" if the layout is compatible with strided semantics.
+ bool isStrided();
+
+ /// Return "true" if the last dimension has a static unit stride. Also
+ /// return "true" for types with no strides.
+ bool isLastDimUnitStride();
}];
let skipDefaultBuilders = 1;
let genVerifyDecl = 1;
diff --git a/mlir/include/mlir/IR/CommonTypeConstraints.td b/mlir/include/mlir/IR/CommonTypeConstraints.td
index 6f52195c1d7c92..64270469cad78d 100644
--- a/mlir/include/mlir/IR/CommonTypeConstraints.td
+++ b/mlir/include/mlir/IR/CommonTypeConstraints.td
@@ -820,7 +820,7 @@ class StaticShapeMemRefOf<list<Type> allowedTypes> :
def AnyStaticShapeMemRef : StaticShapeMemRefOf<[AnyType]>;
// For a MemRefType, verify that it has strides.
-def HasStridesPred : CPred<[{ isStrided(::llvm::cast<::mlir::MemRefType>($_self)) }]>;
+def HasStridesPred : CPred<[{ ::llvm::cast<::mlir::MemRefType>($_self).isStrided() }]>;
class StridedMemRefOf<list<Type> allowedTypes> :
ConfinedType<MemRefOf<allowedTypes>, [HasStridesPred],
diff --git a/mlir/lib/CAPI/IR/BuiltinTypes.cpp b/mlir/lib/CAPI/IR/BuiltinTypes.cpp
index 250e4a6bbf8dfd..26feaf79149b23 100644
--- a/mlir/lib/CAPI/IR/BuiltinTypes.cpp
+++ b/mlir/lib/CAPI/IR/BuiltinTypes.cpp
@@ -514,7 +514,7 @@ MlirLogicalResult mlirMemRefTypeGetStridesAndOffset(MlirType type,
int64_t *offset) {
MemRefType memrefType = llvm::cast<MemRefType>(unwrap(type));
SmallVector<int64_t> strides_;
- if (failed(getStridesAndOffset(memrefType, strides_, *offset)))
+ if (failed(memrefType.getStridesAndOffset(strides_, *offset)))
return mlirLogicalResultFailure();
(void)std::copy(strides_.begin(), strides_.end(), strides);
diff --git a/mlir/lib/Conversion/AMDGPUToROCDL/AMDGPUToROCDL.cpp b/mlir/lib/Conversion/AMDGPUToROCDL/AMDGPUToROCDL.cpp
index 1564e417a7a48e..9c69da9eb0c6e9 100644
--- a/mlir/lib/Conversion/AMDGPUToROCDL/AMDGPUToROCDL.cpp
+++ b/mlir/lib/Conversion/AMDGPUToROCDL/AMDGPUToROCDL.cpp
@@ -192,7 +192,7 @@ struct RawBufferOpLowering : public ConvertOpToLLVMPattern<GpuOp> {
// Construct buffer descriptor from memref, attributes
int64_t offset = 0;
SmallVector<int64_t, 5> strides;
- if (failed(getStridesAndOffset(memrefType, strides, offset)))
+ if (failed(memrefType.getStridesAndOffset(strides, offset)))
return gpuOp.emitOpError("Can't lower non-stride-offset memrefs");
MemRefDescriptor memrefDescriptor(memref);
diff --git a/mlir/lib/Conversion/LLVMCommon/MemRefBuilder.cpp b/mlir/lib/Conversion/LLVMCommon/MemRefBuilder.cpp
index 19c3ba1f950202..63f99eb744a83b 100644
--- a/mlir/lib/Conversion/LLVMCommon/MemRefBuilder.cpp
+++ b/mlir/lib/Conversion/LLVMCommon/MemRefBuilder.cpp
@@ -52,7 +52,7 @@ MemRefDescriptor MemRefDescriptor::fromStaticShape(
assert(type.hasStaticShape() && "unexpected dynamic shape");
// Extract all strides and offsets and verify they are static.
- auto [strides, offset] = getStridesAndOffset(type);
+ auto [strides, offset] = type.getStridesAndOffset();
assert(!ShapedType::isDynamic(offset) && "expected static offset");
assert(!llvm::any_of(strides, ShapedType::isDynamic) &&
"expected static strides");
@@ -193,7 +193,7 @@ Value MemRefDescriptor::bufferPtr(OpBuilder &builder, Location loc,
MemRefType type) {
// When we convert to LLVM, the input memref must have been normalized
// beforehand. Hence, this call is guaranteed to work.
- auto [strides, offsetCst] = getStridesAndOffset(type);
+ auto [strides, offsetCst] = type.getStridesAndOffset();
Value ptr = alignedPtr(builder, loc);
// For zero offsets, we already have the base pointer.
diff --git a/mlir/lib/Conversion/LLVMCommon/Pattern.cpp b/mlir/lib/Conversion/LLVMCommon/Pattern.cpp
index d551506485a454..a47a2872ceb073 100644
--- a/mlir/lib/Conversion/LLVMCommon/Pattern.cpp
+++ b/mlir/lib/Conversion/LLVMCommon/Pattern.cpp
@@ -62,7 +62,7 @@ Value ConvertToLLVMPattern::getStridedElementPtr(
Location loc, MemRefType type, Value memRefDesc, ValueRange indices,
ConversionPatternRewriter &rewriter) const {
- auto [strides, offset] = getStridesAndOffset(type);
+ auto [strides, offset] = type.getStridesAndOffset();
MemRefDescriptor memRefDescriptor(memRefDesc);
// Use a canonical representation of the start address so that later
diff --git a/mlir/lib/Conversion/LLVMCommon/TypeConverter.cpp b/mlir/lib/Conversion/LLVMCommon/TypeConverter.cpp
index 64bdb248dff430..0df6502ff4c1fc 100644
--- a/mlir/lib/Conversion/LLVMCommon/TypeConverter.cpp
+++ b/mlir/lib/Conversion/LLVMCommon/TypeConverter.cpp
@@ -486,7 +486,7 @@ LLVMTypeConverter::convertFunctionTypeCWrapper(FunctionType type) const {
SmallVector<Type, 5>
LLVMTypeConverter::getMemRefDescriptorFields(MemRefType type,
bool unpackAggregates) const {
- if (!isStrided(type)) {
+ if (!type.isStrided()) {
emitError(
UnknownLoc::get(type.getContext()),
"conversion to strided form failed either due to non-strided layout "
@@ -604,7 +604,7 @@ bool LLVMTypeConverter::canConvertToBarePtr(BaseMemRefType type) {
int64_t offset = 0;
SmallVector<int64_t, 4> strides;
- if (failed(getStridesAndOffset(memrefTy, strides, offset)))
+ if (failed(memrefTy.getStridesAndOffset(strides, offset)))
return false;
for (int64_t stride : strides)
diff --git a/mlir/lib/Conversion/MemRefToLLVM/MemRefToLLVM.cpp b/mlir/lib/Conversion/MemRefToLLVM/MemRefToLLVM.cpp
index 86f687d7f2636e..f7542b8b3bc5c7 100644
--- a/mlir/lib/Conversion/MemRefToLLVM/MemRefToLLVM.cpp
+++ b/mlir/lib/Conversion/MemRefToLLVM/MemRefToLLVM.cpp
@@ -1136,7 +1136,7 @@ struct MemRefReshapeOpLowering
// Extract the offset and strides from the type.
int64_t offset;
SmallVector<int64_t> strides;
- if (failed(getStridesAndOffset(targetMemRefType, strides, offset)))
+ if (failed(targetMemRefType.getStridesAndOffset(strides, offset)))
return rewriter.notifyMatchFailure(
reshapeOp, "failed to get stride and offset exprs");
@@ -1451,7 +1451,7 @@ struct ViewOpLowering : public ConvertOpToLLVMPattern<memref::ViewOp> {
int64_t offset;
SmallVector<int64_t, 4> strides;
- auto successStrides = getStridesAndOffset(viewMemRefType, strides, offset);
+ auto successStrides = viewMemRefType.getStridesAndOffset(strides, offset);
if (failed(successStrides))
return viewOp.emitWarning("cannot cast to non-strided shape"), failure();
assert(offset == 0 && "expected offset to be 0");
@@ -1560,7 +1560,7 @@ struct AtomicRMWOpLowering : public LoadStoreOpLowering<memref::AtomicRMWOp> {
auto memRefType = atomicOp.getMemRefType();
SmallVector<int64_t> strides;
int64_t offset;
- if (failed(getStridesAndOffset(memRefType, strides, offset)))
+ if (failed(memRefType.getStridesAndOffset(strides, offset)))
return failure();
auto dataPtr =
getStridedElementPtr(atomicOp.getLoc(), memRefType, adaptor.getMemref(),
diff --git a/mlir/lib/Conversion/VectorToGPU/VectorToGPU.cpp b/mlir/lib/Conversion/VectorToGPU/VectorToGPU.cpp
index 5b4414d67fdac0..eaefe9e3857933 100644
--- a/mlir/lib/Conversion/VectorToGPU/VectorToGPU.cpp
+++ b/mlir/lib/Conversion/VectorToGPU/VectorToGPU.cpp
@@ -132,7 +132,7 @@ static std::optional<int64_t> getStaticallyKnownRowStride(ShapedType type) {
return 0;
int64_t offset = 0;
SmallVector<int64_t, 2> strides;
- if (failed(getStridesAndOffset(memrefType, strides, offset)) ||
+ if (failed(memrefType.getStridesAndOffset(strides, offset)) ||
strides.back() != 1)
return std::nullopt;
int64_t stride = strides[strides.size() - 2];
diff --git a/mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp b/mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
index d688d8e2ab6588..a1e21cb524bd9a 100644
--- a/mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
+++ b/mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
@@ -91,7 +91,7 @@ LogicalResult getMemRefAlignment(const LLVMTypeConverter &typeConverter,
// Check if the last stride is non-unit and has a valid memory space.
static LogicalResult isMemRefTypeSupported(MemRefType memRefType,
const LLVMTypeConverter &converter) {
- if (!isLastMemrefDimUnitStride(memRefType))
+ if (!memRefType.isLastDimUnitStride())
return failure();
if (failed(converter.getMemRefAddressSpace(memRefType)))
return failure();
@@ -1374,7 +1374,7 @@ static std::optional<SmallVector<int64_t, 4>>
computeContiguousStrides(MemRefType memRefType) {
int64_t offset;
SmallVector<int64_t, 4> strides;
- if (failed(getStridesAndOffset(memRefType, strides, offset)))
+ if (failed(memRefType.getStridesAndOffset(strides, offset)))
return std::nullopt;
if (!strides.empty() && strides.back() != 1)
return std::nullopt;
diff --git a/mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp b/mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp
index 01bc65c841e94c..22bf27d229ce5d 100644
--- a/mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp
+++ b/mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp
@@ -1650,7 +1650,7 @@ struct TransferOp1dConversion : public VectorToSCFPattern<OpTy> {
return failure();
if (xferOp.getVectorType().getRank() != 1)
return failure();
- if (map.isMinorIdentity() && isLastMemrefDimUnitStride(memRefType))
+ if (map.isMinorIdentity() && memRefType.isLastDimUnitStride())
return failure(); // Handled by ConvertVectorToLLVM
// Loop bounds, step, state...
diff --git a/mlir/lib/Conversion/VectorToXeGPU/VectorToXeGPU.cpp b/mlir/lib/Conversion/VectorToXeGPU/VectorToXeGPU.cpp
index 8041bdf7da19b3..d3229d2e912966 100644
--- a/mlir/lib/Conversion/VectorToXeGPU/VectorToXeGPU.cpp
+++ b/mlir/lib/Conversion/VectorToXeGPU/VectorToXeGPU.cpp
@@ -76,8 +76,7 @@ static LogicalResult transferPreconditions(PatternRewriter &rewriter,
// Validate further transfer op semantics.
SmallVector<int64_t> strides;
int64_t offset;
- if (failed(getStridesAndOffset(srcTy, strides, offset)) ||
- strides.back() != 1)
+ if (failed(srcTy.getStridesAndOffset(strides, offset)) || strides.back() != 1)
return rewriter.notifyMatchFailure(
xferOp, "Buffer must be contiguous in the innermost dimension");
@@ -105,7 +104,7 @@ createNdDescriptor(PatternRewriter &rewriter, Location loc,
xegpu::TensorDescType descType, TypedValue<MemRefType> src,
Operation::operand_range offsets) {
MemRefType srcTy = src.getType();
- auto [strides, offset] = getStridesAndOffset(srcTy);
+ auto [strides, offset] = srcTy.getStridesAndOffset();
xegpu::CreateNdDescOp ndDesc;
if (srcTy.hasStaticShape()) {
diff --git a/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp b/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp
index 492e4781f57810..dfd5d7e212f2f7 100644
--- a/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp
+++ b/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp
@@ -129,7 +129,7 @@ static bool staticallyOutOfBounds(OpType op) {
return false;
int64_t offset;
SmallVector<int64_t> strides;
- if (failed(getStridesAndOffset(bufferType, strides, offset)))
+ if (failed(bufferType.getStridesAndOffset(strides, offset)))
return false;
int64_t result = offset + op.getIndexOffset().value_or(0);
if (op.getSgprOffset()) {
diff --git a/mlir/lib/Dialect/AMX/Transforms/LegalizeForLLVMExport.cpp b/mlir/lib/Dialect/AMX/Transforms/LegalizeForLLVMExport.cpp
index 4eac371d4c1ae4..4cb777b03b1963 100644
--- a/mlir/lib/Dialect/AMX/Transforms/LegalizeForLLVMExport.cpp
+++ b/mlir/lib/Dialect/AMX/Transforms/LegalizeForLLVMExport.cpp
@@ -53,8 +53,7 @@ FailureOr<Value> getStride(ConversionPatternRewriter &rewriter,
unsigned bytes = width >> 3;
int64_t offset;
SmallVector<int64_t, 4> strides;
- if (failed(getStridesAndOffset(mType, strides, offset)) ||
- strides.back() != 1)
+ if (failed(mType.getStridesAndOffset(strides, offset)) || strides.back() != 1)
return failure();
if (strides[preLast] == ShapedType::kDynamic) {
// Dynamic stride needs code to compute the stride at runtime.
diff --git a/mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp b/mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp
index f1841b860ff81a..6be55a1d282240 100644
--- a/mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp
+++ b/mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp
@@ -42,8 +42,8 @@ FailureOr<Value> mlir::bufferization::castOrReallocMemRefValue(
auto isGuaranteedCastCompatible = [](MemRefType source, MemRefType target) {
int64_t sourceOffset, targetOffset;
SmallVector<int64_t, 4> sourceStrides, targetStrides;
- if (failed(getStridesAndOffset(source, sourceStrides, sourceOffset)) ||
- failed(getStridesAndOffset(target, targetStrides, targetOffset)))
+ if (failed(source.getStridesAndOffset(sourceStrides, sourceOffset)) ||
+ failed(target.getStridesAndOffset(targetStrides, targetOffset)))
return false;
auto dynamicToStatic = [](int64_t a, int64_t b) {
return ShapedType::isDynamic(a) && !ShapedType::isDynamic(b);
diff --git a/mlir/lib/Dialect/Bufferization/Transforms/BufferResultsToOutParams.cpp b/mlir/lib/Dialect/Bufferization/Transforms/BufferResultsToOutParams.cpp
index 2502744cb3f580..ce0f112dc2dd22 100644
--- a/mlir/lib/Dialect/Bufferization/Transforms/BufferResultsToOutParams.cpp
+++ b/mlir/lib/Dialect/Bufferization/Transforms/BufferResultsToOutParams.cpp
@@ -29,7 +29,7 @@ using MemCpyFn = bufferization::BufferResultsToOutParamsOpts::MemCpyFn;
static bool hasFullyDynamicLayoutMap(MemRefType type) {
int64_t offset;
SmallVector<int64_t, 4> strides;
- if (failed(getStridesAndOffset(type, strides, offset)))
+ if (failed(type.getStridesAndOffset(strides, offset)))
return false;
if (!llvm::all_of(strides, ShapedType::isDynamic))
return false;
diff --git a/mlir/lib/Dialect/GPU/IR/GPUDialect.cpp b/mlir/li...
[truncated]
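To make the stride semantics documented in the hunks above concrete, here is a minimal standalone sketch; it is not part of the patch, assumes the post-PR member-function API, and uses illustrative names throughout:

```cpp
#include <cassert>

#include "llvm/ADT/SmallVector.h"
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/MLIRContext.h"

using namespace mlir;

void stridesAndOffsetExample() {
  MLIRContext ctx;
  auto f32 = Float32Type::get(&ctx);

  // Identity layout: the strides are the canonical form computed from the
  // sizes, e.g. memref<5x4x3x2xf32> has strides {24, 6, 2, 1} and offset 0.
  auto identity = MemRefType::get({5, 4, 3, 2}, f32);
  auto [strides, offset] = identity.getStridesAndOffset();
  assert(strides.back() == 1 && offset == 0);

  // The LogicalResult overload reports failure instead of asserting when the
  // layout cannot be expressed in strided form.
  SmallVector<int64_t> s;
  int64_t off;
  if (succeeded(identity.getStridesAndOffset(s, off))) {
    // s == {24, 6, 2, 1}, off == 0.
  }
}
```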
The refactoring makes sense to me but, please, wait for more approvals. Thanks!
LGTM, thanks!
It would be good for more folks to take a look, so please wait a few days before landing.
Makes sense to me. Thanks for implementing this
LGTM, thanks!
Force-pushed from b68d260 to 339ca34.
LLVM Buildbot has detected a new failure on a builder. Full details are available at: https://lab.llvm.org/buildbot/#/builders/35/builds/6452

LLVM Buildbot has detected a new failure on a builder. Full details are available at: https://lab.llvm.org/buildbot/#/builders/24/builds/4429
There are no llvm reverts/cherry-picks. Bump llvm to llvm/llvm-project@95d993a. Bumps stablehlo to openxla/stablehlo@c27ba67. torch-mlir carries forward fixes from llvm/torch-mlir#3982. Additional forward fixes at iree-org/torch-mlir@fd34bc5. Some C++ API changes to `getStridesAndOffset` from llvm/llvm-project#123465.

---------

Signed-off-by: Nirvedh Meshram <[email protected]>
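As a rough guide to what those downstream `getStridesAndOffset` changes look like, here is a hedged before/after sketch of a typical call site; the surrounding function and variable names are illustrative, not taken from any of the affected projects:

```cpp
#include "mlir/IR/BuiltinTypes.h"

using namespace mlir;

// Illustrative call site; `memrefType` is assumed to be a valid MemRefType.
void migrateCallSite(MemRefType memrefType) {
  // Before this PR: free-standing helpers declared in BuiltinTypes.h.
  //   auto [strides, offset] = getStridesAndOffset(memrefType);
  //   bool strided = isStrided(memrefType);
  //   bool unitStride = isLastMemrefDimUnitStride(memrefType);

  // After this PR: member functions on MemRefType.
  auto [strides, offset] = memrefType.getStridesAndOffset();
  bool strided = memrefType.isStrided();
  bool unitStride = memrefType.isLastDimUnitStride();
  (void)strides; (void)offset; (void)strided; (void)unitStride;
}
```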
Turn free-standing `MemRefType`-related helper functions in `BuiltinTypes.h` into member functions.
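The renamed contiguity helpers can be exercised the same way. A minimal sketch built from the two layouts in the doc-comment examples above (again assuming the post-PR API; names are illustrative):

```cpp
#include "mlir/IR/BuiltinAttributes.h"
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/MLIRContext.h"

using namespace mlir;

bool contiguityExample() {
  MLIRContext ctx;
  auto i8 = IntegerType::get(&ctx, 8);

  // memref<5x4x3x2xi8, strided<[24, 6, 2, 1]>>: canonical strides, so the
  // trailing 3 dims (and all 4) are contiguous.
  auto dense = MemRefType::get(
      {5, 4, 3, 2}, i8,
      StridedLayoutAttr::get(&ctx, /*offset=*/0, {24, 6, 2, 1}));

  // memref<5x4x3x2xi8, strided<[48, 6, 2, 1]>>: the outermost stride is
  // padded (48 instead of 24), so only the trailing 3 dims are contiguous.
  auto padded = MemRefType::get(
      {5, 4, 3, 2}, i8,
      StridedLayoutAttr::get(&ctx, /*offset=*/0, {48, 6, 2, 1}));

  return dense.areTrailingDimsContiguous(3) &&    // true
         padded.areTrailingDimsContiguous(3) &&   // true
         !padded.areTrailingDimsContiguous(4) &&  // false for padded layout
         padded.isLastDimUnitStride();            // true: innermost stride 1
}
```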