[mlir][Transforms] Dialect conversion: Add missing "else if" branch #101148

Merged

Conversation

Member

@matthias-springer matthias-springer commented Jul 30, 2024

This code got lost in #97213 and there was no test for it. Add it back with an MLIR test.

When a pattern is run without a type converter, we can assume that the new block argument types of a signature conversion are legal. That's because they were specified by the user. This won't work for 1->N conversions due to limitations in the dialect conversion infrastructure, so the original FIXME has to stay in place.

Member

@kuhar left a comment

I verified this fixes #97213 (comment).

@matthias-springer matthias-springer merged commit 8fc3294 into main Jul 30, 2024
8 checks passed
@matthias-springer matthias-springer deleted the users/matthias-springer/pass_through_new_type branch July 30, 2024 14:36
@@ -1328,15 +1328,19 @@ Block *ConversionPatternRewriterImpl::applySignatureConversion(
mapping.map(origArg, argMat);
appendRewrite<ReplaceBlockArgRewrite>(block, origArg);
Contributor

Another thing I noticed while debugging around here: line 1329 adds to the map and appends a pending block argument rewrite, and so does line 1349 below. Also, the builtin.unrealized_conversion_cast generated on line 1325 is overridden by the one generated on line 1345 in some cases (with the mapping changed as well). I don't think that was intended. It might just be a harmless issue, but it might be hiding a bug.

Member Author

There are two mapping steps: origArg -> argMat -> targetMat. Note that the mapping is not overwritten here. In the second step, we map argMat, not origArg.

And we also generate two unrealized_conversion_cast ops. Such casts cannot be folded unless type(origArg) == type(targetMat).

It was actually on purpose that two casts in a row are generated here. The first one is an argument materialization and the second one is a target materialization. Depending on the configuration of the type converter, we may generate custom ops instead of unrealized_conversion_cast ops.

@akuegel
Member

akuegel commented Jul 31, 2024

We got a failing test where it now fails to convert a type. It seems we are exactly hitting the case where no converter is specified. The test continues to work if I apply this patch:

--- a/mlir/lib/Transforms/Utils/DialectConversion.cpp
+++ b/mlir/lib/Transforms/Utils/DialectConversion.cpp
@@ -1325,11 +1325,11 @@ Block *ConversionPatternRewriterImpl::ap
     Value argMat = buildUnresolvedMaterialization(
         MaterializationKind::Argument, newBlock, newBlock->begin(),
         origArg.getLoc(), /*inputs=*/replArgs, origArgType, converter);
-    mapping.map(origArg, argMat);
-    appendRewrite<ReplaceBlockArgRewrite>(block, origArg);
 
     Type legalOutputType;
     if (converter) {
+      mapping.map(origArg, argMat);
+      appendRewrite<ReplaceBlockArgRewrite>(block, origArg);
       legalOutputType = converter->convertType(origArgType);
     } else if (replArgs.size() == 1) {
       // When there is no type converter, assume that the new block argument
@@ -1340,6 +1340,8 @@ Block *ConversionPatternRewriterImpl::ap
       // case, we currently use the original block argument type (produced by
       // the argument materialization).
       legalOutputType = replArgs[0].getType();
+      mapping.map(origArg, replArgs[0]);
+      appendRewrite<ReplaceBlockArgRewrite>(block, origArg);
     }
     if (legalOutputType && legalOutputType != origArgType) {
       Value targetMat = buildUnresolvedTargetMaterialization(

Does this patch look reasonable? I tried to look at the earlier code that you tried to restore here, and it seems the mapping stuff is still missing, so I tried adding it back.

@matthias-springer
Member Author

That could work, but we should then probably build the argument materialization only when there is a type converter. When there is none, and the number of replArgs is 1, directly take that argument and do not build any materializations. But I'd like to understand first why the current implementation does not work. I don't see any conceptual problem with it.

Can you provide a reproducer that can run with upstream MLIR? We really have to improve our test coverage of the dialect conversion framework.

@matthias-springer
Member Author

Also, what kind of failure are you seeing? Is it a crash?

@akuegel
Member

akuegel commented Jul 31, 2024

No, not a crash, it just does not convert where before it did. One of the failing tests is this one:

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/lite/tests/lower-static-tensor-list.mlir#L317

@matthias-springer
Member Author

I do not have a TensorFlow setup right now. Can you post the previous IR and the new IR? Maybe I can tell from there what's going on.

Also, this is the kind of change that I had in mind: #101318. Can you check if that fixes the test? Even if it does, we still have to understand where the problem is. And add a test to upstream MLIR that can run without TensorFlow.

@akuegel
Member

akuegel commented Jul 31, 2024

Confirmed, your patch also makes sure that the test still passes. Here is the IR in the failing case:

module { 
  func.func @tensorlistWhileLoop(%arg0: tensor<2x3xf32>) -> tensor<2x3xf32> { 
    %cst = arith.constant dense<3> : tensor<1xi32> 
    %cst_0 = arith.constant dense<0> : tensor<i32> 
    %cst_1 = arith.constant dense<-1> : tensor<i32> 
    %0 = "tf.TensorListFromTensor"(%arg0, %cst) : (tensor<2x3xf32>, tensor<1xi32>) -> tensor<!tf_type.variant<tensor<3xf32>>> 
    %1:2 = "tf.While"(%cst_0, %0) <{body = @tensorlistWhileBody, cond = @tensorlistWhileCond, is_stateless = false}> {T = ["tfdtype$DT_INT32", "tfdtype$DT_VARIANT"]} : (tensor<i32>, tensor<!tf_type.variant<tensor<3xf32>>>) -> (tensor<i32>, tensor<!tf_type.variant<tensor<*xf32>>>) 
    %2 = "tf.TensorListStack"(%1#1, %cst_1) : (tensor<!tf_type.variant<tensor<*xf32>>>, tensor<i32>) -> tensor<2x3xf32> 
    return %2 : tensor<2x3xf32> 
  }
} 

Before:

func.func @tensorlistWhileLoop(%arg0: tensor<2x3xf32>) -> tensor<2x3xf32> {
  %cst = arith.constant dense<3> : tensor<1xi32>
  %cst_0 = arith.constant dense<0> : tensor<i32>
  %cst_1 = arith.constant dense<-1> : tensor<i32>
  %0 = "tf.TensorListFromTensor"(%arg0, %cst) : (tensor<2x3xf32>, tensor<1xi32>) -> tensor<!tf_type.variant<tensor<3xf32>>>
  %1:2 = "tf.While"(%cst_0, %0) {T = ["tfdtype$DT_INT32", "tfdtype$DT_VARIANT"], body = @tensorlistWhileBody, cond = @tensorlistWhileCond, is_stateless = false} : (tensor<i32>, tensor<!tf_type.variant<tensor<3xf32>>>) -> (tensor<i32>, tensor<!tf_type.variant<tensor<*xf32>>>)
  %2 = "tf.TensorListStack"(%1#1, %cst_1) : (tensor<!tf_type.variant<tensor<*xf32>>>, tensor<i32>) -> tensor<2x3xf32>
  func.return %2 : tensor<2x3xf32>
}

As far as I can tell, it stays the same. But the test was written with the expectation that it changes the type of the tf.While to (tensor<i32>, tensor<2x3xf32>) -> (tensor<i32>, tensor<2x3xf32>).

@akuegel
Member

akuegel commented Jul 31, 2024

Full expected output:

func.func @tensorlistWhileLoop(%arg0: tensor<2x3xf32>) -> tensor<2x3xf32> { 
  %cst = arith.constant dense<3> : tensor<1xi32> 
  %cst_0 = arith.constant dense<0> : tensor<i32> 
  %cst_1 = arith.constant dense<-1> : tensor<i32> 
  %0:2 = "tf.While"(%cst_0, %arg0) <{body = @tensorlistWhileBody, cond = @tensorlistWhileCond, is_stateless = false}> : (tensor<i32>, tensor<2x3xf32>) -> (tensor<i32>, tensor<2x3xf32>) 
  return %0#1 : tensor<2x3xf32> 
} 

@matthias-springer
Member Author

matthias-springer commented Jul 31, 2024

Which one of these listings is the one produced with top-of-tree MLIR?

Are you saying that #101318 makes the test pass, but it passes by coincidence?

@matthias-springer
Member Author

In the first two listings, it looks like the tf.While op was not modified. At least all the types stay the same. I'm wondering if the pattern that changes the op is successfully applied. (-debug would show that.)

@akuegel
Member

akuegel commented Jul 31, 2024

Agree, in the first two listings it looks like nothing was modified. This starts happening with the revision from the PR. The second listing is with your #101318 patched in, but this also matches the behavior from before this PR.
Not sure whether I used the right debug flag (I passed the flag -log-actions-to=- to tf-opt). Here is the debug output for the case where the test fails and nothing is changed.

[thread tf-opt] begins (no breakpoint) Action `pass-execution` running `LowerStaticTensorListPass` on Operation `builtin.module` (module {...})` 
           2: [thread tf-opt] completed `pass-execution` 
           3: module { 
           4: } 
           5:  
           6: // ----- 
           7: [thread tf-opt] begins (no breakpoint) Action `pass-execution` running `LowerStaticTensorListPass` on Operation `builtin.module` (module {...})` 
           8: [thread tf-opt] begins (no breakpoint) Action `apply-pattern pattern: mlir::(anonymous namespace)::ConvertTensorListFromTensor (%0 = "tf.TensorListFromTensor"(%arg0, %cst) : (tensor<2x3xf32>, tensor<1xi32>) -> tensor<!tf_type.variant<tensor<3xf32>>>)` 
           9: [thread tf-opt] completed `apply-pattern` 
          10: [thread tf-opt] begins (no breakpoint) Action `apply-pattern pattern: mlir::(anonymous namespace)::ConvertWhile (%1:2 = "tf.While"(%cst_0, %0) <{body = @tensorlistWhileBody, cond = @tensorlistWhileCond, is_stateless = false}> {T = ["tfdtype$DT_INT32", "tfdtype$DT_VARIANT"]} : (tensor<i32>, tensor<!tf_type.variant<tensor<3xf32>>>) -> (tensor<i32>, tensor<!tf_type.variant<tensor<*xf32>>>))` 
          11: [thread tf-opt] completed `apply-pattern` 
          12: [thread tf-opt] begins (no breakpoint) Action `apply-pattern pattern: mlir::(anonymous namespace)::ConvertTensorListStack (%3 = "tf.TensorListStack"(%2#1, %cst_1) : (tensor<!tf_type.variant<tensor<*xf32>>>, tensor<i32>) -> tensor<2x3xf32>)` 
          13: [thread tf-opt] completed `apply-pattern` 
          14: [thread tf-opt] begins (no breakpoint) Action `apply-pattern pattern: mlir::(anonymous namespace)::ConvertTensorListLength (%2 = "tf.TensorListLength"(<<UNKNOWN SSA VALUE>>) : (tensor<!tf_type.variant>) -> tensor<i32>)` 
          15: [thread tf-opt] completed `apply-pattern` 
          16: [thread tf-opt] begins (no breakpoint) Action `apply-pattern pattern: mlir::(anonymous namespace)::ConvertIdentity (%7 = "tf.Identity"(<<UNKNOWN SSA VALUE>>) : (tensor<!tf_type.variant>) -> tensor<!tf_type.variant>)` 
          17: [thread tf-opt] completed `apply-pattern` 
          18: [thread tf-opt] completed `pass-execution` 

@akuegel
Member

akuegel commented Jul 31, 2024

Also, we have other internal failing tests; it seems they would also be fixed by your #101318.

@matthias-springer
Member Author

The second listing is with your #101318 patched in, but this also matches the behavior from before this PR.

Sorry, I'm still confused.

So the expected behavior is this: tf.While has result types (tensor<i32>, tensor<2x3xf32>).

With #101148, the result type is (tensor<i32>, tensor<!tf_type.variant<tensor<*xf32>>>).

With #101318 (on top of #101148), the result type is also (tensor<i32>, tensor<!tf_type.variant<tensor<*xf32>>>), but the test is passing now. (But it shouldn't pass.)

Is that accurate?

With -debug (which is a flag of mlir-opt), we would see output such as this:

//===-------------------------------------------===//
Legalizing operation : 'test.signature_conversion_no_converter'(0x62ba6364c0c0) {
  * Fold {
  } -> FAILURE : unable to fold

  * Pattern : 'test.signature_conversion_no_converter -> ()' {
Trying to match "(anonymous namespace)::TestTestSignatureConversionNoConverter"

So we can see what patterns are being applied. I was hoping that there is a similar flag for tf-opt. Without this output, we have no clue what's going on.

@akuegel
Member

akuegel commented Jul 31, 2024

Sorry, I think my comment was confusing. When I mentioned that the test passes, I meant that it produces the same IR as before #101148. This is with #101318 on top of #101148.
What I pasted as the failing case was without your new patch.

@matthias-springer
Member Author

matthias-springer commented Jul 31, 2024

Also we have other internal failing tests, it seems they would also be fixed with your #101318

So is this a solution that would fix everything?

We still need some kind of reproducer to understand what's going on here (and to prevent this from breaking again in the future; assuming that it's actually broken). Can you copy together some patterns and IR (only the part that's needed to reproduce this) from your code base that triggers the issue? Maybe not the tf.While one but another one that's a bit simpler.

I'd like to put a test in TestPatterns.cpp and test-legalize-type-conversion.mlir.

@akuegel
Member

akuegel commented Jul 31, 2024

I only spot-checked some failures, and those would be fixed. I can double-check whether it fixes everything.

I got the -debug flag working, forgot to pass --copt=-UNDEBUG. Here is some relevant part of the output:

//===-------------------------------------------===//
Legalizing operation : 'tf.TensorListFromTensor'(0x71153effd9b0) {
  %7 = "tf.TensorListFromTensor"(%arg4, %4) : (tensor<2x3xf32>, tensor<1xi32>) -> tensor<!tf_type.variant<tensor<3xf32>>>

  * Fold {
ImplicitTypeIDRegistry::lookupOrInsert(mlir::OpTrait::HasRecursiveMemoryEffects<Empty>)
  } -> FAILURE : unable to fold

  * Pattern : 'tf.TensorListFromTensor -> ()' {
Trying to match "mlir::(anonymous namespace)::ConvertTensorListFromTensor"
    ** Replace : 'tf.TensorListFromTensor'(0x71153effd9b0)
"mlir::(anonymous namespace)::ConvertTensorListFromTensor" result 1
  } -> SUCCESS : pattern applied successfully
// *** IR Dump After Pattern Application ***
func.func @tensorlistWhileLoop(%arg0: tensor<2x3xf32>) -> tensor<2x3xf32> {
  %cst = arith.constant dense<3> : tensor<1xi32>
  %cst_0 = arith.constant dense<0> : tensor<i32>
  %cst_1 = arith.constant dense<-1> : tensor<i32>
  %0 = "tf.TensorListFromTensor"(%arg0, %cst) : (tensor<2x3xf32>, tensor<1xi32>) -> tensor<!tf_type.variant<tensor<3xf32>>>
  %1:2 = "tf.While"(%cst_0, %0) <{body = @tensorlistWhileBody, cond = @tensorlistWhileCond, is_stateless = false}> {T = ["tfdtype$DT_INT32", "tfdtype$DT_VARIANT"]} : (tensor<i32>, tensor<!tf_type.variant<tensor<3xf32>>>) -> (tensor<i32>, tensor<!tf_type.variant<tensor<*xf32>>>)
  %2 = "tf.TensorListStack"(%1#1, %cst_1) : (tensor<!tf_type.variant<tensor<*xf32>>>, tensor<i32>) -> tensor<2x3xf32>
  return %2 : tensor<2x3xf32>
}


} -> SUCCESS
//===-------------------------------------------===//

//===-------------------------------------------===//
Legalizing operation : 'tf.While'(0x71153effbb40) {
  %8:2 = "tf.While"(%5, %7) <{body = @tensorlistWhileBody, cond = @tensorlistWhileCond, is_stateless = false}> {T = ["tfdtype$DT_INT32", "tfdtype$DT_VARIANT"]} : (tensor<i32>, tensor<!tf_type.variant<tensor<3xf32>>>) -> (tensor<i32>, tensor<!tf_type.variant<tensor<*xf32>>>)

  * Fold {
  } -> FAILURE : unable to fold

  * Pattern : 'tf.While -> ()' {
Trying to match "mlir::(anonymous namespace)::ConvertWhile"
ImplicitTypeIDRegistry::lookupOrInsert(mlir::TF::detail::WhileOpGenericAdaptorBase::Properties)
    ** Insert  : 'tf.While'(0x71153effbd00)
    ** Insert Block into : 'func.func'(0x71153ee4bd80)
    ** Insert Block into : 'func.func'(0x71153ee4be00)
    ** Replace : 'tf.While'(0x71153effbb40)
"mlir::(anonymous namespace)::ConvertWhile" result 1

    //===-------------------------------------------===//
    Legalizing operation : 'func.func'(0x71153ee4bd80) {
    } -> SUCCESS : operation marked legal by the target
    //===-------------------------------------------===//

    //===-------------------------------------------===//
    Legalizing operation : 'func.func'(0x71153ee4be00) {
    } -> SUCCESS : operation marked legal by the target
    //===-------------------------------------------===//

    //===-------------------------------------------===//
    Legalizing operation : 'func.func'(0x71153ee4bd80) {
    } -> SUCCESS : operation marked legal by the target
    //===-------------------------------------------===//

    //===-------------------------------------------===//
    Legalizing operation : 'func.func'(0x71153ee4be00) {
    } -> SUCCESS : operation marked legal by the target
    //===-------------------------------------------===//

    //===-------------------------------------------===//
    Legalizing operation : 'tf.While'(0x71153effbd00) {
      %12:2 = "tf.While"(%9, %arg4) <{body = @tensorlistWhileBody, cond = @tensorlistWhileCond, is_stateless = false}> : (tensor<i32>, tensor<2x3xf32>) -> (tensor<i32>, tensor<2x3xf32>)

    } -> SUCCESS : operation marked legal by the target
    //===-------------------------------------------===//
  } -> SUCCESS : pattern applied successfully
// *** IR Dump After Pattern Application ***
func.func @tensorlistWhileLoop(%arg0: tensor<2x3xf32>) -> tensor<2x3xf32> {
  %cst = arith.constant dense<3> : tensor<1xi32>
  %cst_0 = arith.constant dense<0> : tensor<i32>
  %cst_1 = arith.constant dense<-1> : tensor<i32>
  %0 = "tf.TensorListFromTensor"(%arg0, %cst) : (tensor<2x3xf32>, tensor<1xi32>) -> tensor<!tf_type.variant<tensor<3xf32>>>
  %1:2 = "tf.While"(%cst_0, %arg0) <{body = @tensorlistWhileBody, cond = @tensorlistWhileCond, is_stateless = false}> : (tensor<i32>, tensor<2x3xf32>) -> (tensor<i32>, tensor<2x3xf32>)
  %2:2 = "tf.While"(%cst_0, %0) <{body = @tensorlistWhileBody, cond = @tensorlistWhileCond, is_stateless = false}> {T = ["tfdtype$DT_INT32", "tfdtype$DT_VARIANT"]} : (tensor<i32>, tensor<!tf_type.variant<tensor<3xf32>>>) -> (tensor<i32>, tensor<!tf_type.variant<tensor<*xf32>>>)
  %3 = "tf.TensorListStack"(%2#1, %cst_1) : (tensor<!tf_type.variant<tensor<*xf32>>>, tensor<i32>) -> tensor<2x3xf32>
  return %3 : tensor<2x3xf32>
}


} -> SUCCESS
//===-------------------------------------------===//

//===-------------------------------------------===//
Legalizing operation : 'tf.TensorListStack'(0x71153ef98e80) {
  %14 = "tf.TensorListStack"(%13#1, %10) : (tensor<!tf_type.variant<tensor<*xf32>>>, tensor<i32>) -> tensor<2x3xf32>

  * Fold {
  } -> FAILURE : unable to fold

  * Pattern : 'tf.TensorListStack -> ()' {
Trying to match "mlir::(anonymous namespace)::ConvertTensorListStack"
    ** Replace : 'tf.TensorListStack'(0x71153ef98e80)
"mlir::(anonymous namespace)::ConvertTensorListStack" result 1
  } -> SUCCESS : pattern applied successfully
// *** IR Dump After Pattern Application ***
func.func @tensorlistWhileLoop(%arg0: tensor<2x3xf32>) -> tensor<2x3xf32> {
  %cst = arith.constant dense<3> : tensor<1xi32>
  %cst_0 = arith.constant dense<0> : tensor<i32>
  %cst_1 = arith.constant dense<-1> : tensor<i32>
  %0 = "tf.TensorListFromTensor"(%arg0, %cst) : (tensor<2x3xf32>, tensor<1xi32>) -> tensor<!tf_type.variant<tensor<3xf32>>>
  %1:2 = "tf.While"(%cst_0, %arg0) <{body = @tensorlistWhileBody, cond = @tensorlistWhileCond, is_stateless = false}> : (tensor<i32>, tensor<2x3xf32>) -> (tensor<i32>, tensor<2x3xf32>)
  %2:2 = "tf.While"(%cst_0, %0) <{body = @tensorlistWhileBody, cond = @tensorlistWhileCond, is_stateless = false}> {T = ["tfdtype$DT_INT32", "tfdtype$DT_VARIANT"]} : (tensor<i32>, tensor<!tf_type.variant<tensor<3xf32>>>) -> (tensor<i32>, tensor<!tf_type.variant<tensor<*xf32>>>)
  %3 = "tf.TensorListStack"(%2#1, %cst_1) : (tensor<!tf_type.variant<tensor<*xf32>>>, tensor<i32>) -> tensor<2x3xf32>
  return %3 : tensor<2x3xf32>
}


} -> SUCCESS
//===-------------------------------------------===//

@akuegel
Member

akuegel commented Jul 31, 2024

Confirmed, all tests would be fixed with #101318

@akuegel
Member

akuegel commented Jul 31, 2024

Passing over to @anlunx to provide a reproducer.

@matthias-springer
Member Author

Yes, that's the log that I was looking for. This is with #101318? It would be interesting to see what's happening with top-of-tree MLIR.

@akuegel
Member

akuegel commented Jul 31, 2024

This was without #101318, so the case where the test fails. I notice there are now two tf.While ops, one of them being the one we actually want, but somehow in the final IR we end up with only the other one.

@matthias-springer
Member Author

It is possible that the pattern succeeds at first, but then it rolls back. So we should take a look at the entire output.

@MaskRay
Member

MaskRay commented Jul 31, 2024

(Apologies, I am upgrading internal LLVM without knowing the MLIR stuff going on.)

Would #101318 be merged soon? If not and this PR introduced regression, could this PR be reverted temporarily?

@matthias-springer
Member Author

We don't know yet if this is a regression or incorrect API usage in your code base. So far, I can't see any problem with this commit.

(1) If this is a regression: Merge #101318, what's still missing is a test case.

(2) If this is not a regression: Fix broken code.

I know this is blocking your LLVM integrate, but this commit fixes a bug that blocked another project's (IREE) LLVM integrate, so rolling back is not ideal.

Ideally, I'd like to merge #101318 only with a test case. Our test coverage of the dialect conversion framework in MLIR is not good, and that's the reason why we have breakages like this one. So I'd say the next step is to write an MLIR-only reproducer. Then I can merge this first thing tomorrow morning. Waiting for @akuegel or @anlunx...

@MaskRay
Member

MaskRay commented Jul 31, 2024

We don't know yet if this is a regression or incorrect API usage in your code base. So far, I can't see any problem with this commit.

(1) If this is a regression: Merge #101318, what's still missing is a test case.

(2) If this is not a regression: Fix broken code.

I know this is blocking your LLVM integrate, but this commit fixes a bug that blocked another project's (IREE) LLVM integrate, so rolling back is not ideal.

Ideally, I'd like to merge #101318 only with a test case. Our test coverage of the dialect conversion framework in MLIR is not good, and that's the reason why we have breakages like this one. So I'd say the next step is to write an MLIR-only reproducer. Then I can merge this first thing tomorrow morning. Waiting for @akuegel or @anlunx...

The patch will fix the failures I observed.

I hope that @akuegel or @anlunx will provide a test case later but I am going to merge #101318 now...

@akuegel
Member

akuegel commented Aug 1, 2024

@matthias-springer It turns out it was incorrect usage of the API. A workaround was added for the issue you were fixing in this PR. When removing the workaround, the tests pass. Sorry for wasting your time, and thanks for all the help.
