
iree-compile doesn't support converting "onnx.Unsqueeze" #16339

Open
phoebesv opened this issue Feb 7, 2024 · 7 comments
Labels
bug 🐞 Something isn't working integrations/onnx ONNX integration work

Comments

@phoebesv
Contributor

phoebesv commented Feb 7, 2024

I obtained model.onnx from MLPerf, then ran:

  1. iree-import-onnx model.onnx -o model.mlir
     The import succeeds.
  2. iree-compile --iree-input-type=onnx model.mlir --compile-to=input
     This fails with:

error: failed to legalize operation 'torch.operator' that was explicitly marked illegal
    %393 = torch.operator "onnx.Unsqueeze"(%arg1) {torch.onnx.axes = [1 : si64]} : (!torch.vtensor<[?,384],si64>) -> !torch.vtensor<[?,1,384],si64>
           ^

So torch.operator "onnx.Unsqueeze" fails to convert in step 2.

On the tracking page (nod-ai/SHARK-ModelDev#215), the Unsqueeze operation is marked as completed.

What component(s) does this issue relate to?

MLIR, Compiler

Version information

protobuf>=3.20.2 in /usr/local/lib/python3.10/dist-packages (from onnx>=1.15.0->iree-compiler[onnx]) (3.20.3)
installed iree-compiler-20240206.793

@phoebesv phoebesv added the bug 🐞 Something isn't working label Feb 7, 2024
@stellaraccident
Collaborator

stellaraccident commented Feb 7, 2024

@rsuderman looks like this was marked completed but may need more work.

@ScottTodd
Member

cc @saienduri, llvm/torch-mlir#2601 added a conversion for unsqueeze

@renxida
Contributor

renxida commented Feb 13, 2024

Picking this up. This interferes with one of the test cases I'm making for https://github.com/nod-ai/SHARK-TestSuite/

@ScottTodd
Member

Would you happen to have the model.mlir on hand for this? I just hit a similar error where my program has torch.onnx_meta.ir_version = 7 but the lowering pattern requires IR version 13 or higher. Unsqueeze has versions in 1, 11, 13, and 21: https://github.com/onnx/onnx/blob/main/docs/Operators.md#Unsqueeze.

@ScottTodd
Member

Here is the program (test case) I'm using:

module {
  func.func @test_loop13_seq(%arg0: !torch.vtensor<[],si64>, %arg1: !torch.vtensor<[],i1>, %arg2: !torch.list<vtensor<[],f32>>) -> !torch.list<vtensor<[],f32>> attributes {torch.onnx_meta.ir_version = 7 : si64, torch.onnx_meta.opset_version = 13 : si64, torch.onnx_meta.producer_name = "backend-test", torch.onnx_meta.producer_version = ""} {
    %none = torch.constant.none
    %0 = torch.operator "onnx.Loop"(%arg0, %arg1, %arg2) : (!torch.vtensor<[],si64>, !torch.vtensor<[],i1>, !torch.list<vtensor<[],f32>>) -> !torch.list<vtensor<[],f32>> {
    ^bb0(%arg3: !torch.vtensor<[],si64>, %arg4: !torch.vtensor<[],i1>, %arg5: !torch.list<vtensor<[],f32>>):
      %1 = torch.operator "onnx.Identity"(%arg4) : (!torch.vtensor<[],i1>) -> !torch.vtensor<[],i1> 
      %2 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<[1.000000e+00, 2.000000e+00, 3.000000e+00, 4.000000e+00, 5.000000e+00]> : tensor<5xf32>} : () -> !torch.vtensor<[5],f32> 
      %3 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<1> : tensor<si64>} : () -> !torch.vtensor<[],si64> 
      %4 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<0> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
      %5 = torch.operator "onnx.Add"(%arg3, %3) : (!torch.vtensor<[],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
      %6 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<0> : tensor<si64>} : () -> !torch.vtensor<[],si64> 
      %7 = torch.operator "onnx.Unsqueeze"(%5, %6) : (!torch.vtensor<[],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[1],si64> 
      %8 = torch.operator "onnx.Slice"(%2, %4, %7) : (!torch.vtensor<[5],f32>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[?],f32> 
      %9 = torch.operator "onnx.SequenceInsert"(%arg5, %8) : (!torch.list<vtensor<[],f32>>, !torch.vtensor<[?],f32>) -> !torch.list<vtensor<[],f32>> 
      torch.operator_terminator %1, %9 : !torch.vtensor<[],i1>, !torch.list<vtensor<[],f32>>
    }
    return %0 : !torch.list<vtensor<[],f32>>
  }
}

Note that Identity has a similar issue - the pattern requires a minimum version of 14, but the op has versions in 21, 19, 16, 14, 13, 1 (https://github.com/onnx/onnx/blob/main/docs/Operators.md#Identity)

@renxida
Contributor

renxida commented Mar 1, 2024

@ScottTodd last time I checked with @stellaraccident, it's fine to just edit DefaultDomainQtoZ.cpp and lower the pattern's "since" version to 1.

It's been a while since I looked at this, but I think I worked around it by giving my tiny test model a higher opset, whereas the actual fix belongs in DefaultDomainQtoZ.cpp. I later went through and fixed a bunch of these minimum versions, but this one slipped past.

@phoebesv
Contributor Author

phoebesv commented Mar 7, 2024

@ScottTodd The ONNX model was obtained from https://zenodo.org/records/3733910, thanks.
