iree-compile doesn't support converting "onnx.Unsqueeze" #16339
Comments
@rsuderman looks like this was marked completed but may need more work
cc @saienduri, llvm/torch-mlir#2601 added a conversion for unsqueeze
Picking this up. This interferes with one of the test cases I'm making for https://github.com/nod-ai/SHARK-TestSuite/
Would you happen to have the |
Here is the program (test case) I'm using:

```mlir
module {
  func.func @test_loop13_seq(%arg0: !torch.vtensor<[],si64>, %arg1: !torch.vtensor<[],i1>, %arg2: !torch.list<vtensor<[],f32>>) -> !torch.list<vtensor<[],f32>> attributes {torch.onnx_meta.ir_version = 7 : si64, torch.onnx_meta.opset_version = 13 : si64, torch.onnx_meta.producer_name = "backend-test", torch.onnx_meta.producer_version = ""} {
    %none = torch.constant.none
    %0 = torch.operator "onnx.Loop"(%arg0, %arg1, %arg2) : (!torch.vtensor<[],si64>, !torch.vtensor<[],i1>, !torch.list<vtensor<[],f32>>) -> !torch.list<vtensor<[],f32>> {
    ^bb0(%arg3: !torch.vtensor<[],si64>, %arg4: !torch.vtensor<[],i1>, %arg5: !torch.list<vtensor<[],f32>>):
      %1 = torch.operator "onnx.Identity"(%arg4) : (!torch.vtensor<[],i1>) -> !torch.vtensor<[],i1>
      %2 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<[1.000000e+00, 2.000000e+00, 3.000000e+00, 4.000000e+00, 5.000000e+00]> : tensor<5xf32>} : () -> !torch.vtensor<[5],f32>
      %3 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<1> : tensor<si64>} : () -> !torch.vtensor<[],si64>
      %4 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<0> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
      %5 = torch.operator "onnx.Add"(%arg3, %3) : (!torch.vtensor<[],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64>
      %6 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<0> : tensor<si64>} : () -> !torch.vtensor<[],si64>
      %7 = torch.operator "onnx.Unsqueeze"(%5, %6) : (!torch.vtensor<[],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[1],si64>
      %8 = torch.operator "onnx.Slice"(%2, %4, %7) : (!torch.vtensor<[5],f32>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[?],f32>
      %9 = torch.operator "onnx.SequenceInsert"(%arg5, %8) : (!torch.list<vtensor<[],f32>>, !torch.vtensor<[?],f32>) -> !torch.list<vtensor<[],f32>>
      torch.operator_terminator %1, %9 : !torch.vtensor<[],i1>, !torch.list<vtensor<[],f32>>
    }
    return %0 : !torch.list<vtensor<[],f32>>
  }
}
```

Note that |
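For readers unfamiliar with the failing op: in the loop body above, `onnx.Unsqueeze` turns the rank-0 iteration counter (`%5`) into a rank-1 tensor (`%7`) so that `onnx.Slice` can consume it as an `ends` operand. A minimal NumPy sketch of that semantics (an analogy only, not what IREE or torch-mlir actually generate):

```python
import numpy as np

# onnx.Unsqueeze inserts a new size-1 dimension at each axis named by its
# `axes` operand (a tensor input since opset 13, an attribute before that).
# Here a rank-0 scalar becomes a rank-1 tensor, mirroring %5 -> %7 above.
def unsqueeze(data: np.ndarray, axes) -> np.ndarray:
    out = data
    for axis in sorted(int(a) for a in np.atleast_1d(axes)):
        out = np.expand_dims(out, axis)
    return out

scalar_end = np.asarray(5, dtype=np.int64)   # like %5: a [] si64 tensor
ends = unsqueeze(scalar_end, np.asarray(0))  # like %7: shape (1,)
print(ends.shape, ends)                      # (1,) [5]
```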
@ScottTodd last time I checked with @stellaraccident, it's fine to just edit DefaultDomainQtoZ.cpp and set the `since` version to 1. It's been a while since I looked at this, but I think I worked around it by giving my tiny simple model a higher opset, whereas the actual solution is to fix DefaultDomainQtoZ.cpp. I later went through and fixed a bunch of them, but this one sneaked past.
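The "set `since` to 1" remark refers to how torch-mlir's ONNX importer registers each conversion pattern with a minimum opset version: a pattern only matches when the model's `opset_version` is at least that `since` value, so an op present in the model can still fail to legalize if the model was exported at an older opset. A hedged Python sketch of that dispatch idea (all names illustrative, not the real torch-mlir C++ API):

```python
# Illustrative "since version" dispatch, loosely modeled on the way
# torch-mlir gates ONNX conversion patterns by opset version.
from typing import Callable, Dict, List, Tuple

_patterns: Dict[str, List[Tuple[int, Callable]]] = {}

def on_op(name: str, since: int):
    """Register a conversion valid from opset `since` onward (hypothetical)."""
    def register(fn: Callable):
        _patterns.setdefault(name, []).append((since, fn))
        return fn
    return register

@on_op("Unsqueeze", since=13)  # opset 13 moved `axes` from attribute to input
def unsqueeze_v13(node):
    return f"lowered {node} (axes as input)"

def convert(name: str, model_opset: int, node):
    # Pick the newest registered pattern whose since-version the model meets.
    candidates = [(s, f) for s, f in _patterns.get(name, []) if s <= model_opset]
    if not candidates:
        raise RuntimeError(f"failed to legalize operation 'onnx.{name}'")
    _, fn = max(candidates, key=lambda p: p[0])
    return fn(node)
```

Under this model, a graph imported at a low opset finds no matching pattern and fails to convert; either lowering the pattern's `since` value or re-exporting the model at a newer opset makes the op legalize.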
@ScottTodd The ONNX model was obtained from https://zenodo.org/records/3733910. Thanks.
I obtained model.onnx from MLPerf in the first step and ran

```shell
iree-import-onnx model.onnx -o model.mlir
```

which converted successfully. Then I ran

```shell
iree-compile --iree-input-type=onnx model.mlir --compile-to=input
```

and got an error. I found that `torch.operator "onnx.Unsqueeze"` is not supported for conversion in step 2.
On the tracking page (nod-ai/SHARK-ModelDev#215), I noticed that the Unsqueeze operation was marked as completed.
What component(s) does this issue relate to?
MLIR, Compiler
Version information
```
protobuf>=3.20.2 in /usr/local/lib/python3.10/dist-packages (from onnx>=1.15.0->iree-compiler[onnx]) (3.20.3)
```

Installed iree-compiler-20240206.793.