Convert ONNX models during import and run import in parallel. #90

Merged: 4 commits into nod-ai:main on Mar 5, 2024

Conversation

ScottTodd (Member) commented on Mar 4, 2024

Fixes #85

This converts ONNX models to version 17+ as part of importing them, based on the expectation that users will also be able to convert their input programs using the ONNX Version Converter. To make importing test cases faster, this also changes the import script to run in parallel by default.
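For reference, here is a minimal sketch of how the two changes could fit together, using onnx.version_converter for the upgrade and a process pool for parallelism. The directory layout, `TARGET_OPSET` constant, and `import_one` helper are illustrative assumptions, not the actual import script; only `onnx.load`/`onnx.save` and `version_converter.convert_version` are real ONNX APIs.

```python
# Hedged sketch: the "test_cases" layout and import_one() are hypothetical.
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

import onnx
from onnx import version_converter

TARGET_OPSET = 17  # the minimum version this PR converts models up to


def import_one(model_path: Path) -> None:
    model = onnx.load(str(model_path))
    # Find the default-domain ("ai.onnx") opset the model was exported with.
    opset = next(
        (op.version for op in model.opset_import if op.domain in ("", "ai.onnx")),
        None,
    )
    if opset is not None and opset < TARGET_OPSET:
        model = version_converter.convert_version(model, TARGET_OPSET)
    onnx.save(model, str(model_path.with_suffix(".v17.onnx")))
    # ...then hand the upgraded model to the MLIR importer...


if __name__ == "__main__":
    paths = sorted(Path("test_cases").rglob("model.onnx"))
    # Parallel by default, matching the import-script change in this PR.
    with ProcessPoolExecutor() as pool:
        list(pool.map(import_one, paths))
```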

Comment on lines 1 to 8
```diff
 module {
-  func.func @test_upsample_nearest(%arg0: !torch.vtensor<[1,1,2,2],f32>, %arg1: !torch.vtensor<[4],f32>) -> !torch.vtensor<[1,1,4,6],f32> attributes {torch.onnx_meta.ir_version = 4 : si64, torch.onnx_meta.opset_version = 9 : si64, torch.onnx_meta.producer_name = "backend-test", torch.onnx_meta.producer_version = ""} {
+  func.func @test_upsample_nearest(%arg0: !torch.vtensor<[1,1,2,2],f32>, %arg1: !torch.vtensor<[4],f32>) -> !torch.vtensor<[1,1,4,6],f32> attributes {torch.onnx_meta.ir_version = 4 : si64, torch.onnx_meta.opset_version = 17 : si64, torch.onnx_meta.producer_name = "backend-test", torch.onnx_meta.producer_version = ""} {
     %none = torch.constant.none
-    %0 = torch.operator "onnx.Upsample"(%arg0, %arg1) {torch.onnx.mode = "nearest"} : (!torch.vtensor<[1,1,2,2],f32>, !torch.vtensor<[4],f32>) -> !torch.vtensor<[1,1,4,6],f32>
-    return %0 : !torch.vtensor<[1,1,4,6],f32>
+    %0 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<[0.000000e+00, 0.000000e+00, 0.000000e+00, 0.000000e+00, 1.000000e+00, 1.000000e+00, 1.000000e+00, 1.000000e+00]> : tensor<8xf32>} : () -> !torch.vtensor<[8],f32>
+    %1 = torch.operator "onnx.Resize"(%arg0, %0, %arg1) {torch.onnx.mode = "nearest"} : (!torch.vtensor<[1,1,2,2],f32>, !torch.vtensor<[8],f32>, !torch.vtensor<[4],f32>) -> !torch.vtensor<[1,1,4,6],f32>
+    return %1 : !torch.vtensor<[1,1,4,6],f32>
   }
 }
```
ScottTodd (Member, PR author) commented:
Here's an example where the converter did more than just change the torch.onnx_meta.opset_version.
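To see the converter make that rewrite directly (Upsample was deprecated in opset 10 in favor of Resize), a hedged one-off check; the model path is illustrative, and the printed op lists show the expected before/after based on the diff above.

```python
import onnx
from onnx import version_converter

# Path is hypothetical; any opset-9 model using Upsample shows the same effect.
model = onnx.load("test_upsample_nearest/model.onnx")
print([node.op_type for node in model.graph.node])      # e.g. ['Upsample']
converted = version_converter.convert_version(model, 17)
print([node.op_type for node in converted.graph.node])  # e.g. ['Constant', 'Resize']
```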

ScottTodd merged commit 0005329 into nod-ai:main on Mar 5, 2024 (1 check passed).
ScottTodd deleted the iree-import-updates branch on March 5, 2024.
renxida pushed a commit that referenced this pull request on Jul 18, 2024.
Development

Successfully merging this pull request may close issue #85: Consider standardizing ONNX versions for generated test cases.