Fix BroadcastInDimOps linalg lowering with compatible dim=1 #2583

Merged 1 commit into openxla:main on Oct 8, 2024

Conversation

FruitClover (Contributor) commented:

For the simple-broadcast case, check that we don't have to expand any dimensions, and skip the conversion to BroadcastOp when only one of a pair of corresponding dims is 1 (i.e. a size-1 input dim maps to a larger output dim).

Error before:

test.mlir:5:8: error: 'linalg.broadcast' op input dim 1 should match init dim 2. input: 1, init: 5
  %0 = stablehlo.broadcast_in_dim %arg, dims = [1, 2, 3] : (tensor<3x1x1xf32>) -> tensor<1x3x5x7xf32>
       ^
test.mlir:5:8: note: see current operation:
%1 = "linalg.broadcast"(%arg0, %0) <{dimensions = array<i64: 0>}> ({
^bb0(%arg1: f32, %arg2: f32):
  "linalg.yield"(%arg1) : (f32) -> ()
}) : (tensor<3x1x1xf32>, tensor<1x3x5x7xf32>) -> tensor<1x3x5x7xf32>
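To make the failing condition concrete, here is a minimal standalone sketch of the compatibility check this fix implies (hypothetical helper name and plain C++ containers, not the actual pass code): linalg.broadcast can only add dimensions, so every input dim mapped by the broadcast dimensions must already equal the corresponding output dim.

    #include <cstdint>
    #include <vector>

    // Hypothetical sketch, not the actual lowering code. A
    // stablehlo.broadcast_in_dim can lower to linalg.broadcast only when it
    // purely *adds* dimensions: every input dim, mapped through
    // broadcastDimensions, must already equal the corresponding output dim.
    // A size-1 input dim paired with a larger output dim (the "input: 1,
    // init: 5" case in the error above) needs expansion, which
    // linalg.broadcast cannot express.
    // Assumes inputShape.size() == broadcastDimensions.size().
    bool isSimpleBroadcast(const std::vector<int64_t> &inputShape,
                           const std::vector<int64_t> &outputShape,
                           const std::vector<int64_t> &broadcastDimensions) {
      for (size_t i = 0; i < inputShape.size(); ++i) {
        if (inputShape[i] != outputShape[broadcastDimensions[i]])
          return false;  // size-1 dim would need expanding, not just placing
      }
      return true;
    }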


google-cla bot commented Oct 8, 2024

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

GleasonK (Member) commented Oct 8, 2024:

To clarify the difference between a simple and a non-simple broadcast: extending rank is OK, but for dims where rank is not extended, the dim sizes must match?

simple: tensor<3xf32> -> tensor<1x2x3xf32>
not simple: tensor<3x1xf32> -> tensor<1x3x3xf32>

GleasonK self-requested a review on October 8, 2024, 20:30.
FruitClover (Contributor, Author) replied:

To clarify the difference between a simple and a non-simple broadcast: extending rank is OK, but for dims where rank is not extended, the dim sizes must match?

simple: tensor<3xf32> -> tensor<1x2x3xf32>
not simple: tensor<3x1xf32> -> tensor<1x3x3xf32>

Basically yes, as I understood from the BroadcastOp semantics.
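As a sanity check, applying the hypothetical isSimpleBroadcast sketch from the description to the two examples quoted above (the dims attributes here are assumed for illustration, since the examples don't state them):

    // Appended to the isSimpleBroadcast sketch above.
    #include <cassert>

    int main() {
      // simple: tensor<3xf32> -> tensor<1x2x3xf32>, assuming dims = [2]:
      // rank is extended, and the one surviving dim (3) already matches.
      assert(isSimpleBroadcast({3}, {1, 2, 3}, {2}));
      // not simple: tensor<3x1xf32> -> tensor<1x3x3xf32>, assuming
      // dims = [1, 2]: the size-1 input dim maps to an output dim of size
      // 3, so it must be expanded, which linalg.broadcast cannot do.
      assert(!isSimpleBroadcast({3, 1}, {1, 3, 3}, {1, 2}));
      return 0;
    }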

GleasonK merged commit 46d1468 into openxla:main on Oct 8, 2024.
10 checks passed