
Question on reduction with empty axes and scalar input #740

Closed
philloooo opened this issue Jul 23, 2024 · 5 comments · Fixed by #741
@philloooo
Contributor

In MLReduceOptions

  • If empty, no dimensions are reduced, and the shape of the output tensor is the same as the shape of the input tensor.

Does that mean it should be a no-op (a.k.a. identity)?

Is that true when you specify empty axes for a scalar input too?

If that's the case, our WPTs are wrong - they're clearly still reducing?

@inexorabletash
Member

inexorabletash commented Jul 23, 2024

Not identity - some of the reductions apply a function to each input value before reduction, or to the reduced value after reduction. So there's no dimension reduction for a scalar, but the pre/post function still applies.

  • L1, L2, Max, Min, Mean, Product, Sum - these should be identity for a scalar input.
  • LogSum - same as log(n)
  • LogSumExp - same as log(exp(n)) (which is identity except for precision loss?)
  • SumSquare - same as pow(n, 2)
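The scalar cases above can be sketched in Python (a hand-rolled illustration, not the spec's algorithm; the op names and (pre, post) pairs follow the list above):

```python
import math

# Each reduction modeled as pre-function -> reduce -> post-function.
# For a scalar (or empty axes), "reduce" touches no dimensions, so
# only the pre/post functions have any effect.
ident = lambda x: x
ops = {
    "reduceSum":       (ident, ident),            # identity on a scalar
    "reduceSumSquare": (lambda x: x * x, ident),  # pre: square
    "reduceLogSum":    (ident, math.log),         # post: log
    "reduceLogSumExp": (math.exp, math.log),      # pre: exp, post: log
}

def reduce_scalar(op, n):
    pre, post = ops[op]
    return post(pre(n))  # reduction over zero dimensions is a no-op

print(reduce_scalar("reduceSumSquare", 2))  # 4
print(reduce_scalar("reduceLogSum", 2))     # log(2) ≈ 0.693
print(reduce_scalar("reduceLogSumExp", 2))  # ≈ 2.0 (identity up to precision)
```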

@philloooo
Contributor Author

So if you specify empty axes for non-scalar input, do these pre/post functions apply too?
builder.reduceSumSquare([2,2,2,2], {axes: []}) -> [4,4,4,4]?

Or do you only apply these functions in the empty-axes-and-scalar-input case?

@inexorabletash
Member

inexorabletash commented Jul 23, 2024

Pre/post functions are always applied.

So for reduceSumSquare, pre function is "square", reduction is "sum", post function is "identity".

reduceSumSquare([2,2,2,2]) → 16 - each input squared, all dimensions reduced
reduceSumSquare([2,2,2,2], {axes: []}) → [4,4,4,4] - each input squared, no dimensions reduced
reduceSumSquare([2]) → 4 - each input squared, all dimensions reduced
reduceSumSquare([2], {axes: []}) → [4] - each input squared, no dimensions reduced
reduceSumSquare(2) → 4 - each input squared, all dimensions reduced (but scalar so that's a no-op)
reduceSumSquare(2, {axes: []}) → 4 - each input squared, no dimensions reduced
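These reduceSumSquare cases can be reproduced with a small NumPy sketch (NumPy standing in for WebNN; `reduce_sum_square` is a hypothetical helper, not a real API):

```python
import numpy as np

def reduce_sum_square(x, axes=None):
    # pre: square each element; reduce: sum over axes; post: identity.
    # axes=() reduces no dimensions; axes=None reduces them all.
    x = np.asarray(x, dtype=float)
    axes = tuple(range(x.ndim)) if axes is None else tuple(axes)
    return np.square(x).sum(axis=axes)

print(reduce_sum_square([2, 2, 2, 2]))           # 16.0
print(reduce_sum_square([2, 2, 2, 2], axes=[]))  # [4. 4. 4. 4.]
print(reduce_sum_square([2]))                    # 4.0
print(reduce_sum_square(2))                      # 4.0 (scalar, all axes: no-op reduce)
print(reduce_sum_square(2, axes=[]))             # 4.0
```

NumPy's `sum(axis=())` already has the "reduce no dimensions" behavior, which is why the empty-axes results keep the input shape.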

For reduceLogSumExp, pre function is "exp", reduction is "sum", post function is "log"

reduceLogSumExp([2,2,2,2]) → 3.386... - each input exp'd, all dimensions reduced, sum log'd
reduceLogSumExp([2,2,2,2], {axes: []}) → [2,2,2,2] - each input exp'd, no dimensions reduced, each value log'd
reduceLogSumExp([2]) → 2 - each input exp'd, all dimensions reduced, sum log'd
reduceLogSumExp([2], {axes: []}) → [2] - each input exp'd, no dimensions reduced, each value log'd
reduceLogSumExp(2) → 2 - each input exp'd, all dimensions reduced (no-op), sum log'd
reduceLogSumExp(2, {axes: []}) → 2 - each input exp'd, no dimensions reduced, each value log'd
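And the reduceLogSumExp cases, same NumPy sketch (again a stand-in, not the WebNN API; for [2,2,2,2] the full reduction is log(4·e²) = log 4 + 2 ≈ 3.386):

```python
import numpy as np

def reduce_log_sum_exp(x, axes=None):
    # pre: exp each element; reduce: sum over axes; post: log.
    x = np.asarray(x, dtype=float)
    axes = tuple(range(x.ndim)) if axes is None else tuple(axes)
    return np.log(np.exp(x).sum(axis=axes))

print(reduce_log_sum_exp([2, 2, 2, 2]))           # ≈ 3.386
print(reduce_log_sum_exp([2, 2, 2, 2], axes=[]))  # ≈ [2. 2. 2. 2.]
print(reduce_log_sum_exp([2]))                    # ≈ 2.0
print(reduce_log_sum_exp(2, axes=[]))             # ≈ 2.0
```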

@inexorabletash
Member

... but all of this implies that the spec text could be improved. Reducing a dimension is different from applying the reduction operation to an input, so writing "no dimensions are reduced" is unclear. And behavior for scalars should be called out, since those don't have a dimension to reduce.

inexorabletash added a commit to inexorabletash/webnn that referenced this issue Jul 24, 2024
Improve the description of the reduction ops:

- Give in-line definition for L1/L2.
- Make "input values" slightly clearer.
- Explicitly note that if the input is a scalar, so is the output.
- State how the reduction function is applied for empty axes.
- Standardize on "reduction operation" not "reduce operation".

Fixes webmachinelearning#740
@inexorabletash inexorabletash self-assigned this Jul 24, 2024
@fdwr
Collaborator

fdwr commented Jul 30, 2024

So there's no dimension reduction for a scalar, but the pre/post function still applies.

👍

Pre/post functions are always applied.

So for reduceSumSquare, pre function is "square", reduction is "sum", post function is "identity".

reduceSumSquare([2,2,2,2]) → 16 - each input squared, all dimensions reduced
reduceSumSquare([2,2,2,2], {axes: []}) → [4,4,4,4] - each input squared, no dimensions reduced
reduceSumSquare([2]) → 4 - each input squared, all dimensions reduced
reduceSumSquare([2], {axes: []}) → [4] - each input squared, no dimensions reduced
reduceSumSquare(2) → 4 - each input squared, all dimensions reduced (but scalar so that's a no-op)
reduceSumSquare(2, {axes: []}) → 4 - each input squared, no dimensions reduced

For reduceLogSumExp, pre function is "exp", reduction is "sum", post function is "log"

reduceLogSumExp([2,2,2,2]) → 3.386... - each input exp'd, all dimensions reduced, sum log'd
reduceLogSumExp([2,2,2,2], {axes: []}) → [2,2,2,2] - each input exp'd, no dimensions reduced, each value log'd
reduceLogSumExp([2]) → 2 - each input exp'd, all dimensions reduced, sum log'd
reduceLogSumExp([2], {axes: []}) → [2] - each input exp'd, no dimensions reduced, each value log'd
reduceLogSumExp(2) → 2 - each input exp'd, all dimensions reduced (no-op), sum log'd
reduceLogSumExp(2, {axes: []}) → 2 - each input exp'd, no dimensions reduced, each value log'd

Great examples.

@fdwr fdwr closed this as completed in #741 Jul 30, 2024
@fdwr fdwr closed this as completed in 4697a0d Jul 30, 2024