The inlining logic for `MmaOp` with `AxisMapping` checks that unmapped dimensions are `Broadcast`. In this case, we are able to inline the mma operation that consumes these two tensors, but we check that the unmapped IDs 5, 6, 13, and 14 are `Broadcast` and that the operation is an `MmaOp`.

In the case of grid swizzling by a factor of 4, we do some further scheduling here: we mix the first two outer dimensions with the swizzle, so what used to be a simple split of a loop broadcast (`bS5`) is now an iteration ID `iS22{4}` resulting from the merge.

I am not sure yet how to address this. I don't think we can just inline here without some other changes, since when I disable this check I get errors in expression sorting.
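To make the failing condition concrete, here is a small illustrative Python sketch of the check. The names (`IterDomain`, `can_inline_past`) are hypothetical; the real logic lives in nvfuser's C++ inlining code.

```python
from dataclasses import dataclass

# Illustrative-only model of a loop iteration domain; not the nvfuser API.
@dataclass
class IterDomain:
    name: str
    is_broadcast: bool

def can_inline_past(unmapped_ids, consumer_is_mma):
    # Inlining past unmapped IDs is only allowed when the consumer is an
    # MmaOp and every unmapped ID is a Broadcast.
    return consumer_is_mma and all(i.is_broadcast for i in unmapped_ids)

# Without grid swizzling, the unmapped IDs are broadcasts and the check passes.
assert can_inline_past([IterDomain("bS5", True)], consumer_is_mma=True)

# With a factor-4 grid swizzle, bS5 is merged into the iteration ID iS22{4},
# so the check fails even though inlining might still be valid.
assert not can_inline_past([IterDomain("iS22{4}", False)], consumer_is_mma=True)
```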
This updates the default (non-plugin) matmul heuristic to support Hopper
matmuls. This change means that we can now run matmuls on Hopper
similarly to how we do it on Ampere and Turing, including using the
Python interface.
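For example, something like the following should now be schedulable by the default heuristic on Hopper. This is a minimal sketch assuming the nvfuser Python frontend's `FusionDefinition` API; exact signatures may vary between versions.

```python
import torch
from nvfuser import FusionDefinition

t0 = torch.randn(1024, 512, dtype=torch.half, device="cuda")
t1 = torch.randn(512, 2048, dtype=torch.half, device="cuda")

with FusionDefinition() as fd:
    a = fd.from_pytorch(t0)
    b = fd.from_pytorch(t1)
    out = fd.ops.matmul(a, b)
    fd.add_output(out)

# On Hopper this now goes through the default (non-plugin) matmul heuristic.
(result,) = fd.execute([t0, t1])
```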
I tried to make the default heuristic somewhat thoughtful and not just a
placeholder. Here are some notes about the Hopper heuristic in its
current form:
- I set the macro to Hopper_64_64_16. I intended to always use the
largest macro whose N size divides the problem's N, but this led
to lower perf on the handful of examples I looked at. We should
benchmark more and find out why this is once we have warp specialization
and register stealing fully plumbed in, but for the time being I simply
left it at N=64.
- Once the instruction tile is set, we set the warp tile equal to the
instruction tile (we can revisit this in the future). Then, to find the
CTA tile, we double the instruction tile in the M or N dimension until we
run out of registers (see the sketch after this list).
- We start with 8 circular buffering stages and decrease until the
circular buffers fit into smem.
- We use `use_smem_epilogue` when possible. Whenever that is possible we
_always_ use `promote_prologue_smem_reuse` even if it's not needed. This
is to try and avoid bugs like #3602.
- I set the tile rasterization order so that the fast axis is the axis
with the fewest tiles, which should encourage more L2 hits unless there
are tons of tiles in each dimension.
- I cannot yet set grid swizzling due to #3671, but I placed a TODO
comment and some code to do the proper swizzling.
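Putting the tile, stage, and rasterization choices together, here is a minimal Python sketch of the heuristic's shape. The register/smem budgets, the alternating M/N growth order, and all helper names are assumptions for illustration, not nvfuser's actual implementation.

```python
# Rough budgets for illustration only.
SMEM_BYTES = 228 * 1024   # approximate Hopper shared memory per CTA
REG_BYTES = 256 * 1024    # approximate register file per SM, used as a budget


def cta_tile(inst_m=64, inst_n=64):
    """Double the instruction tile in M or N until registers run out."""
    m, n = inst_m, inst_n
    grow_m = True
    # 4 bytes per float32 accumulator element.
    while ((m * 2) if grow_m else m) * (n if grow_m else (n * 2)) * 4 <= REG_BYTES:
        if grow_m:
            m *= 2
        else:
            n *= 2
        grow_m = not grow_m
    return m, n


def num_stages(cta_m, cta_n, cta_k=16, max_stages=8):
    """Start at 8 circular-buffer stages; decrease until operands fit in smem."""
    # A and B operand tiles in fp16 (2 bytes per element) per stage.
    per_stage = (cta_m * cta_k + cta_n * cta_k) * 2
    stages = max_stages
    while stages > 1 and stages * per_stage > SMEM_BYTES:
        stages -= 1
    return stages


def raster_fast_axis(M, N, cta_m, cta_n):
    """Make the axis with the fewest tiles the fast rasterization axis."""
    tiles_m = -(-M // cta_m)  # ceil division
    tiles_n = -(-N // cta_n)
    return "M" if tiles_m <= tiles_n else "N"


if __name__ == "__main__":
    m, n = cta_tile()
    print(m, n, num_stages(m, n), raster_fast_axis(4096, 4096, m, n))
```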
---------
Co-authored-by: Ryan Spring <[email protected]>