Naive register <-> tmem load/store support (#3786)
Extracted from #3755 to make code review easier. This PR adds a new unit test, `TMemTest.GmemRegTMemRegGmemCopy`, that schedules a copy kernel gmem -> register -> tmem -> register -> gmem, and updates the system with the minimal changes required to make this test pass. The purpose of this PR is not to provide a good implementation of TMem support, but only the absolute minimum we need to get started. Limitations:

1. The index is hard-coded to zero, so this PR does not touch the interesting topic of "how do we schedule a TMem tensor?"
2. TMem is used without allocation. Using memory that is not allocated is clearly the wrong way to program, but as described in the code comments, if a fusion has only one TMem TensorView, it is guaranteed to work.

Generated code:

```CUDA
__global__ void nvfuser_none_f0_c0_r0_g0(Tensor<float, 1, 1> T0, Tensor<float, 1, 1> T4) {
  nvfuser_index_t i0;
  i0 = ((nvfuser_index_t)threadIdx.x) + (32 * ((nvfuser_index_t)blockIdx.x));
  bool b1;
  b1 = i0 < T0.logical_size[0LL];
  Array<float, 1, 1> T1;
  T1[0] = 0;
  if (b1) {
    T1[0] = T0[((T0.alloc_stride[0LL] * ((nvfuser_index_t)threadIdx.x)) + ((32 * T0.alloc_stride[0LL]) * ((nvfuser_index_t)blockIdx.x)))];
  }
  asm volatile(
      "tcgen05.st.sync.aligned.32x32b.x1.b32 [%0], {%1};\n"
      :
      : "r"(0U),
        "f"((*reinterpret_cast<Array<float, 1, 1>*>(&T1[0]))[0]));
  asm volatile("tcgen05.wait::st.sync.aligned;\n");
  Array<float, 1, 1> T3;
  asm(
      "tcgen05.ld.sync.aligned.32x32b.x1.b32 {%0}, [%1];\n"
      : "=f"((*reinterpret_cast<Array<float, 1, 1>*>(&T3[0]))[0])
      : "r"(0U));
  asm volatile("tcgen05.wait::ld.sync.aligned;\n");
  if (b1) {
    T4[i0] = T3[0];
  }
}
```
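For context, a fusion exercising this copy path can be built with nvFuser's C++ fusion-building API roughly as follows. This is an illustrative sketch, not the test added by this PR: `makeContigTensor` and `set` are nvFuser's usual test/IR helpers, but the scheduling shown is only an assumption chosen to match the 32-threads-per-block kernel above.

```cpp
// Sketch: gmem -> register -> tmem -> register -> gmem copy fusion.
// Assumes nvFuser's usual C++ API; the actual test may differ in details.
Fusion fusion;
FusionGuard fg(&fusion);

TensorView* tv0 = makeContigTensor(1); // 1D input in global memory
fusion.addInput(tv0);
TensorView* tv1 = set(tv0); // gmem -> register
TensorView* tv2 = set(tv1); // register -> tmem
TensorView* tv3 = set(tv2); // tmem -> register
TensorView* tv4 = set(tv3); // register -> gmem
fusion.addOutput(tv4);

// Place the middle copy on tensor memory.
tv2->setMemoryType(MemoryType::Tensor);

// Parallelize to match the generated kernel: 32 threads per block.
for (auto* tv : {tv1, tv2, tv3, tv4}) {
  tv->split(0, 32);
  tv->axis(0)->parallelize(ParallelType::BIDx);
  tv->axis(1)->parallelize(ParallelType::TIDx);
}
```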
Showing 14 changed files with 258 additions and 18 deletions.
New file `device_lower/analysis/tensor_memory.cpp` (+26 lines):

```cpp
// clang-format off
/*
 * SPDX-FileCopyrightText: Copyright (c) 2023-present NVIDIA CORPORATION & AFFILIATES.
 * All rights reserved.
 * SPDX-License-Identifier: BSD-3-Clause
 */
// clang-format on

#include <device_lower/analysis/tensor_memory.h>
#include <fusion.h>
#include <ir/all_nodes.h>

namespace nvfuser {

TensorMemoryInfo computeTMemInfo(Fusion* fusion) {
  bool found = false;
  for (auto tv : fusion->allTvs()) {
    if (tv->getMemoryType() == MemoryType::Tensor) {
      NVF_ERROR(!found, "Only one tensor on TMem is supported");
      found = true;
    }
  }
  return {};
}

} // namespace nvfuser
```
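As a usage note for the guard above: lowering rejects any fusion that places more than one TensorView on TMem. A hypothetical fusion that would trip the check (helper names as in the sketch earlier):

```cpp
// Hypothetical: two TMem TensorViews trip the NVF_ERROR in computeTMemInfo.
Fusion fusion;
FusionGuard fg(&fusion);
TensorView* tv0 = makeContigTensor(1);
fusion.addInput(tv0);
TensorView* tv1 = set(tv0);
TensorView* tv2 = set(tv1);
fusion.addOutput(set(tv2));
tv1->setMemoryType(MemoryType::Tensor);
tv2->setMemoryType(MemoryType::Tensor); // second tensor on TMem
// Lowering this fusion fails with "Only one tensor on TMem is supported".
```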
New file `device_lower/analysis/tensor_memory.h` (+65 lines):

```cpp
// clang-format off
/*
 * SPDX-FileCopyrightText: Copyright (c) 2023-present NVIDIA CORPORATION & AFFILIATES.
 * All rights reserved.
 * SPDX-License-Identifier: BSD-3-Clause
 */
// clang-format on
#pragma once

namespace nvfuser {

class Fusion;

// Information used to lower tensor memory. So far, no information is needed;
// computeTMemInfo just checks that there is at most one tensor on TMem in the
// fusion. This limitation is described in the note below and exists only to
// allow incremental development. It will be removed in the near future.
struct TensorMemoryInfo;
TensorMemoryInfo computeTMemInfo(Fusion* fusion);

// Note: [Tensor Memory Allocation]
//
// Tensor memory is a very special kind of memory, so its allocation is also
// very different from that of other memory types.
//
// It is highly recommended to read the PTX documentation for tensor memory
// if you are not already familiar with it:
// https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#tensor-memory
//
// The first thing to note is that TMem is not virtualized. We can not just
// allocate starting from address 0, the way we do for shared memory, and rely
// on a page table to translate the same virtual address in different CTAs to
// different physical addresses. There are no virtual TMem addresses: all
// addresses are physical.
//
// Because multiple CTAs can execute on the same SM simultaneously, there must
// be a handshaking mechanism by which each CTA learns the region of TMem it
// may use. This is done with the PTX instruction tcgen05.alloc. To ensure
// safety, the hardware provides a mutex guarding the right to allocate TMem.
// At the beginning of each CTA, the CTA automatically tries to acquire the
// mutex; if it fails, the CTA blocks until the mutex is free. This means only
// one CTA can allocate TMem at a time. Once a CTA has finished allocating
// TMem, it should release the mutex to relinquish the right to allocate.
// After relinquishing that right, the CTA can no longer allocate new TMem,
// but it can still access, and free, the TMem it has already allocated. Once
// one CTA relinquishes the right to allocate, the next blocked CTA is
// unblocked and can acquire the mutex.
//
// Currently, TMem allocation is not supported in nvFuser. We only allow one
// TensorView to be on TMem, and because we never relinquish the right to
// allocate TMem, CTAs are serialized on each SM: a new CTA can be scheduled
// on an SM only after the previous CTA on that SM has completely finished
// executing. Thanks to this serialization, we can skip allocation altogether
// and treat our single TMem TensorView as owning the entire TMem, because we
// are sure no other CTA will be using that address. As a result, we can
// simply pass address 0 to the instructions that access TMem. In principle,
// writing to an unallocated address is clearly wrong, but because we are
// sure it works in practice for the specific unit test we are targeting, we
// do it to enable incremental development.

struct TensorMemoryInfo {};

} // namespace nvfuser
```
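The allocation handshake described in Note: [Tensor Memory Allocation] is not implemented in this commit, but for concreteness it would look roughly like the following inline-PTX sketch. This is an assumption-laden illustration: the helper names are hypothetical, and the `tcgen05.alloc` / `tcgen05.relinquish_alloc_permit` / `tcgen05.dealloc` instruction forms should be verified against the PTX ISA documentation linked above.

```CUDA
// Hypothetical sketch (not in this commit) of the tcgen05 allocation
// handshake described in the note above.
#include <cstdint>

__device__ void tmemAllocSketch(uint32_t* dst_smem, uint32_t ncols) {
  // Allocate ncols columns of TMem (ncols must be a power of two, >= 32).
  // The hardware writes the address of the allocated region to dst_smem,
  // which must be in shared memory. Acquiring the allocation mutex is
  // implicit: the CTA blocks here if another CTA holds it.
  asm volatile(
      "tcgen05.alloc.cta_group::1.sync.aligned.shared::cta.b32 [%0], %1;\n"
      :
      : "r"((uint32_t)__cvta_generic_to_shared(dst_smem)), "r"(ncols));
  // Give up the right to allocate so blocked CTAs can proceed. After this,
  // the CTA can still use and free what it has already allocated.
  asm volatile("tcgen05.relinquish_alloc_permit.cta_group::1.sync.aligned;\n");
}

__device__ void tmemDeallocSketch(uint32_t tmem_addr, uint32_t ncols) {
  // Free a previously allocated TMem region.
  asm volatile(
      "tcgen05.dealloc.cta_group::1.sync.aligned.b32 %0, %1;\n"
      :
      : "r"(tmem_addr), "r"(ncols));
}
```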