[NVPTX] Add NVPTXIncreaseAlignmentPass to improve vectorization #144958
@@ -0,0 +1,157 @@
//===-- NVPTXIncreaseAlignment.cpp - Increase alignment for local arrays --===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// A simple pass that looks at local memory arrays that are statically
// sized and potentially increases their alignment. This enables vectorization
// of loads/stores to these arrays if not explicitly specified by the client.
//
// TODO: Ideally we should do a bin-packing of local arrays to maximize
// alignments while minimizing holes.
//
//===----------------------------------------------------------------------===//

#include "NVPTX.h"
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Pass.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/MathExtras.h"
#include "llvm/Support/NVPTXAddrSpace.h"

using namespace llvm;

static cl::opt<bool>
    MaxLocalArrayAlignment("nvptx-use-max-local-array-alignment",
                           cl::init(false), cl::Hidden,
                           cl::desc("Use maximum alignment for local memory"));
Review comment on lines +32 to +34:

Comment: We may as well allow the option to specify the exact alignment, instead of a boolean knob with a vague max value.

Reply: The way this knob works now, I don't think it can be expressed in terms of an exact alignment. This option controls whether we conservatively use the maximum "safe" alignment (an alignment that the aggregate size is a multiple of, to avoid introducing new holes), or the maximum "useful" alignment (as big an alignment as possible without going past the limits of what we can load/store in a single instruction).

Reply: OK. How about an upper limit on the alignment? E.g. the pass may want to align a float3 by 16, but the user wants to avoid/minimize the gaps and limit the alignment to 4 or 8.
static Align getMaxLocalArrayAlignment(const TargetTransformInfo &TTI) {
  const unsigned MaxBitWidth =
      TTI.getLoadStoreVecRegBitWidth(NVPTXAS::ADDRESS_SPACE_LOCAL);
  return Align(MaxBitWidth / 8);
}

namespace {
struct NVPTXIncreaseLocalAlignment {
  const Align MaxAlign;

  NVPTXIncreaseLocalAlignment(const TargetTransformInfo &TTI)
      : MaxAlign(getMaxLocalArrayAlignment(TTI)) {}

  bool run(Function &F);
  bool updateAllocaAlignment(AllocaInst *Alloca, const DataLayout &DL);
  Align getAggressiveArrayAlignment(unsigned ArraySize);
  Align getConservativeArrayAlignment(unsigned ArraySize);
};
} // namespace

/// Get the maximum useful alignment for an array. This is more likely to
/// produce holes in the local memory.
///
/// Choose an alignment large enough that the entire array could be loaded with
/// a single vector load (if possible). Cap the alignment at MaxAlign.
Align NVPTXIncreaseLocalAlignment::getAggressiveArrayAlignment(
    const unsigned ArraySize) {
  return std::min(MaxAlign, Align(PowerOf2Ceil(ArraySize)));
}

/// Get the alignment of arrays that reduces the chances of leaving holes when
/// arrays are allocated within a contiguous memory buffer (like shared memory
/// and stack). Holes are still possible before and after the array allocation.
///
/// Choose the largest alignment such that the array size is a multiple of the
/// alignment. If all elements of the buffer are allocated in order of
/// alignment (higher to lower) no holes will be left.
Align NVPTXIncreaseLocalAlignment::getConservativeArrayAlignment(
    const unsigned ArraySize) {
  return commonAlignment(MaxAlign, ArraySize);
}
/// Find a better alignment for local arrays
bool NVPTXIncreaseLocalAlignment::updateAllocaAlignment(AllocaInst *Alloca,
                                                        const DataLayout &DL) {
  // Looking for statically sized local arrays
  if (!Alloca->isStaticAlloca())
    return false;

  const auto ArraySize = Alloca->getAllocationSize(DL);
  if (!(ArraySize && ArraySize->isFixed()))
    return false;

  const auto ArraySizeValue = ArraySize->getFixedValue();
  const Align PreferredAlignment =
      MaxLocalArrayAlignment ? getAggressiveArrayAlignment(ArraySizeValue)
                             : getConservativeArrayAlignment(ArraySizeValue);

  if (PreferredAlignment > Alloca->getAlign()) {
    Alloca->setAlignment(PreferredAlignment);
    return true;
  }

  return false;
}

bool NVPTXIncreaseLocalAlignment::run(Function &F) {
  bool Changed = false;
  const auto &DL = F.getParent()->getDataLayout();

  BasicBlock &EntryBB = F.getEntryBlock();
  for (Instruction &I : EntryBB)
    if (AllocaInst *Alloca = dyn_cast<AllocaInst>(&I))
      Changed |= updateAllocaAlignment(Alloca, DL);

  return Changed;
}
namespace {
struct NVPTXIncreaseLocalAlignmentLegacyPass : public FunctionPass {
  static char ID;
  NVPTXIncreaseLocalAlignmentLegacyPass() : FunctionPass(ID) {}

  bool runOnFunction(Function &F) override;
  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.addRequired<TargetTransformInfoWrapperPass>();
  }
  StringRef getPassName() const override {
    return "NVPTX Increase Local Alignment";
  }
};
} // namespace

char NVPTXIncreaseLocalAlignmentLegacyPass::ID = 0;
INITIALIZE_PASS(NVPTXIncreaseLocalAlignmentLegacyPass,
                "nvptx-increase-local-alignment",
                "Increase alignment for statically sized alloca arrays", false,
                false)

FunctionPass *llvm::createNVPTXIncreaseLocalAlignmentPass() {
  return new NVPTXIncreaseLocalAlignmentLegacyPass();
}

bool NVPTXIncreaseLocalAlignmentLegacyPass::runOnFunction(Function &F) {
  const auto &TTI = getAnalysis<TargetTransformInfoWrapperPass>().getTTI(F);
  return NVPTXIncreaseLocalAlignment(TTI).run(F);
}

PreservedAnalyses
NVPTXIncreaseLocalAlignmentPass::run(Function &F,
                                     FunctionAnalysisManager &FAM) {
  const auto &TTI = FAM.getResult<TargetIRAnalysis>(F);
  bool Changed = NVPTXIncreaseLocalAlignment(TTI).run(F);

  if (!Changed)
    return PreservedAnalyses::all();

  PreservedAnalyses PA;
  PA.preserveSet<CFGAnalyses>();
  return PA;
}
@@ -0,0 +1,85 @@
; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 5
; RUN: opt -S -passes=nvptx-increase-local-alignment < %s | FileCheck %s --check-prefixes=COMMON,DEFAULT
; RUN: opt -S -passes=nvptx-increase-local-alignment -nvptx-use-max-local-array-alignment < %s | FileCheck %s --check-prefixes=COMMON,MAX
target triple = "nvptx64-nvidia-cuda"

define void @test1() {
; COMMON-LABEL: define void @test1() {
; COMMON-NEXT:    [[A:%.*]] = alloca i8, align 1
; COMMON-NEXT:    ret void
;
  %a = alloca i8, align 1
  ret void
}

define void @test2() {
; DEFAULT-LABEL: define void @test2() {
; DEFAULT-NEXT:    [[A:%.*]] = alloca [63 x i8], align 1
; DEFAULT-NEXT:    ret void
;
; MAX-LABEL: define void @test2() {
; MAX-NEXT:    [[A:%.*]] = alloca [63 x i8], align 16
; MAX-NEXT:    ret void
;
  %a = alloca [63 x i8], align 1
  ret void
}

define void @test3() {
; COMMON-LABEL: define void @test3() {
; COMMON-NEXT:    [[A:%.*]] = alloca [64 x i8], align 16
; COMMON-NEXT:    ret void
;
  %a = alloca [64 x i8], align 1
  ret void
}

define void @test4() {
; DEFAULT-LABEL: define void @test4() {
; DEFAULT-NEXT:    [[A:%.*]] = alloca i8, i32 63, align 1
; DEFAULT-NEXT:    ret void
;
; MAX-LABEL: define void @test4() {
; MAX-NEXT:    [[A:%.*]] = alloca i8, i32 63, align 16
; MAX-NEXT:    ret void
;
  %a = alloca i8, i32 63, align 1
  ret void
}

define void @test5() {
; COMMON-LABEL: define void @test5() {
; COMMON-NEXT:    [[A:%.*]] = alloca i8, i32 64, align 16
; COMMON-NEXT:    ret void
;
  %a = alloca i8, i32 64, align 1
  ret void
}

define void @test6() {
; COMMON-LABEL: define void @test6() {
; COMMON-NEXT:    [[A:%.*]] = alloca i8, align 32
; COMMON-NEXT:    ret void
;
  %a = alloca i8, align 32
  ret void
}

define void @test7() {
; COMMON-LABEL: define void @test7() {
; COMMON-NEXT:    [[A:%.*]] = alloca i32, align 4
; COMMON-NEXT:    ret void
;
  %a = alloca i32, align 2
  ret void
}

define void @test8() {
; COMMON-LABEL: define void @test8() {
; COMMON-NEXT:    [[A:%.*]] = alloca [2 x i32], align 8
; COMMON-NEXT:    ret void
;
  %a = alloca [2 x i32], align 2
  ret void
}
Comment: If someone finds themselves with so much local memory that the holes matter, proper alignment of locals will likely be lost in the noise of their actual performance issues. If we provide a knob allowing the user to explicitly set the alignment for locals, that would be a sufficient escape hatch to let them find an acceptable gaps-vs-alignment trade-off.

Reply: That's a good point. I agree that improving most programs is more important than handling these edge cases as well as possible. Do you think I should change the default behavior of the pass to the more aggressive alignment-improvement heuristic?

Reply: With changes that are likely to affect everyone, the typical approach is to introduce them as an optional feature (or enabled only for clear wins), then allow wider testing with more aggressive settings (I can help with that). If the changes are deemed to be relatively low risk, aggressive defaults plus an escape hatch to disable them also works. I think in this case we're probably OK with aligning aggressively. In fact, I think it will accidentally benefit cutlass (NVIDIA/cutlass#2003 (comment)), which does have code with a known UB: it uses local variables and then performs vector loads/stores on them, assuming they are always aligned. That works in optimized builds where the locals are optimized away, but fails in debug builds.