[InferAddressSpaces] Handle unconverted ptrmask #140802
The first hunk adds an include:

```cpp
@@ -16,6 +16,7 @@
 #include "llvm/Analysis/ScalarEvolutionExpressions.h"
 #include "llvm/Analysis/TargetTransformInfo.h"
 #include "llvm/Analysis/ValueTracking.h"
 #include "llvm/Analysis/VectorUtils.h"
 #include "llvm/IR/DataLayout.h"
 #include "llvm/IR/GetElementPtrTypeIterator.h"
```
The second hunk adds the new hooks to `TargetTransformInfoImplBase`:

```cpp
@@ -151,6 +152,52 @@ class TargetTransformInfoImplBase {
  }

  virtual bool isNoopAddrSpaceCast(unsigned, unsigned) const { return false; }

  virtual std::pair<KnownBits, KnownBits>
  computeKnownBitsAddrSpaceCast(unsigned ToAS, const Value &PtrOp) const {
    const Type *PtrTy = PtrOp.getType();
    assert(PtrTy->isPtrOrPtrVectorTy() &&
           "expected pointer or pointer vector type");
    unsigned FromAS = PtrTy->getPointerAddressSpace();

    if (DL.isNonIntegralAddressSpace(FromAS))
      return std::pair(KnownBits(DL.getPointerSizeInBits(FromAS)),
                       KnownBits(DL.getPointerSizeInBits(ToAS)));

    KnownBits FromPtrBits;
    if (const AddrSpaceCastInst *CastI = dyn_cast<AddrSpaceCastInst>(&PtrOp)) {
      std::pair<KnownBits, KnownBits> KB = computeKnownBitsAddrSpaceCast(
          CastI->getDestAddressSpace(), *CastI->getPointerOperand());
      FromPtrBits = KB.second;
    } else if (FromAS == 0 &&
               PatternMatch::match(&PtrOp, PatternMatch::m_Zero())) {
      // For addrspace 0, we know that a null pointer has the value 0.
      FromPtrBits = KnownBits::makeConstant(
          APInt::getZero(DL.getPointerSizeInBits(FromAS)));
    } else {
      FromPtrBits = computeKnownBits(&PtrOp, DL, nullptr);
    }

    KnownBits ToPtrBits =
        computeKnownBitsAddrSpaceCast(FromAS, ToAS, FromPtrBits);

    return std::pair(FromPtrBits, ToPtrBits);
  }

  virtual KnownBits
  computeKnownBitsAddrSpaceCast(unsigned FromAS, unsigned ToAS,
                                const KnownBits &FromPtrBits) const {
    unsigned ToASBitSize = DL.getPointerSizeInBits(ToAS);

    if (DL.isNonIntegralAddressSpace(FromAS))
      return KnownBits(ToASBitSize);

    // By default, we assume that all valid "larger" (e.g. 64-bit) to
    // "smaller" (e.g. 32-bit) casts work by chopping off the high bits.
    // By default, we do not assume that null results in null again.
    return FromPtrBits.anyextOrTrunc(ToASBitSize);
  }

  virtual bool
  canHaveNonUndefGlobalInitializerInAddressSpace(unsigned AS) const {
    return AS == 0;
  }
```

An inline review thread on the `FromAS == 0` check (marked resolved):

Reviewer: It would be great if we could avoid adding new checks for AS==0. I think this can easily be handled even without waiting for #131557, by adding something like a […]

Author: Thanks!
Reviewer: Why do these two functions have to be in TTI?

Author: Isn't that the right place? Targets can override the semantics that way if they like. And since the function is split into two parts, it's easier to override only the part you need.
Reviewer: I mean, yes and no. The "no" part is that I don't see any target-dependent part here that can't be handled with information from the data layout.

Author: The addrspacecast. For example, in the case where I cast a larger address to a smaller address, I just truncate. This might be valid for all practical cases (given an integral address space), but as far as I understand it's in no way guaranteed. Also, the anyext for smaller-to-larger casts is a conservative choice that could be improved with target knowledge, I guess.
Reviewer: Then I'd just leave the cast-between-two-address-spaces part in TTI, which makes sense, and leave no default implementation. Even for us, with GAS, AS5->AS0 is not a zero extension.

Author: I already replaced the zext with an anyext in a previous commit, so as far as I can see I don't assume anything specific about the casts. Except that larger-to-smaller casts truncate, but that may be too much to assume, either.
Author: ...at least in general. For AMDGPU, this assumption has already been in the original code, so I guess I can add an override for `computeKnownBitsAddrSpaceCast` in AMDGPUTargetTransformInfo.

Author: The best option would be to add a case for `Instruction::AddrSpaceCast` to `computeKnownBitsFromOperator` in ValueTracking.cpp. But since we cannot access TTI functions from there, I guess that's not an option. This means, however, that `computeKnownBitsAddrSpaceCast` should stay recursive. Otherwise, addrspacecasts as source values will never be handled correctly, and I don't think the user of `computeKnownBitsAddrSpaceCast` should be required to build a loop around it.
Author: Ping. If there is not really a good solution, I guess the best option would be to remove the target implementation of computeKnownBits for AddrSpaceCast that Matt suggested, so that we can at least fix the original bug our fuzzer detected.

Reviewer: There isn't a better solution now, without pulling a dependence on TTI into ValueTracking, which is a bigger change. Really, TTI doesn't feel like the right place for this. It belongs in some kind of other target-IR information that doesn't depend on codegen, but we do not have such a place today.