[RFC] AtomicPerByte (aka "atomic memcpy") #3301
Conversation
cc @ojeda
This could mention the …
With some way for the language to express "this type is valid for any bit pattern", which project safe transmute presumably will provide (and that exists in the ecosystem as …). This would also require removing the safe … That's extra complexity, but it means that with some help from the ecosystem/future stdlib work, this can be used in 100% safe code, if the data is fine with being torn.
The "uninit" part of not without the fabled and legendary Freeze Intrinsic anyway. |
On the other hand, …
Note that LLVM already implements this operation: …
The trouble with that intrinsic is that …
- In order for this to be efficient, we need an additional intrinsic hooking into
  special support in LLVM. (Which LLVM needs to have anyway for C++.)
How do you plan to implement this until LLVM implements this?
I don't think it is necessary to explain the implementation details in the RFC, but if we provide an unsound implementation until the as-yet-unmerged C++ proposal is implemented in LLVM at some point in the future, that seems to be a problem.
(Also, if the language provides the functionality necessary to implement this soundly in Rust, the ecosystem can implement this soundly as well without inline assembly.)
I haven't yet looked into the details of what's possible today with LLVM. There are a few possible outcomes:
- We wait until LLVM supports this. (Or contribute it to LLVM.) This feature is delayed until some point in the future when we can rely on an LLVM version that includes it.
- Until LLVM supports it, we use a theoretically unsound but known-to-work-today hack like `ptr::{read_volatile, write_volatile}` combined with a fence. In the standard library we can more easily rely on implementation details of today's compiler.
- We use the existing `llvm.memcpy.element.unordered.atomic`, after figuring out the consequences of the `unordered` property.
- Until LLVM support appears, we implement it in the library using a loop of `AtomicUsize::load()`/`store()`s and a fence, possibly using an efficient inline assembly alternative for some popular architectures.
I'm not fully sure yet which of these are feasible.
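For concreteness, here is a minimal sketch of what the last option could look like for the load side (this is an assumption about the approach, not the RFC's implementation; alignment, tail bytes, padding and uninitialized data are all ignored to keep the sketch short):

```rust
use std::sync::atomic::{fence, AtomicUsize, Ordering::{Acquire, Relaxed}};

// Assumed fallback shape: an acquire "atomic memcpy" load done as relaxed
// word-sized atomic loads, followed by an acquire fence.
unsafe fn atomic_load_words_acquire(src: *const AtomicUsize, dst: *mut usize, words: usize) {
    for i in 0..words {
        let word = (*src.add(i)).load(Relaxed);
        dst.add(i).write(word);
    }
    fence(Acquire);
}
```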
I'm very familiar with the standard Rust and C++ memory orderings, but I don't know much about LLVM's. (It seems …
but it's easy to accidentally cause undefined behavior by using `load`
to make an extra copy of data that shouldn't be copied.

- Naming: `AtomicPerByte`? `TearableAtomic`? `NoDataRace`? `NotQuiteAtomic`?
Given these options and considering what the C++ paper chose, `AtomicPerByte` sounds OK and has the advantage of having `Atomic` as a prefix.
`AtomicPerByteMaybeUninit` or `AtomicPerByteManuallyDrop` to also resolve the other concern around dropping? Those are terrible names though...
Unordered is not monotonic (as in, it has no total order across all accesses), so LLVM is free to reorder loads/stores in ways it would not be allowed to with Relaxed (it behaves a lot more like a non-atomic variable in this sense). In practical terms, in single-threaded scenarios it behaves as expected, but when you load an atomic variable with unordered where the previous writer was another thread, you basically have to be prepared for it to hand you back any value previously written by that thread, due to the reordering allowed.

Concretely, I don't know how we'd implement relaxed ordering by fencing without that fence having a cost on weakly ordered machines (e.g. without implementing it as an overly strong acquire/release fence). That said, I think we could add an intrinsic to LLVM that does what we want here; I just don't think it already exists.

(FWIW, another part of the issue is that this stuff is not that well specified, but it's likely described by the "plain" accesses explained in https://www.cs.tau.ac.il/~orilahav/papers/popl17.pdf)
CC @RalfJung, who has stronger opinions on Unordered (and is the one who provided that link in the past). I think we can easily implement this with relaxed in compiler-builtins, but it should get a new intrinsic, since many platforms can implement it more efficiently.
We already have unordered atomic memcpy intrinsics in compiler-builtins, for 1, 2, 4 and 8 byte access sizes.
I'm not sure we'd want unordered, as mentioned above...
To clarify the difference between relaxed and unordered (in terms of loads and stores): if you have

```rust
static ATOM: AtomicU8 = AtomicU8::new(0);
const O: Ordering = ???;

fn thread1() {
    ATOM.store(1, O);
    ATOM.store(2, O);
}

fn thread2() {
    let a = ATOM.load(O);
    let b = ATOM.load(O);
    assert!(a <= b);
}
```
In other words, for unordered, it would be legal for 2 to be stored before 1, or for …
Something that could work, but not be technically correct, is: … Those fences are no-ops at runtime, but prevent the compiler from reordering the unordered atomics -- assuming you're on any modern CPU (except Alpha, IIRC) it will behave like relaxed atomics, because that's what standard load/store instructions do.
Those fences aren't always no-ops at runtime, they actually emit code on several platforms (rust-lang/rust#62256). It's also unclear what can and can't be reordered across compiler fences (rust-lang/unsafe-code-guidelines#347), certainly plain stores can in some cases (this is easy to show happening in godbolt). Either way, my point has not been that we can't implement this. We absolutely can and it's probably even straightforward. My point is just that I don't really think those existing intrinsics help us do that.
I like …
loop {
    let s1 = self.seq.load(Acquire);
    let data = read_data(&self.data, Acquire);
    let s2 = self.seq.load(Relaxed);
There's something very subtle here that I had not appreciated until a few weeks ago: we have to ensure that the `load` here cannot return an outdated value that would prevent us from noticing a seqnum bump. The reason this is the case is that if there is a concurrent `write`, and if any part of `data` reads from that write, then we have a release-acquire pair, so then we are guaranteed to see at least the first `fetch_add` from `write`, and thus we will definitely see a version conflict. OTOH if the `s1` reads-from some second `fetch_add` in `write`, then that forms a release-acquire pair, and we will definitely see the full data.

So, all the release/acquire are necessary here. (I know this is not a seqlock tutorial, and @m-ou-se is certainly aware of this, but it still seemed worth pointing out -- many people reading this will not be aware of this.)
(This is related to this comment by @cbeuw.)
Yeah exactly. This is why people are sometimes asking for a "release-load" operation. This second load operation needs to happen "after" the `read_data()` part, but the usual (incorrect) `read_data` implementation doesn't involve atomic operations or a memory ordering, so they attempt to solve this issue with a memory ordering on that final load, which isn't possible. The right solution is a memory ordering on the `read_data()` operation.
Under a reordering-based atomic model (as CPUs use), a release load makes sense and works. Release loads don't really work unless they are also RMWs (`fetch_add(0)`) under the C11 model.
Yeah, the famous seqlock paper discusses "read dont-modify write" operations.
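To spell out the whole pattern being discussed, here is a sketch of the seqlock in terms of the proposed type. The `AtomicPerByte` API used below is an assumption, not the RFC's exact interface (for example, `load` here returns the value directly for `Copy` types, whereas the real API may return `MaybeUninit<T>` or similar), and it assumes a single writer:

```rust
use std::sync::atomic::{AtomicUsize, Ordering::{Acquire, Relaxed, Release}};

// Sketch only: `AtomicPerByte<T>` is the proposed (not yet existing) type.
struct SeqLockish<T> {
    seq: AtomicUsize,
    data: AtomicPerByte<T>,
}

impl<T: Copy> SeqLockish<T> {
    fn write(&self, value: T) {
        self.seq.fetch_add(1, Relaxed);  // first increment: seq becomes odd
        self.data.store(value, Release); // per-byte atomic store of the payload
        self.seq.fetch_add(1, Release);  // second increment: publish, seq even again
    }

    fn read(&self) -> T {
        loop {
            let s1 = self.seq.load(Acquire);
            let data = self.data.load(Acquire); // may observe a torn value
            let s2 = self.seq.load(Relaxed);
            if s1 == s2 && s1 % 2 == 0 {
                // No write overlapped: by the release/acquire pairs discussed
                // above, `data` is a complete, untorn value.
                return data;
            }
            // Otherwise retry; the possibly-torn `data` is simply discarded.
        }
    }
}
```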
while the second one is basically a memory fence followed by a series of `AtomicU8::store`s.
Except the implementation can be much more efficient.
The implementation is allowed to load/store the bytes in any order,
and doesn't have to operate on individual bytes.
The "load/store bytes in any order" part is quite tricky, and I think means that the specification needs to be more complicated to allow for that.
I was originally thinking this would be specified as a series of `AtomicU8` load/store with the respective order, no fence involved. That would still allow merging adjacent writes (I think), but it would not allow reordering bytes. I wonder if we could get away with that, or if implementations actually need the ability to reorder.
For a memcpy (meaning the two regions are exclusive) you generally want to copy using increasing address order ("forward") on all hardware I've ever heard of. Even if a forward copy isn't faster (which it often is), it's still the same speed as a reverse copy.
I suspect the "any order is allowed" is just left in as wiggle room for potentially strange situations where somehow a reverse order copy would improve performance.
The "load/store bytes in any order" part is quite tricky, and I think means that the specification needs to be more complicated to allow for that.
A loop of relaxed load/store operations followed/preceded by an acquire/release fence already effectively allows for the relaxed operations to happen in any order, right?
> I was originally thinking this would be specified as a series of AtomicU8 load/store with the respective order, no fence involved.
In the C++ paper they are basically specified as:

```cpp
for (size_t i = 0; i < count; ++i) {
  reinterpret_cast<char*>(dest)[i] =
      atomic_ref<char>(reinterpret_cast<char*>(source)[i]).load(memory_order::relaxed);
}
atomic_thread_fence(order);
```

and

```cpp
atomic_thread_fence(order);
for (size_t i = 0; i < count; ++i) {
  atomic_ref<char>(reinterpret_cast<char*>(dest)[i]).store(
      reinterpret_cast<char*>(source)[i], memory_order::relaxed);
}
```
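In Rust terms, the store side of that formulation would look roughly like the sketch below (an assumption for illustration: the destination bytes are accessed as `AtomicU8`, and the ordering is hardcoded to Release; a real implementation would be free to use wider accesses):

```rust
use std::sync::atomic::{fence, AtomicU8, Ordering::{Relaxed, Release}};

// Release-ordered per-byte atomic store: a leading release fence, then
// relaxed per-byte stores, mirroring the C++ paper's formulation above.
unsafe fn atomic_store_bytes_release(dst: *const AtomicU8, src: *const u8, len: usize) {
    fence(Release);
    for i in 0..len {
        (*dst.add(i)).store(*src.add(i), Relaxed);
    }
}
```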
> A loop of relaxed load/store operations followed/preceded by an acquire/release fence already effectively allows for the relaxed operations to happen in any order, right?
Yes, relaxed loads/stores to different locations can be reordered, so specifying their order is moot under the as-if rule.
> In the C++ paper they are basically specified as:
Hm... but usually fences and accesses are far from equivalent. If we specify them like this, calling code can rely on the presence of these fences. For example, changing a 4-byte atomic acquire memcpy to an `AtomicU32` acquire load would not be correct (even if we know everything is initialized and aligned etc.).
Fences make all preceding/following relaxed accesses potentially induce synchronization, whereas release/acquire accesses only do that for that particular access.
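To illustrate that difference (my own example, not from the thread): under the fence-based specification, even an unrelated relaxed load before the acquire "memcpy" can end up synchronizing, which a plain acquire access on some other location would not guarantee:

```rust
use std::sync::atomic::{fence, AtomicBool, AtomicU32, Ordering::{Acquire, Relaxed, Release}};
use std::thread;

static DATA: AtomicU32 = AtomicU32::new(0);
static FLAG: AtomicBool = AtomicBool::new(false);

fn main() {
    let writer = thread::spawn(|| {
        DATA.store(42, Relaxed);
        FLAG.store(true, Release);
    });
    let reader = thread::spawn(|| {
        let flag = FLAG.load(Relaxed); // deliberately relaxed
        // Imagine an acquire atomic-memcpy here; the fence-based spec says
        // it ends with the equivalent of this fence.
        fence(Acquire);
        if flag {
            // Fence-atomic synchronization guarantees this read sees 42.
            // With an acquire *access* on another location instead of a
            // fence, there would be no such guarantee.
            assert_eq!(DATA.load(Relaxed), 42);
        }
    });
    writer.join().unwrap();
    reader.join().unwrap();
}
```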
Yeah, I don't think we should expose Unordered to users in any way until we are ready and willing to have our own concurrency memory model separate from that of C++ (or until C++ has something like unordered, and it's been shown to also make sense formally). There are some formal memory models with "plain" memory accesses, which are similar to unordered (no total mo order but race conditions allowed), but I have no idea if those are an accurate model of LLVM's unordered accesses. Both serve the same goal though, so there's a high chance they are at least related: both aim to model Java's regular memory accesses.
Well, I sure hope we're not using them in any way that actually becomes observable in program behavior, as that would be unsound.
This RFC has now been stuck for nearly a year on this, and this is only going to be unstable. Could we not select something, get it into nightly, and see how it works? Isn't that what nightly is for: figuring out and improving? It isn't like this API is going to be insta-stable. I don't think this should be a blocker at this point.
I don't think the API is the only open issue. AFAIK LLVM does not even support these operations yet. Also, I don't know if their interactions with the memory model have been fully worked out -- typically, mixed-size accesses are UB, so what if I do two racing "atomic memcpy"s of different size on the same memory? Or what if such an "atomic memcpy" races with an overlapping regular atomic access?
Isn't a freeze of [any ordering] per-byte LLVM load a valid compilation of this? Since all loads: …
Of course, it might not be the "most general" compilation, but I don't see why that matters for correctness. Also, the funny thing is that if there were an unstable way of having this behavior, then an asm block would magically be able to "mimic" it in the operational semantics and work, even if there is no way for a programmer to type it out. You could even say that "AtomicPerByte loads are valid Rust operational semantics with no syntax", and then asm blocks would be guaranteed to work (as far as I know, this requires no change to the current compiler, since all optimizations work well in the presence of AtomicPerByte loads within memory-reading "asm volatile" blocks [again, this is different from not being able to optimize an AtomicPerByte load in the same way you optimize a UB-on-racy-read load; that's a different question, and the compiler can't optimize a memory-reading asm volatile block as if it were a UB-on-racy-read load]).
However, the use cases for this are very limited, it would require a new trait to mark the types for which this is valid,
and it makes the API a lot more complicated or verbose to use.

Also, such an API for safely handling torn values can be built on top of the proposed API,
so we can leave that to a (niche) ecosystem crate.
I do not think this case is as obscure as it is made out to be. I have been wrapping homogeneous-size torn memory operations in the context of images (i.e. a library having the luxury of defining one normative underlying atomic unit, and then building on that for Pod types). The argument sounds okay with `SeqLock` as the most immediate use case, but there are others for a `MaybeTorn` itself. Also, I don't think the arguments against a trait and against `MaybeTorn<_>` should be conflated. To expand on these points:
There are several ways in which write/read races may be unproblematic for an algorithm, even without the reliable proof of data-race freedom (to the compiler, or in Rust's machine model) that would be necessary when implementing said algorithm without atomics. Some that come to mind, though it'd be nice to have them more concretely:
- In graphics applications one may choose not to care about a tear if it is, as determined by out-of-band means, somehow minor. The contents being copied are mostly simple components such as floats, integers, and blocks of bytes that might not even have padding.
- In parallelized numerical algorithms we may split some matrix into smaller subregions to be computed. Here tear-freedom could be guaranteed by scheduling operations in the correct sequence, while the underlying data is also simple numerical types. Importantly, with the use of `AtomicPerByte`, tears could become simple bugs in the scheduler, not soundness issues in the algorithms. These would definitely prefer simple access to the bytes, assuming they are correct.
- In networking, packets may arrive with checksums, and we could choose to filter them later in a processing pipeline based on that in-band signal rather than on a separately tracked SeqLock. Generally speaking, this algorithm would move around the `MaybeTorn` value itself before turning it into a `T` by validating that type's invariants. If these invariants aren't representational but only safety invariants, then I would prefer a safe way of doing so, i.e. exposed like a standard constructor, as in these situations no part of the memory model is violated in the first place.
Then note that for the ecosystem, e.g. bytemuck or zerocopy, to provide convenient methods for access to Pod data, they benefit greatly from a vocabulary type to talk about that representation. This can not be `MaybeUninit<T>`. (It is of course entirely sound to have a utility method `MaybeTorn<T>::into_uninit(Self) -> MaybeUninit<T>`.) The representational invariants are quite distinct. For instance, in this speculated-to-be-highly-relevant API they are not interchangeable:

```rust
// crate bytemuck;
unsafe impl<T: Pod> Pod for MaybeTorn<T> {}
```

Under the alternative, these crates must provide special methods for the `AtomicPerByte` receiver type, and these will duplicate all the different ways of loading and take their own `Ordering` argument. This duplication will be multiplied by the number of different store/load pairs, such as for slices, in light of an `unsafe` pointer-self method with safety requirements, or interactions with unaligned representations. This is all mental overhead of extension traits, and these wrappers needing to justify internally behaving exactly like the standard version by casting their source to `MaybeUninit<T>` is rather obscure in itself. This is also not really ergonomic.
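For reference, a minimal sketch of the kind of vocabulary type being argued for here (all names are hypothetical, not from the RFC; the unsafe constructor assumes the caller knows every bit pattern is a valid `T`, as a `bytemuck::Pod` bound would guarantee):

```rust
use core::mem::MaybeUninit;

// Hypothetical vocabulary type: same layout as `T`, but the value may be a
// torn mix of bytes from different writes.
#[repr(transparent)]
pub struct MaybeTorn<T>(MaybeUninit<T>);

impl<T: Copy> MaybeTorn<T> {
    pub fn new(value: T) -> Self {
        Self(MaybeUninit::new(value))
    }

    // Always sound: forget that the bytes came from a single untorn value.
    pub fn into_uninit(self) -> MaybeUninit<T> {
        self.0
    }

    // SAFETY (assumed contract): only callable when every initialized bit
    // pattern is a valid `T`, so a torn value is still a valid value,
    // merely possibly a meaningless one.
    pub unsafe fn assume_valid(self) -> T {
        self.0.assume_init()
    }
}
```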
@m-ou-se Could you expand on the interfaces you have tried that did not work out? I was thinking along the lines of using a sealed trait. (`Borrow` could be made to work, technically, I think, but it adds confusion.)

```rust
trait BytesOf<T> {
    /// Just for demonstration, we probably want another method operating on pointers here..
    fn borrow(&self) -> &MaybeTorn<T>;
}

impl<T> AtomicPerByte<T> {
    pub fn store(&self, _l: impl BytesOf<T>, _: Ordering);
}

impl<T> BytesOf<T> for T {
    fn borrow(&self) -> &MaybeTorn<T> {
        // SAFETY: repr(transparent), I think
        unsafe { transmute(self) }
    }
}

impl<T> BytesOf<T> for MaybeTorn<T> { … }

/// If you want to ignore `Copy`, and just get bytes, use the wrapping into
/// `MaybeTorn<T>` which does not have the bound.
impl<T: Copy> BytesOf<T> for &'_ T { … }
impl<T> BytesOf<T> for &'_ MaybeTorn<T> { … }

// Even after it was manually dropped, bytes are still valid.
impl<T> BytesOf<T> for ManuallyDrop<T> { … }
```
This can be called with all of `T`, `&T`, `MaybeTorn<T>`, and `ManuallyDrop<T>`. It doesn't immediately address whether these are confusing. Instead, one could gate the impl on `T` for `T: Copy`, since it can always be made an explicit `ManuallyDrop` when that is intended.
In the context I am using these in, interactions with slices of data are a big reason for using these APIs. This makes sense from the motivation: if this should build an analogue of `memcpy`, then the array operations should be considered. From that experience, I expect other interface extensions to be interesting:

- `copy(this: &AtomicPerByte<T>, into: &AtomicPerByte<T>, _: Ordering)`
- the slice equivalent of this.
- possibly: `store_cell(&self, into: &Cell<T>, _: Ordering)`; note that `&Cell<MaybeTorn<T>>` is the equivalent output type for loads and slices. This may be punted to the ecosystem, albeit with the question of how. Casting to `&mut MaybeUninit<T>` is at least incorrect for the load variants.
As the lifted analogue of the pure memory operation, this is more performant than the manual loop version around an intermediate buffer. (On a tangential note, I've missed that on `Cell<impl Copy>` as well, but the optimizer is mostly more reliable there.)
No, these operations have an … Also, we need stores as well, not just loads.
SeqLock vs. "loads from buggy threads"

It certainly seems that a bunch of people (including me) read the description and think that this is the API intended to be used for the case of loading from a potentially-buggy thread (let's leave aside malicious threads that interact with the Rust abstract machine on the assembly level, and talk only about threads that are written in Rust but might have a "safe" race condition bug). This is not the API for that, and it probably needs to be written in the RFC. This is true in 2 directions: …
More about "loads from buggy threads" (should probably be in a different RFC)

For the "communicating with a potentially-buggy thread" case, regular LLVM loads ought to work fine according to what I believe is the LLVM semantics - you don't need any happens-before operations. Either the thread is buggy and you don't care which results you get, or the thread is well-functioning, and you will not have any races (though in that case, inline asm can work fine, since the "simulator" can decide "bugginess" based on the environment and choose whether to not load anything, return nondeterministic values for a buggy partner thread, or do a full Rust load for a non-buggy partner thread).

Back to SeqLocks

For SeqLocks, having writes and reads have the semantics of … But that is something you can already write in today's Rust, and AFAICT on the architectures people care about it is not more defined than an inline asm memcpy + the ordering-correct memory barrier (which generates the assembly that has the performance characteristics people want), which means that inline asm barrier-including … An "unspecified sequence of …
An aside: compiler barriers

Doing a "poison + freeze" LLVM memcpy + a compiler barrier + a memory barrier "appears to work" for seqlocks. I don't see a strong enough reason to model it as that as opposed to a bunch of … It might be interesting from a theoretical point of view to have a justification for "why does this work", but that requires quite a big model of the memory model.