
[RFC] AtomicPerByte (aka "atomic memcpy") #3301

Open · wants to merge 9 commits into master

Conversation

@m-ou-se (Member) commented Aug 14, 2022

@m-ou-se added the T-libs-api label (Relevant to the library API team, which will review and decide on the RFC.) Aug 14, 2022
@bjorn3 (Member) commented Aug 14, 2022

cc @ojeda

@ibraheemdev (Member) commented Aug 14, 2022

This could mention the atomic-maybe-uninit crate in the alternatives section (cc @taiki-e).

@5225225 commented Aug 14, 2022

With some way for the language to express "this type is valid for any bit pattern" (which project safe transmute will presumably provide, and which already exists in the ecosystem as bytemuck, zerocopy, and probably others), I'm wondering if it would be better to return an AtomicPerByteRead<T>(MaybeUninit<T>), for which we/the ecosystem could provide a safe into_inner (returning a T) if T is valid for any bit pattern.

This would also require removing the safe uninit method. But you could presumably still do an AtomicPerByte<MaybeUninit<T>>, with no runtime cost, by passing MaybeUninit::uninit() to new.

That's extra complexity, but it means that, with some help from the ecosystem or future stdlib work, this can be used in 100% safe code if the data is fine with being torn.
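A minimal sketch of that suggestion, using bytemuck's AnyBitPattern as a stand-in for whatever trait project safe transmute ends up providing (the AtomicPerByteRead type and its method are hypothetical; see the next comment for a catch regarding uninitialized bytes):

```rust
use std::mem::MaybeUninit;

use bytemuck::AnyBitPattern; // ecosystem stand-in for a future std trait

// Hypothetical wrapper returned by AtomicPerByte<T>'s load.
pub struct AtomicPerByteRead<T>(MaybeUninit<T>);

impl<T: AnyBitPattern> AtomicPerByteRead<T> {
    /// Safe to expose if every initialized bit pattern is a valid `T`.
    pub fn into_inner(self) -> T {
        // SAFETY: `T: AnyBitPattern`, so any *initialized* bytes form a
        // valid `T`; uninitialized bytes are the remaining problem.
        unsafe { self.0.assume_init() }
    }
}
```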

@Lokathor (Contributor)
The "uninit" part of MaybeUninit is essentially not a bit pattern though. That's the problem. Even if a value is valid "for all bit patterns", you can't unwrap uninit memory into that type.

not without the fabled and legendary Freeze Intrinsic anyway.

@T-Dark0 commented Aug 14, 2022

On the other hand, AnyBitPatternOrPointerFragment isn't a type we have, nor really a type we strictly need for this. Assuming tearing can't deinitialize initialized memory, MaybeUninit would suffice, I think?

@programmerjake (Member)
Note that LLVM already implements this operation: the llvm.memcpy.element.unordered.atomic intrinsic, combined with an additional fence operation for acquire/release.

@comex commented Aug 15, 2022

The trouble with that intrinsic is that unordered is weaker than monotonic aka Relaxed, and it can't easily be upgraded. There's no "relaxed fence" if the ordering you want is Relaxed; and even if the ordering you want is Acquire or Release, combining unordered atomic accesses with fences doesn't produce quite the same result. Fences provide additional guarantees regarding other memory accessed before/after the atomic access, but they don't do anything to restore the missing "single total order" per address of the atomic accesses themselves.

Comment on lines +180 to +181:

> - In order for this to be efficient, we need an additional intrinsic hooking into
>   special support in LLVM. (Which LLVM needs to have anyway for C++.)

Member

How do you plan to implement this until LLVM supports it?

I don't think it is necessary to explain the implementation details in the RFC, but if we provide an unsound implementation until the as-yet-unmerged C++ proposal is implemented in LLVM at some future point, that seems to be a problem.

(Also, if the language provides the functionality necessary to implement this soundly in Rust, the ecosystem can implement this soundly as well without inline assembly.)

@m-ou-se (Member, Author)

I haven't looked into the details yet of what's possible today with LLVM. There's a few possible outcomes:

  • We wait until LLVM supports this. (Or contribute it to LLVM.) This feature is delayed until some point in the future when we can rely on an LLVM version that includes it.
  • Until LLVM supports it, we use a theoretically unsound but known-to-work-today hack like ptr::{read_volatile, write_volatile} combined with a fence. In the standard library we can more easily rely on implementation details of today's compiler.
  • We use the existing llvm.memcpy.element.unordered.atomic, after figuring out the consequences of the unordered property.
  • Until LLVM support appears, we implement it in the library as a loop of AtomicUsize::load()/store()s plus a fence, possibly with an efficient inline assembly alternative for some popular architectures. (A sketch of this option follows below.)

I'm not fully sure yet which of these are feasible.
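A minimal sketch of that last option, assuming a word-aligned, word-sized buffer (the function name is made up; real code would also have to handle a non-word-multiple head/tail):

```rust
use std::sync::atomic::{fence, AtomicUsize, Ordering};

// Hypothetical fallback: per-word relaxed loads followed by a fence.
fn load_words(src: &[AtomicUsize], dst: &mut [usize], ordering: Ordering) {
    for (s, d) in src.iter().zip(dst.iter_mut()) {
        // Each word is loaded atomically; tearing is only possible between
        // words, which per-byte (and per-word) semantics permit anyway.
        *d = s.load(Ordering::Relaxed);
    }
    if ordering == Ordering::Acquire {
        // Upgrade the relaxed loads: everything that happened-before the
        // matching release operation is now guaranteed visible.
        fence(Ordering::Acquire);
    }
}
```

The store side would be the mirror image: a release fence followed by relaxed stores.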

@m-ou-se (Member, Author) commented Aug 15, 2022

> The trouble with that intrinsic is that unordered is weaker than monotonic aka Relaxed, and it can't easily be upgraded. There's no "relaxed fence" if the ordering you want is Relaxed; and even if the ordering you want is Acquire or Release, combining unordered atomic accesses with fences doesn't produce quite the same result. Fences provide additional guarantees regarding other memory accessed before/after the atomic access, but they don't do anything to restore the missing "single total order" per address of the atomic accesses themselves.

I'm very familiar with the standard Rust and C++ memory orderings, but I don't know much about llvm's unordered ordering. Could you give an example of unexpected results we might get if we were to implement AtomicPerByte<T>::{read, write} using llvm's unordered primitive and a fence? Thanks!

(It seems monotonic behaves identically to unordered for loads and stores?)

> but it's easy to accidentally cause undefined behavior by using `load`
> to make an extra copy of data that shouldn't be copied.

> - Naming: `AtomicPerByte`? `TearableAtomic`? `NoDataRace`? `NotQuiteAtomic`?

Given these options and considering what the C++ paper chose, AtomicPerByte sounds OK and has the advantage of having Atomic as a prefix.

Member

AtomicPerByteMaybeUninit or AtomicPerByteManuallyDrop to also resolve the other concern around dropping? Those are terrible names though...

@ojeda commented Aug 15, 2022

> cc @ojeda

Thanks! Cc'ing @wedsonaf since he will like it :)

@thomcc (Member) commented Aug 15, 2022

Unordered is not monotonic (as in, it has no total order across all accesses), so LLVM is free to reorder loads/stores in ways it would not be allowed to with Relaxed (it behaves a lot more like a non-atomic variable in this sense).

In practical terms, in single-thread scenarios it behaves as expected, but when you load an atomic variable with unordered where the previous writer was another thread, you basically have to be prepared for it to hand you back any value previously written by that thread, due to the reordering allowed.

Concretely, I don't know how we'd implement relaxed ordering by fencing without having that fence have a cost on weakly ordered machines (e.g. without implementing it as an overly-strong acquire/release fence).

That said, I think we could add an intrinsic to LLVM that does what we want here. I just don't think it already exists.

(FWIW, another part of the issue is that this stuff is not that well specified, but it's likely described by the "plain" accesses explained in https://www.cs.tau.ac.il/~orilahav/papers/popl17.pdf)

@thomcc (Member) commented Aug 15, 2022

CC @RalfJung who has stronger opinions on Unordered (and is the one who provided that link in the past).

I think we can easily implement this with relaxed in compiler-builtins, but it should get a new intrinsic, since many platforms can implement it more efficiently.

@bjorn3 (Member) commented Aug 15, 2022

We already have unordered atomic memcpy intrinsics in compiler-builtins, for 1-, 2-, 4-, and 8-byte access sizes.

@thomcc (Member) commented Aug 15, 2022

I'm not sure we'd want unordered, as mentioned above...

@thomcc (Member) commented Aug 16, 2022

To clarify the difference between relaxed and unordered (in terms of loads and stores): if you have

```rust
use std::sync::atomic::{AtomicU8, Ordering};

static ATOM: AtomicU8 = AtomicU8::new(0);
const O: Ordering = ???; // Relaxed, or the hypothetical Unordered

fn thread1() {
    ATOM.store(1, O);
    ATOM.store(2, O);
}

fn thread2() {
    let a = ATOM.load(O);
    let b = ATOM.load(O);
    assert!(a <= b);
}
```

then thread2's assertion will never fail if O is Relaxed, but it could if O were (the hypothetical) Unordered.

In other words, for unordered, it would be legal for 2 to be stored before 1, or for b to be loaded before a. In terms of fences, there's no fence that "upgrades" unordered to relaxed, although I believe (but am not certain) that stronger fences do apply to it.

@programmerjake (Member)
Something that could work, though it is not technically correct, is:

  • compiler acquire fence
  • unordered atomic memcpy
  • compiler release fence

Those fences are no-ops at runtime, but they prevent the compiler from reordering the unordered atomics. Assuming you're on any modern CPU (except Alpha, iirc), it will behave like relaxed atomics, because that's what standard load/store instructions do. (A sketch of this recipe follows below.)
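A sketch of that recipe, with a made-up placeholder standing in for the unordered atomic memcpy intrinsic (which Rust does not expose today); note the caveats about these fences in the next comment:

```rust
use std::sync::atomic::{compiler_fence, Ordering};

// Placeholder for llvm.memcpy.element.unordered.atomic; NOT actually an
// unordered atomic copy, it is only here so the sketch is self-contained.
unsafe fn unordered_atomic_memcpy(dst: *mut u8, src: *const u8, len: usize) {
    std::ptr::copy_nonoverlapping(src, dst, len);
}

// The suggested recipe: compiler-only fences around the unordered copy,
// intended to stop the compiler from reordering the accesses while
// emitting no fence instructions at runtime.
unsafe fn fenced_unordered_copy(dst: *mut u8, src: *const u8, len: usize) {
    compiler_fence(Ordering::Acquire);
    unordered_atomic_memcpy(dst, src, len);
    compiler_fence(Ordering::Release);
}
```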

@thomcc (Member) commented Aug 16, 2022

Those fences aren't always no-ops at runtime; they actually emit code on several platforms (rust-lang/rust#62256). It's also unclear what can and can't be reordered across compiler fences (rust-lang/unsafe-code-guidelines#347); certainly plain stores can be in some cases (this is easy to show happening in godbolt).

Either way, my point has not been that we can't implement this. We absolutely can and it's probably even straightforward. My point is just that I don't really think those existing intrinsics help us do that.

@tschuett
I like MaybeAtomic, but following C++ with AtomicPerByte sounds reasonable.
The LLVM guys started something similar in 2016:
https://reviews.llvm.org/D27133

```rust
loop {
    let s1 = self.seq.load(Acquire);
    let data = read_data(&self.data, Acquire);
    let s2 = self.seq.load(Relaxed);
```
@RalfJung (Member) Aug 20, 2022

There's something very subtle here that I had not appreciated until a few weeks ago: we have to ensure that the load here cannot return an outdated value that would prevent us from noticing a seqnum bump.

The reason this works is that if there is a concurrent write, and any part of data reads from that write, then we have a release-acquire pair, so we are guaranteed to see at least the first fetch_add from write, and thus we will definitely see a version conflict. OTOH, if s1 reads from the second fetch_add in write, then that forms a release-acquire pair, and we will definitely see the full data.

So, all the release/acquire are necessary here. (I know this is not a seqlock tutorial, and @m-ou-se is certainly aware of this, but it still seemed worth pointing out; many people reading this will not be aware of it.)

(This is related to this comment by @cbeuw.)

@m-ou-se (Member, Author)

Yeah, exactly. This is why people sometimes ask for a "release-load" operation. The second load operation needs to happen "after" the read_data() part, but the usual (incorrect) read_data implementation doesn't involve atomic operations or a memory ordering, so people attempt to solve the issue with a memory ordering on that final load, which isn't possible. The right solution is a memory ordering on the read_data() operation.
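For readers following along, a sketch of the write side being discussed, mirroring the read loop quoted above (write_data is the hypothetical release counterpart of read_data, and the orderings shown are one consistent choice following the explanation above):

```rust
fn write(&self, value: Data) {
    self.seq.fetch_add(1, Relaxed);         // seq becomes odd: write in progress
    write_data(&self.data, value, Release); // release per-byte store; pairs with the Acquire read
    self.seq.fetch_add(1, Release);         // seq becomes even again: write complete
}
```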

@ibraheemdev (Member) Aug 23, 2022

Under a reordering-based atomic model (as CPUs use), a release load makes sense and works. Under the C11 model, release loads don't really work unless they are also RMWs (fetch_add(0)).

Member

Yeah, the famous seqlock paper discusses "read dont-modify write" operations.
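For illustration, the "read don't-modify write" trick replaces the final relaxed load of the sequence number with a release-ordered RMW that writes back an unchanged value, at the cost of a real store on the cache line (a sketch, not part of the proposed API):

```rust
// Instead of:
//     let s2 = self.seq.load(Relaxed);
// a release RMW acts as the missing "release load":
let s2 = self.seq.fetch_add(0, Release);
```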

> while the second one is basically a memory fence followed by a series of `AtomicU8::store`s.
> Except the implementation can be much more efficient.
> The implementation is allowed to load/store the bytes in any order,
> and doesn't have to operate on individual bytes.
Member

The "load/store bytes in any order" part is quite tricky, and I think means that the specification needs to be more complicated to allow for that.

I was originally thinking this would be specified as a series of AtomicU8 load/store with the respective order, no fence involved. That would still allow merging adjacent writes (I think), but it would not allow reordering bytes. I wonder if we could get away with that, or if implementations actually need the ability to reorder.

Contributor

For a memcpy (meaning the two regions don't overlap), you generally want to copy in increasing address order ("forward") on all hardware I've ever heard of. Even if a forward copy isn't faster (which it often is), it's still the same speed as a reverse copy.

I suspect the "any order is allowed" is just left in as wiggle room for potentially strange situations where somehow a reverse order copy would improve performance.

@m-ou-se (Member, Author)

The "load/store bytes in any order" part is quite tricky, and I think means that the specification needs to be more complicated to allow for that.

A loop of relaxed load/store operations followed/preceded by an acquire/release fence already effectively allows for the relaxed operations to happen in any order, right?

> I was originally thinking this would be specified as a series of AtomicU8 load/store with the respective order, no fence involved.

In the C++ paper they are basically specified as:

```cpp
for (size_t i = 0; i < count; ++i) {
  reinterpret_cast<char*>(dest)[i] =
      atomic_ref<char>(reinterpret_cast<char*>(source)[i]).load(memory_order::relaxed);
}
atomic_thread_fence(order);
```

and

```cpp
atomic_thread_fence(order);
for (size_t i = 0; i < count; ++i) {
  atomic_ref<char>(reinterpret_cast<char*>(dest)[i]).store(
      reinterpret_cast<char*>(source)[i], memory_order::relaxed);
}
```

Member

> A loop of relaxed load/store operations followed/preceded by an acquire/release fence already effectively allows for the relaxed operations to happen in any order, right?

Yes, relaxed loads/stores to different locations can be reordered, so specifying their order is moot under the as-if rule.

> In the C++ paper they are basically specified as:

Hm... but usually fences and accesses are far from equivalent. If we specify them like this, calling code can rely on the presence of these fences. For example, changing a 4-byte atomic acquire memcpy to an AtomicU32 acquire load would not be correct (even if we know everything is initialized and aligned, etc.).

Fences make all preceding/following relaxed accesses potentially induce synchronization, whereas release/acquire accesses only do that for that particular access.
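A hedged illustration of that point, written against today's APIs (AtomicU8 loads plus a fence, per the C++-paper-style specification): the acquire fence inside the memcpy also upgrades unrelated earlier relaxed loads, which a plain AtomicU32 acquire load would not do.

```rust
use std::sync::atomic::{fence, AtomicU32, AtomicU8, Ordering::{Acquire, Relaxed}};

// A 4-byte "acquire memcpy" as the fence-based specification describes it.
fn acquire_memcpy_4(src: &[AtomicU8; 4]) -> [u8; 4] {
    let mut out = [0u8; 4];
    for i in 0..4 {
        out[i] = src[i].load(Relaxed); // per-byte relaxed loads
    }
    fence(Acquire); // the fence the specification promises
    out
}

fn reader(flag: &AtomicU32, data: &[AtomicU8; 4]) -> (u32, [u8; 4]) {
    let f = flag.load(Relaxed);
    let v = acquire_memcpy_4(data);
    // The acquire fence also upgrades the earlier relaxed load of `flag`:
    // if it read from a release store, we now synchronize with that store.
    // Replacing acquire_memcpy_4 with a single AtomicU32 acquire load
    // would silently drop that guarantee.
    (f, v)
}
```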

@RalfJung (Member) commented Aug 20, 2022

> CC @RalfJung who has stronger opinions on Unordered (and is the one who provided that link in the past).

Yeah, I don't think we should expose Unordered to users in any way until we are ready and willing to have our own concurrency memory model separate from that of C++ (or until C++ has something like unordered and it's been shown to also make sense formally). There are some formal memory models with "plain" memory accesses, which are similar to unordered (no total modification order, but races allowed), but I have no idea whether those are an accurate model of LLVM's unordered accesses. Both serve the same goal, though, so there's a high chance they are at least related: both aim to model Java's regular memory accesses.

> We already have unordered atomic memcpy intrinsics in compiler-builtins, for 1-, 2-, 4-, and 8-byte access sizes.

Well I sure hope we're not using them in any way that actually becomes observable in program behavior, as that would be unsound.

[Several comments were marked as off-topic.]

@VorpalBlade

> The only thing left I'm still struggling with is the signature of the store method(s):
>
> ```rust
> pub fn store(&self, value: MaybeUninit<T>, ordering: Ordering);
> // or
> pub fn store(&self, value: &MaybeUninit<T>, ordering: Ordering);
> // or
> pub fn store(&self, value: T, ordering: Ordering);
> // or
> pub fn store(&self, value: T, ordering: Ordering) where T: Copy;
> // or
> pub fn store(&self, value: &T, ordering: Ordering);
> // or
> pub fn store(&self, value: &T, ordering: Ordering) where T: Copy;
> ```
>
> Or a combination of these (store and store_from).
>
> Taking by value fits the most basic use case, but consuming the value can be annoying if you need to attempt a store multiple times. However, taking by reference can get weird for non-Copy/needs-drop types. Wrapping it in a MaybeUninit makes the Drop situation clearer, but taking that by reference can be annoying if you have a &T and need a &MaybeUninit<T> here. :/

This RFC has now been stuck on this for nearly a year. And this is only going to be unstable. Could we not select something, get it into nightly, and see how it works? Isn't that what nightly is for: figuring things out and improving them? It isn't as if this API is going to be insta-stable.

I don't think this should be a blocker at this point.

@RalfJung (Member) commented May 7, 2025

I don't think the API is the only open issue. AFAIK LLVM does not even support these operations yet.

Also, I don't know whether their interactions with the memory model have been fully worked out. Typically, mixed-size accesses are UB, so what happens if I do two racing "atomic memcpy"s of different sizes on the same memory? Or if such an "atomic memcpy" races with an overlapping regular atomic access?

@arielb1 (Contributor) commented May 7, 2025

> AFAIK LLVM does not even support these operations yet

Isn't a freeze of a per-byte LLVM load (of any ordering) a valid compilation of this? Since all loads:

  1. return the right value when there is only a single relevant write, and
  2. return something no less defined than undef when there is more than one relevant write, which a freeze will turn into an arbitrary (but defined) value.

Of course, it might not be the "most general" compilation, but I don't see why that matters for correctness.

Also, the funny thing is that if there were an unstable way of having this behavior, then an asm block would magically be able to "mimic" it in the operational semantics and work, even if there is no way for a programmer to type it out. You could even say that "AtomicPerByte loads are valid Rust operational semantics with no syntax", and then asm blocks would be guaranteed to work. (As far as I know, this requires no change to the current compiler, since all optimizations work well in the presence of AtomicPerByte loads within memory-reading "asm volatile" blocks. This is different from being able to optimize an AtomicPerByte load the way you would optimize a UB-on-racy-read load; that's a separate question, and the compiler can't optimize a memory-reading asm volatile block as if it were a UB-on-racy-read load.)

Comment on lines +255 to +259:

> However, the use cases for this are very limited, it would require a new trait to mark the types for which this is valid, and it makes the API a lot more complicated or verbose to use.
>
> Also, such an API for safely handling torn values can be built on top of the proposed API, so we can leave that to a (niche) ecosystem crate.


I do not think this case is as obscure as it is made out to be. I have been wrapping homogeneous-size torn memory operations in the context of images (i.e. a library having the luxury of defining one normative underlying atomic unit, and then building on that for Pod types). The argument sounds okay with SeqLock as the most immediate use case, but there are other use cases for a MaybeTorn type itself. Also, I don't think the arguments against a trait and against MaybeTorn<_> should be conflated. To expand on these points:

There are several ways in which write/read races may be unproblematic for an algorithm even without the reliable proof of data-race freedom (to the compiler, or in Rust's machine model) that would be necessary to implement said algorithm without atomics. Some that come to mind, though it would be nice to have them more concretely:

  • In graphics applications, one may choose not to care about a tear if it is, as determined by out-of-band means, somehow minor. The contents being copied are mostly simple components such as floats, integers, or blocks of bytes that might not even have padding.
  • In parallelized numerical algorithms, we may split some matrix into smaller subregions to be computed. Here tear-freedom could be guaranteed by scheduling operations in the correct sequence while the underlying data consists of simple numerical types. Importantly, with the use of AtomicPerByte, tears become simple bugs in the scheduler, not soundness issues in the algorithm. These users would definitely prefer simple access to the bytes, assuming they are correct.
  • In networking, packets may arrive with checksums, and we could choose to filter them later in a processing pipeline based on that in-band signal rather than on a separately tracked SeqLock. Generally speaking, such an algorithm would move the MaybeTorn value itself around before turning it into a T by validating that type's invariants. If those invariants aren't representational but only safety invariants, then I would prefer a safe way of doing so, i.e. one exposed like a standard constructor, as in these situations no part of the memory model is violated in the first place.

Then note that for the ecosystem, e.g. bytemuck or zerocopy, to provide convenient methods for access to Pod data, they benefit greatly from a vocabulary type to talk about that representation. This cannot be MaybeUninit<T>. (It is of course entirely sound to have a utility method MaybeTorn<T>::into_uninit(Self) -> MaybeUninit<T>.) The representational invariants are quite distinct. For instance, in this speculated-to-be-highly-relevant API, they are not interchangeable:

```rust
// crate bytemuck;
unsafe impl<T: Pod> Pod for MaybeTorn<T> {}
```

Under the alternative, these crates must provide special methods for the AtomicPerByte receiver type, and these will duplicate all the different ways of loading, each taking its own Ordering argument. This duplication is multiplied by the number of different store/load pairs: for slices, for an unsafe pointer-self method with safety requirements, or for interactions with unaligned representations. That is a lot of mental overhead: extension traits, plus wrappers needing to justify that they internally behave exactly like the standard version, where casting their source to MaybeUninit<T> is rather obscure in itself. It is also not really ergonomic.


@m-ou-se Could you expand on the interfaces you have tried that did not work out? I was thinking along the lines of a sealed trait. (Borrow could be made to work, technically, I think, but it adds confusion.)

```rust
use std::mem::{transmute, ManuallyDrop};

trait BytesOf<T> {
    /// Just for demonstration; we probably want another method operating on pointers here.
    fn borrow(&self) -> &MaybeTorn<T>;
}

impl<T> AtomicPerByte<T> {
    pub fn store(&self, _l: impl BytesOf<T>, _: Ordering) { /* ... */ }
}

impl<T> BytesOf<T> for T {
    fn borrow(&self) -> &MaybeTorn<T> {
        // SAFETY: repr(transparent), I think
        unsafe { transmute(self) }
    }
}

impl<T> BytesOf<T> for MaybeTorn<T> {
    fn borrow(&self) -> &MaybeTorn<T> { self }
}

/// If you want to ignore `Copy` and just get bytes, use the wrapping into
/// `MaybeTorn<T>`, which does not have the bound.
impl<T: Copy> BytesOf<T> for &'_ T {
    // SAFETY: same repr(transparent) argument as above
    fn borrow(&self) -> &MaybeTorn<T> { unsafe { transmute(*self) } }
}

impl<T> BytesOf<T> for &'_ MaybeTorn<T> {
    fn borrow(&self) -> &MaybeTorn<T> { self }
}

// Even after it was manually dropped, the bytes are still valid.
impl<T> BytesOf<T> for ManuallyDrop<T> {
    // SAFETY: ManuallyDrop<T> has the same layout as T
    fn borrow(&self) -> &MaybeTorn<T> { unsafe { transmute(self) } }
}
```

This can be called with all of T, &T, MaybeTorn<T>, and ManuallyDrop<T>. It doesn't immediately address whether these are confusing. Instead, one could gate the impl on T with T: Copy, since it can always be made explicit via ManuallyDrop when that is intended.


In the context I am using these in, interactions with slices of data are a big reason for using these APIs. This makes sense given the motivation: if this is to be an analogue of memcpy, then the array operations should be considered.

From that experience, I expect other interface extensions to be interesting:

  • copy(this: &AtomicPerByte<T>, into: &AtomicPerByte<T>, _: Ordering)
  • the slice equivalent of this.
  • possibly: store_cell(&self, into: &Cell<T>, _: Ordering); note that &Cell<MaybeTorn<T>> is the equivalent output type for loads and slices. This may be punted to the ecosystem albeit with the question of how. Casting to &mut MaybeUninit<T> is at least incorrect for the load variants.

As the lifted analogue of the pure memory operation, this is more performant than a manual loop around an intermediate buffer. (On a tangential note, I've missed that on Cell<impl Copy> as well, but the optimizer is mostly more reliable there.)

@RalfJung (Member) commented May 8, 2025

> Isn't a freeze of a per-byte LLVM load (of any ordering) a valid compilation of this? Since all loads:

No: these operations have an Ordering, so they need to have the right acquire/release/SC semantics (and even relaxed accesses need to properly synchronize when there's a fence). This excludes using regular loads. (I assume you meant using regular loads, as otherwise the mention of freeze makes no sense to me.) As for using atomic loads, we'd still run into the mixed-size-access issues.

Also, we need stores as well, not just loads.

@arielb1 (Contributor) commented May 8, 2025

SeqLock vs. "loads from buggy threads"

It certainly seems that a bunch of people (including me) read the description and think that this API is intended to be used for loading from a potentially-buggy thread (let's leave aside malicious threads that interact with the Rust abstract machine at the assembly level, and talk only about threads that are written in Rust but might have a "safe" race condition bug).

This is not the API for that, and that probably needs to be written into the RFC. This is true in 2 directions:

  1. This API is "too weak", since it is not designed to guarantee correct behavior wrt. buggy threads; AFAICT it is only intended to have well-defined behavior if all racing writes are done via AtomicPerByte writes.
  2. This API is also "too strong". For SeqLocks, you want the loads to have at least Relaxed ordering so they can be "upgraded" via an Acquire barrier, since otherwise you run into the "no such thing as a release load" problem. In the "potentially buggy" case, observing a non-deterministic result tells you something about the input (that it provoked a bug) rather than about the thread ordering.

More about "loads from buggy threads" (should probably be in a different RFC)

For the "communicating with an potentially-buggy thread" case, regular LLVM loads ought to work fine according to what I believe is the LLVM semantics - you don't need any happens-before operations. Either the thread is buggy and you don't care which results you get, or the thread is well-functioning, and you will not have any races (tho in that case, inline asm can work fine, since the "simulator" can decide "buginess" based on the environment and choose whether to not load anything return nondeterministic values for a buggy partner thread or do a full Rust load for a non-buggy partner thread).

Back to SeqLocks

For SeqLocks, having writes and reads have the semantics of AtomicU8 writes and reads with an unspecified/nondeterministic ordering is definitely strong enough.

But that is something you can already write in today's Rust, and AFAICT on the architectures people care about it is not more defined than an inline-asm memcpy plus the ordering-correct memory barrier (which generates the assembly that has the performance characteristics people want). That means an inline-asm, barrier-including AtomicPerByte ought to be fine for seqlocks.

An "unspecified sequence of AtomicU8 operations" might be too strong than we want to specify - for example:

  1. It might be smarter to say that the separate reads/writes are unsequenced ("ad hoc concurrent") rather than having unspecified ordering. LLVM does not directly support ad hoc concurrency, but most memory models do and I won't be surprised if because of that LLVM optimizations support ad hoc concurrency as well.
  2. It implies some specific semantics for using AtomicPerByte along with AtomicU8 operations, which AFAICT do the specified thing when using memcpy on the architectures people care about, but we might want to prohibit.

An aside: compiler barriers

Doing a "poison + freeze" LLVM memcpy + a compiler barrier + a memory barrier "appears to work" for seqlocks. I don't see a strong enough reason to model it as that as opposed to a bunch of AtomicU8 loads.

It might be interesting from a theoretical point of view to have a justification for "why does this work", but that requires quite a big model of the memory model.
