
Commit 9330ccd

context: add nostd version of global context
My initial plan for this commit was to implement the nostd version without randomization support and patch it in later. However, I realized that even without rerandomization, I still needed synchronization logic in order to initialize the global context object. (Upstream provides a static "no precomp" context object, but it has no precomputation tables and therefore can't be used for verification, which makes it unusable for our purposes.)

In order to implement initialization, with ChatGPT's help I implemented a simple spinlock. However, there are a number of problems with spinlocks -- see this article (from Kix in #346) for some of them: https://matklad.github.io/2020/01/02/spinlocks-considered-harmful.html

To avoid these problems, we tweak the spinlock logic so that we only try spinning a small finite number of times, then give up. Our "give up" logic is:

1. When initializing the global context, if we can't get the lock, we just initialize a new stack-local context and use that. (A parallel thread must be initializing the context, which is wasteful but harmless.)

2. Once we lock the context, we copy it onto the stack and unlock it, in order to minimize the time holding the lock. (The exception is during initialization, where we hold the lock for the whole initialization, in the hope that other threads will block on us instead of doing their own initialization.) If we rerandomize, we do this on the stack-local copy and then re-lock only to copy it back.

3. If we fail to get the lock when copying the rerandomized context back, we just don't copy it. The result is that we wasted some time rerandomizing without any benefit, which is not the end of the world.

Next steps are:

1. Update the API to use this logic everywhere; on validation functions we don't need to rerandomize, and on signing/keygen functions we should rerandomize using our secret key material.

2. Remove the existing "no context" API, along with the global-context and global-context-less-secure features.

3. Improve our entropy story on nostd by scraping system time or CPU jitter or something and hashing that into our rerandomization. We don't need to do a great job here -- if we can get even a bit or two per signature, that will completely BTFO a timing attacker.
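The bounded-spin idea described above can be sketched in isolation. This is a minimal illustration, not the crate's actual code: `BoundedSpinLock` is a hypothetical name, and the RAII guard that the real commit uses is omitted for brevity.

```rust
use std::hint::spin_loop;
use std::sync::atomic::{AtomicBool, Ordering};

/// Attempt budget before giving up (the value used in the commit).
const MAX_SPINLOCK_ATTEMPTS: usize = 128;

/// A lock that spins a bounded number of times and then gives up, so the
/// caller can fall back to a stack-local resource instead of risking a
/// deadlock (e.g. against an interrupt handler that takes the same lock).
struct BoundedSpinLock {
    flag: AtomicBool,
}

impl BoundedSpinLock {
    const fn new() -> Self {
        Self { flag: AtomicBool::new(false) }
    }

    /// Returns `true` if the lock was acquired within the attempt budget.
    fn try_lock(&self) -> bool {
        for _ in 0..MAX_SPINLOCK_ATTEMPTS {
            // `compare_exchange_weak` may fail spuriously, which is fine
            // because we retry in a loop anyway.
            if self
                .flag
                .compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed)
                .is_ok()
            {
                return true;
            }
            spin_loop();
        }
        false
    }

    fn unlock(&self) {
        self.flag.store(false, Ordering::Release);
    }
}

fn main() {
    let lock = BoundedSpinLock::new();
    assert!(lock.try_lock()); // uncontended: acquired on the first attempt
    assert!(!lock.try_lock()); // already held: gives up after 128 spins
    lock.unlock();
    assert!(lock.try_lock()); // available again after unlock
    println!("bounded spinlock gives up instead of blocking");
}
```

The key difference from a classic spinlock is the `false` return path: the caller is forced to handle contention explicitly, which is what enables the "initialize a stack-local context instead" fallback in the commit.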
1 parent dcba49c commit 9330ccd

File tree

2 files changed: +204 -8 lines changed

src/context.rs

Lines changed: 201 additions & 2 deletions
@@ -10,6 +10,205 @@ use crate::ffi::types::{c_uint, c_void, AlignedType};
 use crate::ffi::{self, CPtr};
 use crate::{Error, Secp256k1};
 
+#[cfg(not(feature = "std"))]
+mod internal {
+    use core::cell::UnsafeCell;
+    use core::hint::spin_loop;
+    use core::marker::PhantomData;
+    use core::mem::ManuallyDrop;
+    use core::ops::{Deref, DerefMut};
+    use core::ptr::NonNull;
+    use core::sync::atomic::{AtomicBool, Ordering};
+
+    use crate::ffi::types::{c_void, AlignedType};
+    use crate::{ffi, AllPreallocated, Context, Secp256k1};
+
+    const MAX_SPINLOCK_ATTEMPTS: usize = 128;
+    const MAX_PREALLOC_SIZE: usize = 16; // measured at 208 bytes on Andrew's 64-bit system
+
+    static SECP256K1: SpinLock = SpinLock::new();
+
+    // Simple spinlock-gated structure which holds the backing store for a
+    // secp256k1 context.
+    //
+    // To obtain exclusive access, call [`Self::try_lock`], which will spinlock
+    // for some small number of iterations before giving up. By trying again in
+    // a loop, you can emulate a "true" spinlock that will only yield once it
+    // has access. However, this would be very dangerous, especially in a nostd
+    // environment, because if we are pre-empted by an interrupt handler while
+    // the lock is held, and that interrupt handler attempts to take the lock,
+    // then we deadlock.
+    //
+    // Instead, the strategy we take within this module is to simply create a
+    // new stack-local context object if we are unable to obtain a lock on the
+    // global one. This is slow and loses the defense-in-depth "rerandomization"
+    // anti-sidechannel measure, but it is better than deadlocking.
+    struct SpinLock {
+        flag: AtomicBool,
+        // Invariant: if this is non-None, then the store is valid and can be
+        // used with `ffi::secp256k1_context_preallocated_create`.
+        data: UnsafeCell<([AlignedType; MAX_PREALLOC_SIZE], Option<NonNull<ffi::Context>>)>,
+    }
+
+    // Required by rustc if we have a static of this type.
+    // Safety: `data` is accessed only while the `flag` is held.
+    unsafe impl Sync for SpinLock {}
+    unsafe impl Send for SpinLock {}
+
+    impl SpinLock {
+        const fn new() -> Self {
+            Self {
+                flag: AtomicBool::new(false),
+                data: UnsafeCell::new(([AlignedType::ZERO; MAX_PREALLOC_SIZE], None)),
+            }
+        }
+
+        /// Tries to acquire the lock, spinning a bounded number of times; returns `None` on failure.
+        fn try_lock(&self) -> Option<SpinLockGuard<'_>> {
+            for _ in 0..MAX_SPINLOCK_ATTEMPTS {
+                // `compare_exchange_weak` is fine here: we're spinning anyway.
+                if self
+                    .flag
+                    .compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed)
+                    .is_ok()
+                {
+                    return Some(SpinLockGuard { lock: self });
+                }
+                spin_loop();
+            }
+            None
+        }
+
+        #[inline(always)]
+        fn unlock(&self) { self.flag.store(false, Ordering::Release); }
+    }
+
+    /// Drops the lock when it goes out of scope.
+    pub struct SpinLockGuard<'a> {
+        lock: &'a SpinLock,
+    }
+
+    impl Deref for SpinLockGuard<'_> {
+        type Target = ([AlignedType; MAX_PREALLOC_SIZE], Option<NonNull<ffi::Context>>);
+        fn deref(&self) -> &Self::Target {
+            // Safe: we hold the lock.
+            unsafe { &*self.lock.data.get() }
+        }
+    }
+
+    impl DerefMut for SpinLockGuard<'_> {
+        fn deref_mut(&mut self) -> &mut Self::Target {
+            // Safe: mutable access is unique while the guard lives.
+            unsafe { &mut *self.lock.data.get() }
+        }
+    }
+
+    impl Drop for SpinLockGuard<'_> {
+        fn drop(&mut self) { self.lock.unlock(); }
+    }
+
+    /// Borrows the global context and does some operation on it.
+    ///
+    /// If provided, after the operation is complete, [`rerandomize_global_context`]
+    /// is called on the context. If you have some random data available, consider passing it here.
+    pub fn with_global_context<T, Ctx: Context, F: FnOnce(&Secp256k1<Ctx>) -> T>(
+        f: F,
+        rerandomize_seed: Option<&[u8; 32]>,
+    ) -> T {
+        with_raw_global_context(
+            |ctx| {
+                let secp = ManuallyDrop::new(Secp256k1 { ctx, phantom: PhantomData });
+                f(&*secp)
+            },
+            rerandomize_seed,
+        )
+    }
+
+    /// Borrows the global context as a raw pointer and does some operation on it.
+    ///
+    /// If provided, after the operation is complete, [`rerandomize_global_context`]
+    /// is called on the context. If you have some random data available, consider passing it here.
+    pub fn with_raw_global_context<T, F: FnOnce(NonNull<ffi::Context>) -> T>(
+        f: F,
+        rerandomize_seed: Option<&[u8; 32]>,
+    ) -> T {
+        assert!(
+            unsafe {
+                ffi::secp256k1_context_preallocated_size(AllPreallocated::FLAGS)
+                    <= core::mem::size_of::<[AlignedType; MAX_PREALLOC_SIZE]>()
+            },
+            "prealloc size exceeds our guessed compile-time upper bound"
+        );
+
+        // Our function may be expensive, so before calling it, we copy the global
+        // context into this local buffer on the stack. Then we can release it,
+        // allowing other callers to use it simultaneously.
+        let mut store = [AlignedType::ZERO; MAX_PREALLOC_SIZE];
+        let buf = NonNull::new(store.as_mut_ptr() as *mut c_void).unwrap();
+
+        let ctx = match SECP256K1.try_lock() {
+            None => unsafe {
+                // If we can't get the lock, just do everything on the stack.
+                ffi::secp256k1_context_preallocated_create(buf, AllPreallocated::FLAGS)
+            },
+            Some(ref mut guard) => unsafe {
+                // If we *can* get the lock, use it and update it.
+                let (ref mut store, ref mut ctx) = **guard;
+                let global_ctx = ctx.get_or_insert_with(|| {
+                    let buf = NonNull::new(store.as_mut_ptr() as *mut c_void).unwrap();
+                    ffi::secp256k1_context_preallocated_create(buf, AllPreallocated::FLAGS)
+                });
+                ffi::secp256k1_context_preallocated_clone(global_ctx.as_ptr(), buf)
+            },
+        };
+        // The lock is now dropped. Call the function.
+        let ret = f(ctx);
+        // ...then rerandomize the local copy, and try to replace the global one
+        // with this. There are three cases for how this can work:
+        //
+        // 1. In the happy path, we succeeded in getting the lock above, have
+        //    a copy of the global context, are rerandomizing and storing it.
+        //    Great.
+        // 2. Same as above, except that another thread is doing the same thing
+        //    in parallel. Now we both have copies that we're rerandomizing, and
+        //    both will try to store it. One of us will clobber the other, wasting
+        //    work but otherwise not causing any problems.
+        // 3. If we -failed- to get the lock above, we are rerandomizing a fresh
+        //    copy of the context object. This may "undo" previous rerandomization.
+        //    In theory if an attacker is able to reliably and repeatedly trigger
+        //    this situation, they will have defeated the rerandomization. Since
+        //    this is a defense-in-depth measure, we will accept this.
+        if let Some(seed) = rerandomize_seed {
+            // Safety: this is a FFI call. It's fine.
+            unsafe {
+                assert_eq!(ffi::secp256k1_context_randomize(ctx, seed.as_ptr()), 1);
+            }
+            if let Some(ref mut guard) = SECP256K1.try_lock() {
+                let (ref mut global_store, ref mut global_ctx_ptr) = **guard;
+                unsafe {
+                    ffi::secp256k1_context_preallocated_clone(
+                        ctx.as_ptr(),
+                        NonNull::new(global_store.as_mut_ptr() as *mut _).unwrap(),
+                    );
+                }
+
+                // Update the pointer to refer to the *global* buffer, **not** the stack.
+                *global_ctx_ptr =
+                    Some(NonNull::new(global_store.as_mut_ptr() as *mut ffi::Context).unwrap());
+            }
+        }
+        ret
+    }
+
+    /// Rerandomize the global context, using the given data as a seed.
+    ///
+    /// The provided data will be mixed with the entropy from previous calls in a timing
+    /// analysis resistant way. It is safe to directly pass secret data to this function.
+    pub fn rerandomize_global_context(seed: &[u8; 32]) {
+        with_raw_global_context(|_| {}, Some(seed))
+    }
+}
 #[cfg(feature = "std")]
 mod internal {
     use std::cell::RefCell;
@@ -109,7 +308,6 @@ mod internal {
         });
     }
 }
-#[cfg(feature = "std")]
 pub use internal::{rerandomize_global_context, with_global_context, with_raw_global_context};
 
 #[cfg(all(feature = "global-context", feature = "std"))]
@@ -471,7 +669,8 @@ impl<'buf> Secp256k1<AllPreallocated<'buf>> {
     /// * The version of `libsecp256k1` used to create `raw_ctx` must be **exactly the one linked
     ///   into this library**.
     /// * The lifetime of the `raw_ctx` pointer must outlive `'buf`.
-    /// * `raw_ctx` must point to writable memory (cannot be `ffi::secp256k1_context_no_precomp`).
+    /// * `raw_ctx` must point to writable memory (cannot be `ffi::secp256k1_context_no_precomp`),
+    ///   **or** the user must never attempt to rerandomize the context.
     pub unsafe fn from_raw_all(
         raw_ctx: NonNull<ffi::Context>,
     ) -> ManuallyDrop<Secp256k1<AllPreallocated<'buf>>> {
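The copy-out/operate/copy-back pattern that `with_raw_global_context` implements can be sketched with a toy `u64` payload standing in for the secp256k1 context. This is an illustrative sketch only: a `std::sync::Mutex` replaces the bounded spinlock (its `try_lock` has the same "give up on contention" semantics), and `with_global_copy` is a hypothetical name, not part of the crate.

```rust
use std::sync::Mutex; // stand-in for the bounded spinlock

/// Toy stand-in for the global context: a counter we "rerandomize".
static GLOBAL: Mutex<u64> = Mutex::new(0);

/// Copy the shared state onto the stack under the lock, release the lock,
/// do the (possibly expensive) work on the local copy, then re-acquire the
/// lock briefly to store the updated copy back. If the second lock attempt
/// fails, the update is simply discarded: wasted work, but no blocking.
fn with_global_copy<T>(f: impl FnOnce(&mut u64) -> T) -> T {
    // 1. Copy out under the lock; fall back to a fresh value on contention.
    let mut local = match GLOBAL.try_lock() {
        Ok(guard) => *guard,
        Err(_) => 0, // a parallel user holds the lock: start from fresh state
    };
    // 2. Operate on the stack-local copy with no lock held.
    let ret = f(&mut local);
    // 3. Best-effort copy back; on contention the update is just dropped.
    if let Ok(mut guard) = GLOBAL.try_lock() {
        *guard = local;
    }
    ret
}

fn main() {
    let a = with_global_copy(|state| { *state += 1; *state });
    let b = with_global_copy(|state| { *state += 1; *state });
    // Uncontended, the update is copied back, so the second call sees it.
    assert_eq!(b, a + 1);
    println!("copy-out / copy-back pattern works");
}
```

The design trade-off is the same as in the commit: contention never blocks or deadlocks, at the cost of occasionally losing an update (case 2 in the code comments above) or operating on a fresh state (case 3).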

src/lib.rs

Lines changed: 3 additions & 6 deletions
@@ -184,16 +184,13 @@ pub use secp256k1_sys as ffi;
 #[cfg(feature = "serde")]
 pub use serde;
 
-#[cfg(feature = "std")]
 pub use crate::context::{
-    rerandomize_global_context, with_global_context, with_raw_global_context,
+    rerandomize_global_context, with_global_context, with_raw_global_context, AllPreallocated,
+    Context, PreallocatedContext, SignOnlyPreallocated, Signing, Verification,
+    VerifyOnlyPreallocated,
 };
 #[cfg(feature = "alloc")]
 pub use crate::context::{All, SignOnly, VerifyOnly};
-pub use crate::context::{
-    AllPreallocated, Context, PreallocatedContext, SignOnlyPreallocated, Signing, Verification,
-    VerifyOnlyPreallocated,
-};
 use crate::ffi::types::AlignedType;
 use crate::ffi::CPtr;
 pub use crate::key::{InvalidParityValue, Keypair, Parity, PublicKey, SecretKey, XOnlyPublicKey};
