Implements bsan_alloc and bsan_dealloc #16
base: bsan
Conversation
The pin to LLVM probably has to be removed before merging, because obraunsdorf/bsan_malloc is just for testing bsan_alloc using interceptors.
Just a couple of suggestions for changes.
@@ -11,34 +11,65 @@ use crate::*;
 /// of a singly-linked list. For this implementation to be sound,
 /// the pointer that is returned must not be mutated concurrently.
 pub unsafe trait Linkable<T: Sized> {
-    fn next(&self) -> *mut *mut T;
+    fn next(&mut self) -> *mut *mut T;
I think we should make the parameter something like `node: *mut T` instead of `&mut self`, in case there's a situation where multiple threads could call this method. I don't think that's the case at the moment, since this should only be called in the critical section when we lock the free list, but I'm uncertain about the soundness of that w.r.t. aliasing.
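The suggestion above could look something like the following. This is only a sketch under the reviewer's assumption; the `Node` type, its `next_ptr` field, and the `addr_of_mut!` projection are illustrative, not the crate's actual code. The point is that `next` receives a raw pointer, so no `&mut T` reference is ever materialized while the free list's lock is held:

```rust
use std::ptr;

// Hypothetical variant of the trait with a raw-pointer parameter.
pub unsafe trait Linkable<T: Sized> {
    /// Returns a pointer to this node's `next` field.
    /// Safety: `node` must be valid and non-null.
    unsafe fn next(node: *mut T) -> *mut *mut T;
}

// Illustrative node type, not part of the crate.
struct Node {
    next_ptr: *mut Node,
}

unsafe impl Linkable<Node> for Node {
    unsafe fn next(node: *mut Node) -> *mut *mut Node {
        // `addr_of_mut!` projects to the field without creating an
        // intermediate `&mut` reference, avoiding the aliasing concern.
        ptr::addr_of_mut!((*node).next_ptr)
    }
}

fn main() {
    let mut a = Node { next_ptr: ptr::null_mut() };
    let mut b = Node { next_ptr: ptr::null_mut() };
    unsafe {
        // Link `a` to `b` through the raw field pointer.
        let slot = <Node as Linkable<Node>>::next(&mut a as *mut Node);
        *slot = &mut b as *mut Node;
    }
    assert!(!a.next_ptr.is_null());
}
```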
@@ -30,8 +30,11 @@ use crate::*;
 #[derive(Debug)]
 pub struct GlobalCtx {
     hooks: BsanHooks,
+    // TODO(obraunsdorf): Does the counter need to be AtomicU64, because it would weaken security
It seems like 32-bit platforms don't have support for `AtomicU64`:

> PowerPC and MIPS platforms with 32-bit pointers do not have `AtomicU64` or `AtomicI64` types.
We decided not to support 32-bit platforms for now, so we should replace this with `AtomicU64`.
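A minimal sketch of what the `AtomicU64` counter could look like, assuming the counter hands out unique allocation IDs. The field and method names here are illustrative stand-ins, not the crate's actual `GlobalCtx` API:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical counter; real GlobalCtx has more fields (e.g. hooks).
struct GlobalCtx {
    next_alloc_id: AtomicU64,
}

impl GlobalCtx {
    fn new() -> Self {
        // In this sketch, 0 is reserved as the invalid allocation ID.
        GlobalCtx { next_alloc_id: AtomicU64::new(1) }
    }

    fn new_alloc_id(&self) -> u64 {
        // Relaxed suffices here: we only need each ID to be unique,
        // not any ordering with surrounding memory operations.
        self.next_alloc_id.fetch_add(1, Ordering::Relaxed)
    }
}

fn main() {
    let ctx = GlobalCtx::new();
    let a = ctx.new_alloc_id();
    let b = ctx.new_alloc_id();
    assert_ne!(a, b); // IDs are unique even under concurrent callers
}
```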
fn dealloc(&mut self, borrow_tag: TreeBorrows::BorrowTag) {
    self.alloc_id = AllocId::invalid();
    self.tree_address.deallocate(self.base_addr(), borrow_tag);
    // TODO(obraunsdorf) free the allocation of the tree itself
Here, you could have `dealloc` take `self` by value (assuming that you don't need to access `self` after this function).
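The idea being that consuming `self` lets the compiler rule out any use of the metadata after deallocation, so the "invalidate `alloc_id`" step becomes unnecessary. A sketch with an illustrative stand-in type (not the crate's actual struct):

```rust
// Hypothetical metadata type; the real struct holds a tree pointer etc.
struct AllocMetadata {
    alloc_id: u64,
}

impl AllocMetadata {
    // Taking `self` by value moves the metadata into `dealloc`;
    // the caller can no longer touch it afterwards, so there is
    // no need to reset `alloc_id` to an invalid sentinel.
    fn dealloc(self) -> u64 {
        self.alloc_id
    }
}

fn main() {
    let m = AllocMetadata { alloc_id: 7 };
    let id = m.dealloc();
    assert_eq!(id, 7);
    // m.alloc_id; // would not compile: `m` was moved into `dealloc`
}
```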
Any allocation that could be accessed concurrently can never be mutably borrowed, because this would lead to undefined behavior. For example, one thread takes a mutable borrow and is blocked, a second thread takes a mutable borrow and uses it to write, and then the first thread is unblocked and tries to mutate with its now-invalid reference.
We should have this function take `&self`, and then make both `alloc_id` and `tree_address` interior mutable. We can already do this for `alloc_id` by switching to `AtomicU64`. The `tree_address` should be put behind a mutex such that it is interior mutable. We want to do this without increasing the size of our allocation metadata struct. We have a couple of options:

- Place the mutex behind a raw pointer; e.g. `*mut Mutex<Tree>`.
- Add a single atomic flag (`AtomicU8`) to be used as a lock, similar to what's already happening within the bump allocator.

Additionally, we can't use `std::sync::Mutex`, so we'll need to create our own mutex primitive, or use atomics for this. Maybe the tree could have a built-in mutex, so that we only need `*mut Tree`.
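The `AtomicU8`-flag option could be sketched roughly like this. `RawSpinLock` is an illustrative name, not an existing type in the crate, and a real implementation would guard the tree data rather than stand alone:

```rust
use std::sync::atomic::{AtomicU8, Ordering};

// Hypothetical one-byte lock, usable without std::sync::Mutex.
struct RawSpinLock {
    flag: AtomicU8, // 0 = unlocked, 1 = locked
}

impl RawSpinLock {
    const fn new() -> Self {
        RawSpinLock { flag: AtomicU8::new(0) }
    }

    fn lock(&self) {
        // Spin until we win the CAS from 0 to 1. Acquire here pairs
        // with the Release in `unlock`, so writes made under the lock
        // are visible to the next thread that acquires it.
        while self
            .flag
            .compare_exchange_weak(0, 1, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            std::hint::spin_loop();
        }
    }

    fn unlock(&self) {
        self.flag.store(0, Ordering::Release);
    }
}

fn main() {
    let lock = RawSpinLock::new();
    lock.lock();
    // ... mutate tree state here, behind the lock ...
    lock.unlock();
    lock.lock(); // re-acquirable after unlock
    lock.unlock();
}
```

Since the flag is a single `AtomicU8`, it adds at most one byte (plus padding) to the metadata struct, which fits the goal of not growing it.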
Force-pushed from 220a58f to 81e1951.
Implements #2