Implements bsan_alloc and bsan_dealloc #16


Open · wants to merge 6 commits into base branch bsan

Conversation

obraunsdorf

Implements #2

@obraunsdorf requested a review from @icmccorm on April 23, 2025 13:40
@obraunsdorf (Author)

The pin to LLVM probably has to be removed before merging, since obraunsdorf/bsan_malloc is just for testing bsan_alloc using interceptors.
It would be nice to have static instrumentation for malloc/free (see BorrowSanitizer/docs#8).

@icmccorm (Collaborator) left a comment:

Just a couple of suggestions for changes.

@@ -11,34 +11,65 @@ use crate::*;
 /// of a singly-linked list. For this implementation to be sound,
 /// the pointer that is returned must not be mutated concurrently.
 pub unsafe trait Linkable<T: Sized> {
-    fn next(&self) -> *mut *mut T;
+    fn next(&mut self) -> *mut *mut T;
@icmccorm (Collaborator)

I think we should make the parameter something like node: *mut T instead of &mut self, in case there's a situation where multiple threads could call this method. I don't think that's the case at the moment, since this should only be called in the critical section where we lock the free list, but I'm uncertain about the soundness of that w.r.t. aliasing.
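
As an illustration of that suggestion, here is a minimal sketch with a hypothetical Node type (not part of the PR); taking a raw pointer means no reference is ever created, so no uniqueness is asserted even when several threads hold pointers to the same node:

use core::ptr;

pub unsafe trait Linkable<T: Sized> {
    /// Returns the address of the `next` field of `node`.
    ///
    /// # Safety
    /// `node` must point to a live, properly aligned `T`.
    unsafe fn next(node: *mut T) -> *mut *mut T;
}

// Hypothetical free-list node used only for this sketch.
#[repr(C)]
struct Node {
    next: *mut Node,
}

unsafe impl Linkable<Node> for Node {
    unsafe fn next(node: *mut Node) -> *mut *mut Node {
        // Project to the field through the raw pointer; `addr_of_mut!`
        // never materializes a reference, so no aliasing claim is made.
        unsafe { ptr::addr_of_mut!((*node).next) }
    }
}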

@@ -30,8 +30,11 @@ use crate::*;
 #[derive(Debug)]
 pub struct GlobalCtx {
     hooks: BsanHooks,
+    // TODO(obraunsdorf): Does the counter need to be AtomicU64, because it would weaken security
@icmccorm (Collaborator)

It seems like 32-bit platforms don't have support for AtomicU64. From the standard library's portability notes [1]:

"PowerPC and MIPS platforms with 32-bit pointers do not have AtomicU64 or AtomicI64 types."

[1] https://doc.rust-lang.org/std/sync/atomic/#portability

@icmccorm (Collaborator)

We decided not to support 32-bit platforms for now, so we should replace this with AtomicU64.
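
A minimal sketch of what that switch could look like; the counter name below is illustrative, not the PR's actual field:

use core::sync::atomic::{AtomicU64, Ordering};

// Hypothetical name: a global, monotonically increasing ID counter that
// can be bumped from any thread without &mut access.
static NEXT_ALLOC_ID: AtomicU64 = AtomicU64::new(1);

fn fresh_alloc_id() -> u64 {
    // Relaxed ordering is enough for uniqueness: each fetch_add returns
    // a distinct value, and no other memory is synchronized by this counter.
    NEXT_ALLOC_ID.fetch_add(1, Ordering::Relaxed)
}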

fn dealloc(&mut self, borrow_tag: TreeBorrows::BorrowTag) {
    self.alloc_id = AllocId::invalid();
    self.tree_address.deallocate(self.base_addr(), borrow_tag);
    // TODO(obraunsdorf): free the allocation of the tree itself
@icmccorm (Collaborator) · Apr 23, 2025

Here, you could have dealloc take self (assuming that you don't need to access self after this function).
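
A quick sketch of that shape, reusing the names from the snippet above; moving self makes any later use of the deallocated metadata a compile error:

fn dealloc(mut self, borrow_tag: TreeBorrows::BorrowTag) {
    self.alloc_id = AllocId::invalid();
    self.tree_address.deallocate(self.base_addr(), borrow_tag);
    // `self` is moved into this call and dropped here, so the caller
    // can no longer touch the metadata after deallocation.
}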

@icmccorm (Collaborator) · May 22, 2025

Any allocation that could be accessed concurrently can never be mutably borrowed, because this would lead to undefined behavior. For example, one thread takes a mutable borrow and is blocked, a second thread takes a mutable borrow and uses it to write, and then the first thread is unblocked and tries to mutate through its now-invalid reference.

We should have this function take &self, and then make both alloc_id and tree_address interior-mutable. We can already do this for alloc_id by switching to AtomicU64. The tree_address should be put behind a mutex so that it is interior-mutable as well. We want to do this without increasing the size of our allocation metadata struct. We have a couple of options:

  • Place the mutex behind a raw pointer; e.g. *mut Mutex<Tree>
  • Add a single atomic flag (AtomicU8) to be used as a lock, similar to what's already happening within the bump allocator.

Additionally, we can't use std::sync::Mutex, so we'll need to create our own mutex primitive or use atomics for this. Maybe the tree could have a built-in mutex, so that we only need *mut Tree.
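
For illustration, here is a minimal no_std sketch of the AtomicU8-flag option; the SpinMutex name and its with method are hypothetical, not the crate's actual API:

use core::cell::UnsafeCell;
use core::sync::atomic::{AtomicU8, Ordering};

const UNLOCKED: u8 = 0;
const LOCKED: u8 = 1;

// A single atomic byte guards the data, similar to what's already
// happening within the bump allocator.
pub struct SpinMutex<T> {
    flag: AtomicU8,
    data: UnsafeCell<T>,
}

// SAFETY: the flag serializes all access to `data`.
unsafe impl<T: Send> Sync for SpinMutex<T> {}

impl<T> SpinMutex<T> {
    pub const fn new(data: T) -> Self {
        Self { flag: AtomicU8::new(UNLOCKED), data: UnsafeCell::new(data) }
    }

    pub fn with<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        // Spin until we atomically flip the flag from UNLOCKED to LOCKED.
        while self
            .flag
            .compare_exchange_weak(UNLOCKED, LOCKED, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            core::hint::spin_loop();
        }
        // SAFETY: we hold the lock, so no other thread can alias `data`.
        let result = f(unsafe { &mut *self.data.get() });
        self.flag.store(UNLOCKED, Ordering::Release);
        result
    }
}

With a lock like this behind a raw pointer (e.g. *mut SpinMutex<Tree>), dealloc could take &self and lock the tree only for the deallocate call, without growing the allocation metadata struct.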

@icmccorm force-pushed the bsan branch 3 times, most recently from 220a58f to 81e1951 on May 21, 2025 20:40