
Make work packet buffer size configurable from one location #1285


Merged: 6 commits merged on Mar 24, 2025
Changes from 4 commits
4 changes: 2 additions & 2 deletions src/plan/tracing.rs
@@ -1,7 +1,7 @@
//! This module contains code useful for tracing,
//! i.e. visiting the reachable objects by traversing all or part of an object graph.

use crate::scheduler::gc_work::{ProcessEdgesWork, SlotOf};
use crate::scheduler::gc_work::{EDGES_WORK_BUFFER_SIZE, ProcessEdgesWork, SlotOf};
use crate::scheduler::{GCWorker, WorkBucketStage};
use crate::util::ObjectReference;
use crate::vm::SlotVisitor;
@@ -25,7 +25,7 @@ pub struct VectorQueue<T> {

impl<T> VectorQueue<T> {
/// Reserve a capacity of this on first enqueue to avoid frequent resizing.
const CAPACITY: usize = 4096;
const CAPACITY: usize = EDGES_WORK_BUFFER_SIZE;

/// Create an empty `VectorObjectQueue`.
pub fn new() -> Self {
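The constant documented above drives a reserve-on-first-enqueue pattern: the vector stays unallocated until the first item arrives, then reserves the full packet-sized capacity at once. A simplified sketch of that pattern is below; the type name and method set are made up for illustration, and this is not the actual `VectorQueue` code from `tracing.rs`.

```rust
/// A minimal sketch of a vector-backed queue that reserves its full
/// capacity on the first push, mirroring the idea behind `VectorQueue`.
struct SketchQueue<T> {
    buffer: Vec<T>,
}

impl<T> SketchQueue<T> {
    /// Mirrors `VectorQueue::CAPACITY`, which this PR ties to
    /// `EDGES_WORK_BUFFER_SIZE` (4096 by default).
    const CAPACITY: usize = 4096;

    fn new() -> Self {
        // Start empty: no allocation until something is actually enqueued.
        Self { buffer: Vec::new() }
    }

    fn push(&mut self, value: T) {
        if self.buffer.is_empty() {
            // Reserve the full packet-sized capacity up front so repeated
            // pushes up to CAPACITY never trigger a reallocation.
            self.buffer.reserve(Self::CAPACITY);
        }
        self.buffer.push(value);
    }

    /// Whether the queue has reached the work packet buffer size.
    fn is_full(&self) -> bool {
        self.buffer.len() >= Self::CAPACITY
    }

    /// Hand the accumulated items over, leaving the queue empty.
    fn take(&mut self) -> Vec<T> {
        std::mem::take(&mut self.buffer)
    }
}
```

Tying `CAPACITY` to `EDGES_WORK_BUFFER_SIZE` means the queue fills up at exactly the size a process-edges work packet is expected to carry, which is the point of defining the value in one place.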
8 changes: 7 additions & 1 deletion src/scheduler/gc_work.rs
@@ -10,6 +10,12 @@ use crate::*;
use std::marker::PhantomData;
use std::ops::{Deref, DerefMut};

/// Buffer size for [`ProcessEdgesWork`] work packets. This constant is exposed to users so that
/// they can use this value for places in their binding that interface with the work packet system,
/// specifically the transitive closure via `ProcessEdgesWork` work packets such as roots gathering
/// code or weak reference processing.
Member


I think the key statement is missing: we expect users to split their workload by this buffer size for process-edges-related work, so each work packet should include at most this many items in the buffer.

Collaborator Author


Done.
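For illustration, the expectation described in the comment above might look like the following in a binding's root-gathering code. This is a hedged sketch: the constant is duplicated locally only to keep the example self-contained (a real binding would import it from mmtk-core, whose exact re-export path is not shown in this diff), and `split_roots_into_packets` is a hypothetical helper, not mmtk-core API.

```rust
// Value introduced by this PR; duplicated here only so the sketch compiles
// on its own. A real binding would import it from mmtk-core instead.
const EDGES_WORK_BUFFER_SIZE: usize = 4096;

/// Hypothetical helper: split gathered root slots so that no single
/// process-edges work packet carries more than the buffer size.
fn split_roots_into_packets<Slot: Clone>(slots: Vec<Slot>) -> Vec<Vec<Slot>> {
    slots
        .chunks(EDGES_WORK_BUFFER_SIZE)
        .map(|chunk| chunk.to_vec())
        .collect()
}
```

Each returned chunk would then back one `ProcessEdgesWork`-style packet, keeping every packet at or below the buffer size so the scheduler can balance the work across GC workers.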

pub const EDGES_WORK_BUFFER_SIZE: usize = 4096;

pub struct ScheduleCollection;

impl<VM: VMBinding> GCWork<VM> for ScheduleCollection {
@@ -556,7 +562,7 @@ pub trait ProcessEdgesWork:
/// Higher capacity means the packet will take longer to finish, and may lead to
/// bad load balancing. On the other hand, lower capacity would lead to higher cost
/// on scheduling many small work packets. It is important to find a proper capacity.
const CAPACITY: usize = 4096;
const CAPACITY: usize = EDGES_WORK_BUFFER_SIZE;
/// Do we update object reference? This has to be true for a moving GC.
const OVERWRITE_REFERENCE: bool = true;
/// If true, we do object scanning in this work packet with the same worker without scheduling overhead.
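As a back-of-the-envelope illustration of the capacity trade-off in the doc comment above: for a fixed workload, the packet count scales inversely with `CAPACITY`, so a smaller capacity means more (and smaller) packets to schedule. The slot counts below are made up for the example; only the 4096 default comes from this PR.

```rust
/// Number of process-edges packets needed to cover `total_slots` slots
/// when each packet holds at most `capacity` slots (ceiling division).
fn packet_count(total_slots: usize, capacity: usize) -> usize {
    total_slots.div_ceil(capacity)
}

fn main() {
    // With the default buffer size of 4096, one million slots become
    // 245 packets; shrinking the capacity to 512 yields 1954 packets,
    // trading extra scheduling overhead for finer-grained load balancing.
    assert_eq!(packet_count(1_000_000, 4096), 245);
    assert_eq!(packet_count(1_000_000, 512), 1954);
}
```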