Conversation

@AlexD15216
Contributor

The write pointer was being incremented inside the for-loop that iterates
over every tensor in the batch. When a batch contains more than one tensor
(a common situation in RL), the pointer advanced once per tensor rather than
once per batch, leaving gaps and eventually skipping slots in the ring buffer.
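
A minimal sketch of the buggy pattern, assuming a dict-of-tensors ring buffer. The names (`buffers`, `write_idx`, `capacity`, `add_batch_buggy`) are hypothetical and not the actual code in this PR:

```python
import torch

capacity = 1024
buffers = {
    "obs":    torch.zeros(capacity, 4),
    "action": torch.zeros(capacity, 2),
    "reward": torch.zeros(capacity, 1),
}
write_idx = 0

def add_batch_buggy(batch: dict[str, torch.Tensor]) -> None:
    global write_idx
    num_samples = next(iter(batch.values())).shape[0]
    for key, tensor in batch.items():
        slots = torch.arange(write_idx, write_idx + num_samples) % capacity
        buffers[key][slots] = tensor
        # BUG: the pointer advances once per tensor, so with three tensors
        # it jumps 3 * num_samples per batch, and the tensors of the same
        # batch land in different, misaligned slots.
        write_idx = (write_idx + num_samples) % capacity
```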

Because all tensors share the same first dimension (batch size), the
pointer should move only once, after the entire batch has been copied.
The increment now happens after the loop, so the index grows
by `num_samples` exactly once per batch.
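
The corrected version of the same sketch, with the slot computation and the increment hoisted out of the loop (same hypothetical names as above):

```python
def add_batch_fixed(batch: dict[str, torch.Tensor]) -> None:
    global write_idx
    num_samples = next(iter(batch.values())).shape[0]
    # All tensors share the same first dimension, so one slot range
    # serves the whole batch.
    slots = torch.arange(write_idx, write_idx + num_samples) % capacity
    for key, tensor in batch.items():
        buffers[key][slots] = tensor
    # Advance the pointer exactly once per batch.
    write_idx = (write_idx + num_samples) % capacity
```

With a batch of 32 samples and three tensors, the buggy version advances the pointer by 96 slots per call while only 32 are actually filled coherently; the fixed version advances it by 32.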

