
sdl3::gpu refactor #158


Draft · wants to merge 3 commits into master
Conversation

@quaternic (Contributor) commented Apr 7, 2025

While trying out the gpu module, I hit various problems, including #147. That led me to do some experimenting with the API design, as implemented in this PR.

This definitely isn't finalized, but I'm posting it here since the API design needs discussion.

The most fundamental idea is to define the Rust equivalents for SDL's opaque types like

pub type Buffer = Extern<SDL_GPUBuffer>;

where Extern is just a transparent wrapper. This means that the Rust side can pass around &Buffer as a normal Rust reference, while under the hood it is the same *mut SDL_GPUBuffer that the SDL calls work with. This makes the abstraction a lot "thinner": e.g. a function binding storage buffers can take a &[&Buffer] and pass the slice's pointer and length directly to SDL.
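A minimal sketch of the idea (the names and the commented-out binding call are illustrative, not the PR's exact code):

// Stand-in for the opaque FFI type from sdl3-sys.
#[allow(non_camel_case_types)]
pub struct SDL_GPUBuffer {
    _opaque: [u8; 0],
}

// Transparent wrapper around the FFI type itself, so that a &Extern<T>
// has the same representation as a *mut T pointing at the SDL object.
#[repr(transparent)]
pub struct Extern<T>(T);

pub type Buffer = Extern<SDL_GPUBuffer>;

// A &[&Buffer] can therefore be handed to SDL as a pointer/length pair
// without any conversion or copying.
fn bind_storage_buffers(buffers: &[&Buffer]) {
    let ptr = buffers.as_ptr() as *const *mut SDL_GPUBuffer;
    let len = buffers.len() as u32;
    // unsafe { SDL_BindGPUComputeStorageBuffers(pass, first_slot, ptr, len) };
    let _ = (ptr, len);
}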

When such resources are created, you receive an Owned<'_, T>, which borrows the &Device it was obtained from and releases the resource when dropped. It also derefs to &T.
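A sketch of what that owning handle could look like (the release-dispatch mechanism here is an assumption, not the PR's exact code):

use std::ops::Deref;
use std::ptr::NonNull;

pub struct Device; // stand-in for the wrapped SDL_GPUDevice

// Owning handle: borrows the Device it was created from and releases
// the resource when dropped.
pub struct Owned<'a, T> {
    device: &'a Device,
    resource: NonNull<T>,
    // How to release this particular resource type; for a Buffer this
    // would end up calling SDL_ReleaseGPUBuffer.
    release: unsafe fn(&Device, NonNull<T>),
}

impl<'a, T> Deref for Owned<'a, T> {
    type Target = T;
    fn deref(&self) -> &T {
        unsafe { self.resource.as_ref() }
    }
}

impl<'a, T> Drop for Owned<'a, T> {
    fn drop(&mut self) {
        unsafe { (self.release)(self.device, self.resource) }
    }
}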

The different CommandBuffer passes (RenderPass, CopyPass, and ComputePass) are mutually exclusive on a given command buffer at any one time, so they are provided as methods on &mut CommandBuffer that take a closure for whatever the user wants to do with the pass. The same pattern is also used for CPU-side access to a TransferBuffer.
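For example, a render pass could look roughly like this (the signature and the usage shown are assumptions based on the description above, not the PR's exact code):

pub struct CommandBuffer; // stand-in for the wrapped SDL_GPUCommandBuffer
pub struct RenderPass;    // stand-in for the wrapped SDL_GPURenderPass
pub struct ColorTargetInfo;

impl CommandBuffer {
    // Begins a render pass, runs the closure, and ends the pass afterwards.
    // Taking &mut self means no other pass can be open on this command
    // buffer while the closure runs.
    pub fn render_pass<R>(
        &mut self,
        _color_targets: &[ColorTargetInfo],
        f: impl FnOnce(&mut RenderPass) -> R,
    ) -> R {
        // SDL_BeginGPURenderPass(...) would be called here
        let mut pass = RenderPass;
        let result = f(&mut pass);
        // ... and SDL_EndGPURenderPass(...) here
        result
    }
}

// Usage (the pass methods shown are hypothetical):
// cmd.render_pass(&targets, |pass| {
//     pass.bind_graphics_pipeline(&pipeline);
//     pass.draw_primitives(3, 1, 0, 0);
// });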

Of course, not everything is so clear-cut; for example, the command buffer itself should be either submitted or cancelled, and the latter is only possible if you haven't obtained a swapchain on it yet. So should its Drop auto-submit?

Any thoughts on how resource management should work with this crate more generally?

@gustafla (Contributor) commented

I can't answer any of your questions, but I like your idea very much and wish the current API abstractions were better. I have a (still private) project using this crate and the GPU API, and I've been thinking about refactoring my application code against this PR in order to provide feedback. If you're still interested in moving this forward, how about I try it next weekend?
