Add compute shaders #7345
Hey, @RandomGamingDev,

```js
initComputeFBO(width, height) {
  const gl = this._renderer.GL;
  this._computeFramebuffer = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, this._computeFramebuffer);

  // Create the texture that stores the compute shader's results
  this._computeTexture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, this._computeTexture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);

  // Texture parameters for the FBO attachment
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, this._computeTexture, 0);

  // Check for FBO completeness
  if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
    console.error("Failed to initialize compute framebuffer");
  }
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
}

computeShader(width, height, callback) {
  const gl = this._renderer.GL;
  gl.bindFramebuffer(gl.FRAMEBUFFER, this._computeFramebuffer);
  gl.viewport(0, 0, width, height);
  if (callback) callback(this._computeTexture);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
}
```
@davepagurek, any advice on this? Or is there something I should change?
@davepagurek, if no one's working on this, can I try solving it?
hi @Rishab87 and everyone! I think before we jump right into implementation, there are a few details to iron out, which I'd love all of your help with if you're interested!
Let me know what your thoughts are!
@davepagurek, I'm more inclined towards the fragment shader approach, since we can reuse some existing p5.js code, and overall it's a great balance between being powerful and being easy to understand. Talking about the interface exposed to users:
and then get the output like this:
Internally we'll set up the FBOs, initialize the particle data, and after each computation swap the input and output FBOs. To handle outputting more values than we can fit in one pixel, maybe we can use multiple pixels to represent a single particle, though I'm not sure. Object Model: I'm completely new to shaders, WebGL, etc., so please correct me if I said anything wrong!
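The input/output swap mentioned above is just an exchange of references between a "read" and a "write" target; here's a minimal sketch of that ping-pong pattern (the `PingPong` class and its field names are illustrative, not p5.js internals — in a real renderer the two slots would wrap WebGL framebuffers and their textures):

```javascript
// Ping-pong buffer sketch: each compute pass reads from `read` and
// renders into `write`, then the two are swapped so the next pass
// consumes the previous pass's output.
class PingPong {
  constructor(a, b) {
    this.read = a;   // texture/FBO sampled by the shader
    this.write = b;  // FBO currently bound as the render target
  }
  swap() {
    [this.read, this.write] = [this.write, this.read];
  }
}
```

Swapping references this way avoids copying any pixel data between passes.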
That sounds good so far! Right now it works well because position and velocity in 2D pack perfectly into a floating point vec4. Would this setup work easily if the simulation was in 3D? Or even if you just had position? From a technical standpoint, it gets a lot harder to output data that takes more than one pixel to store.

This may not be the best way, but one idea could be to make the user just specify a function that returns a struct with the state data. Possibly even just all floats for simplicity, e.g.:

```glsl
struct State {
  float px;
  float py;
  float pz;
  float vx;
  float vy;
  float vz;
};

State compute() {
  State result;
  result.px = /* something */;
  // ...
  return result;
}
```

...then, under the hood, we could interleave the outputs into adjacent pixels. So we'd automatically generate a main function that looks something like:

```glsl
void main() {
  State result = compute();
  if (mod(gl_FragCoord.x, 2.0) == 0.0) {
    gl_FragColor = vec4(result.px, result.py, result.pz, result.vx);
  } else {
    gl_FragColor = vec4(result.vy, result.vz, 0.0, 0.0);
  }
}
```

That could let us output more data than one pixel can hold, but it would also then require us to automatically generate something similar to decode the texture data. Anyway, that's just one idea to consider! I'm open to others too.
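As a rough illustration of what that generated decode step might do on the CPU side: assuming the two-pixels-per-particle interleaving above (even-column pixel holds `px, py, pz, vx`, odd-column pixel holds `vy, vz, 0, 0`) and a `Float32Array` read back from the output texture, a hypothetical `decodeStates` helper could look like this:

```javascript
// Hypothetical decoder for the interleaved layout in the generated
// main() above. `pixels` is a flat Float32Array of RGBA values read
// back from the compute output texture; every 8 floats (2 pixels)
// describe one particle's State.
function decodeStates(pixels) {
  const states = [];
  for (let base = 0; base + 8 <= pixels.length; base += 8) {
    states.push({
      px: pixels[base],     py: pixels[base + 1], pz: pixels[base + 2],
      vx: pixels[base + 3], vy: pixels[base + 4], vz: pixels[base + 5],
      // pixels[base + 6] and [base + 7] are padding
    });
  }
  return states;
}
```

The encoder in the shader and this decoder would both be generated from the same struct definition, so the layouts stay in sync.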
Yes, you're right, my solution was kind of rigid; a user-defined struct is a great approach. The user API will then look something like this:

```js
const instance = new computeShader({
  properties: {
    position: { type: 'vec3' },
    velocity: { type: 'vec3' },
    mass: { type: 'float' },
    color: { type: 'vec4' }
  },
  computeFunction: `
    void updateParticle(inout vec3 position, inout vec3 velocity, inout float mass, inout vec4 color) {
      position += velocity;
      // ... other update logic
    }
  `
});
```

Based on this we'll generate some GLSL code, plus a function to get the state of a particle, and then the user can access the particles something like this:

```js
let particles = instance.getParticles();
for (let particle of particles) {
  point(particle.position[0], particle.position[1], particle.position[2]);
}
```

Overall this approach sounds good to me!
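For the "generate some GLSL code" step, here's a minimal sketch (illustrative only, not actual p5.js code; `generateStruct` is a hypothetical helper) of turning a `properties` spec like the one above into a GLSL struct declaration:

```javascript
// Hypothetical codegen helper: build a GLSL struct declaration from a
// { fieldName: { type } } spec, preserving insertion order.
function generateStruct(name, properties) {
  const fields = Object.entries(properties)
    .map(([field, { type }]) => `  ${type} ${field};`)
    .join('\n');
  return `struct ${name} {\n${fields}\n};`;
}
```

For example, `generateStruct('State', { position: { type: 'vec3' }, mass: { type: 'float' } })` would produce a `struct State { vec3 position; float mass; };` declaration; the same spec could also drive the pixel-packing and decoding code so everything stays consistent.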
Increasing access
Although WebGL doesn't have official compute shaders, they can be emulated using a few vertex shaders, an FBO, and fragment shaders for the actual calculations.
While p5.js's focus isn't computation, this would be perfect for many types of rendering (e.g. raytracing, raymarching, and certain types of culling). It wouldn't provide a speed benefit compared to doing it yourself, but it would require less boilerplate, allow for the computation visualizations that are also popular in p5.js, and make it easier to create more advanced graphics in p5.js. It would also introduce a lot of beginners to the topic of compute shaders: helping beginners is one of p5.js's key principles, and is part of why shaders, and attempts to make shaders easier, exist in p5.js at all, which is why I think this would work well.
Most appropriate sub-area of p5.js?
Feature request details
Create compute shader equivalents to `createShader()` and `loadShader()` (e.g. `createComputeShader()` and `loadComputeShader()`). p5.js would handle the boilerplate in terms of setting up the vertex shaders, part of the fragment shader, and the FBO, meaning that the user would only deal with specific variable inputs and outputs, with the output getting written to a `TypedArray` buffer.