
Performance Improvement Ideas #24

Open
gkjohnson opened this issue Dec 16, 2024 · 9 comments

Labels
package: atmosphere Specific to @takram/three-atmosphere

Comments

@gkjohnson
Contributor

There are a few places I've seen that might be able to improve performance - though I've noticed this already works well on mobile (at least on my Samsung Galaxy from 2020), which is great! But here are some ideas to shave off a bit more overhead. I can help with some of these changes, as well, if any of them sound interesting:

Fewer Render Passes

I noticed that using the ToneMapping component with the r3f effect composer will result in two (!!) more render passes than necessary. I've made some issues in the upstream repo to address this - pmndrs/react-postprocessing#304 & pmndrs/react-postprocessing#305 - so I'm not sure if there's much to do in this repo. One current workaround is to correct the ToneMapping component's attributes:

```tsx
<ToneMapping ref={ pass => pass.attributes = 0 } /* ... */ />
```
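For context, a minimal sketch of where that workaround sits in a typical r3f composer setup (the composer contents besides ToneMapping are assumptions, and the null check just guards React's unmount callback):

```tsx
import { EffectComposer, ToneMapping } from '@react-three/postprocessing'

export function Effects() {
  return (
    <EffectComposer>
      {/* Clearing the attributes drops the CONVOLUTION flag so this effect
          can be merged into the same fullscreen pass as adjacent effects. */}
      <ToneMapping ref={pass => { if (pass) pass.attributes = 0 }} />
    </EffectComposer>
  )
}
```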

Render Sky In AerialPerspective Effect

Right now the Sky component is effectively rendered as a fullscreen pass before everything else in the scene. However, a lot of the time the sky won't be visible at all because the camera is looking at the terrain, resulting in a wasted fullscreen render. Instead, the sky could be rendered in the same shader pass as the AerialPerspectiveEffect to cut down on the amount of pixel output. I believe this should be a straightforward improvement.

Alternatively we could set the Sky component's renderOrder to a higher number so it always renders after the terrain and the fragment shader can skip running whenever the depth test fails.
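A minimal sketch of that alternative, assuming the r3f Sky component forwards renderOrder to its underlying mesh the way standard r3f elements do (the import path and the value 1 are assumptions):

```tsx
import { Sky } from '@takram/three-atmosphere/r3f'

export function SkyAfterTerrain() {
  // Default renderOrder is 0; a higher value makes the sky draw after the
  // opaque terrain so occluded fragments can fail the depth test early.
  return <Sky renderOrder={1} />
}
```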

Derive Normals from Depth

I'm not exactly sure how EffectComposer handles this, but you can get a depth texture "for free" with all three.js shaders by attaching a DepthTexture to the render target - meaning it would also work with the dithering fade effect (see #3 (comment)), though of course it won't be as precise as real normals. This would remove the need for a separate NormalPass in post processing and still provide normals without MRT - until something like this is better supported in three.js.
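A minimal sketch of that render-target setup in plain three.js (the size comes from the window here just for illustration):

```ts
import { DepthFormat, DepthTexture, UnsignedIntType, WebGLRenderTarget } from 'three'

const width = window.innerWidth
const height = window.innerHeight

// Attaching a DepthTexture exposes the depth buffer as a readable texture
// after the scene render, with no additional pass.
const renderTarget = new WebGLRenderTarget(width, height)
renderTarget.depthTexture = new DepthTexture(width, height)
renderTarget.depthTexture.format = DepthFormat
renderTarget.depthTexture.type = UnsignedIntType
```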

Let me know what you think!

@shotamatsuda
Member

shotamatsuda commented Dec 16, 2024

I haven’t addressed everything yet. Below are just my initial impressions.

Fewer Render Passes

I hadn't noticed that ToneMapping is marked as a convolution. Yes, this separates the pass and requires another render. I'm not sure why they marked it that way.

Render Sky In AerialPerspective Effect

I agree it's better to render the sky (or radiance at infinite distance) within AerialPerspectiveEffect (see the comment). But I think rendering it in a post-processing pass effectively disables transparency.

> Alternatively we could set the Sky component's renderOrder to a higher number so it always renders after the terrain and the fragment shader can skip running whenever the depth test fails.

I don’t fully understand this. Does it mean rendering the sky after everything else and performing depth culling in the shader, or are fragments natively skipped when the depth test fails?

Derive Normals from Depth

I don't fully understand this either. There's an option reconstructNormal that computes normals from depth instead of reading them from the normal buffer. Is that what you're referring to? The downside is that normals aren't interpolated, which is what TileCreasedNormalsPlugin is intended to address.
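For reference, a hypothetical sketch of how that option might be enabled in the r3f setup (the prop name comes from this thread; the import path and surrounding composer are assumptions):

```tsx
import { EffectComposer } from '@react-three/postprocessing'
import { AerialPerspective } from '@takram/three-atmosphere/r3f'

export function Effects() {
  return (
    <EffectComposer>
      {/* Derive normals from depth instead of reading a normal buffer. */}
      <AerialPerspective reconstructNormal />
    </EffectComposer>
  )
}
```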

@shotamatsuda
Member

shotamatsuda commented Dec 16, 2024

> But I think rendering it in a post-processing pass effectively disables transparency.

In the "deferred" setup, transparency is already broken, though.

@gkjohnson
Contributor Author

> But I think rendering it in a post-processing pass effectively disables transparency.

What do you mean by this? Do you mean canvas transparency - effectively letting HTML, CSS, etc. show through the "sky" portion of the image? I see that the "sky.frag" shader is already not outputting any non-1.0 alpha value, so enabling transparency would mean not rendering the sky, right? If so then yes, transparency would no longer work, assuming users are omitting the sky component to enable transparency. But this could also be enabled with a setting that toggles a #define in the shader.
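As a generic sketch of that #define toggle in three.js (SKIP_SKY is an illustrative name, not the project's actual API):

```ts
import { ShaderMaterial } from 'three'

// Toggling a define and flagging needsUpdate recompiles the shader, so the
// sky output can be guarded with #ifdef SKIP_SKY in the fragment code.
function setSkipSky(material: ShaderMaterial, enabled: boolean): void {
  if (enabled) {
    material.defines.SKIP_SKY = ''
  } else {
    delete material.defines.SKIP_SKY
  }
  material.needsUpdate = true
}
```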

> I don't fully understand this. Does it mean rendering the sky after everything else and performing depth culling in the shader, or are fragments natively skipped when the depth test fails?

Right now the Sky component is rendered with renderOrder = -1, which means it always renders first, before anything else is on screen, so the fragment shader runs for every pixel. Typically you want to render opaque objects near to far so that anything in front of other objects has already written to the depth buffer, and the work for an occluded pixel can be ended early, preventing unnecessary fragment shader work. This optimization of checking the output depth before running the fragment shader is called Early Z Rejection.

As an aside, if a fragment shader writes to gl_FragDepth to adjust the depth output, the GPU can no longer perform this early-z rejection optimization. Three.js' logarithmic depth buffer setting works by writing to the fragment depth field, which is why I try to avoid it when I can.
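To make the aside concrete, a minimal sketch with plain three.js (both values shown here are the defaults):

```ts
import { WebGLRenderer } from 'three'

const renderer = new WebGLRenderer({
  // The logarithmic depth buffer writes gl_FragDepth in the fragment shader,
  // which disables early-z rejection; leave it off when precision allows.
  logarithmicDepthBuffer: false
})

// Opaque objects are sorted front-to-back by default, populating the depth
// buffer early so later draws can be rejected before shading.
renderer.sortObjects = true
```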

> I don't fully understand this either. There's an option reconstructNormal that computes normals from depth instead of reading them from the normal buffer. Is that what you're referring to? The downside is that normals aren't interpolated, which is what TileCreasedNormalsPlugin is intended to address.

Oh, I see! But you're right that the normals won't be interpolated 🤔 There's probably not as much to do here in that case, then.

@shotamatsuda
Member

shotamatsuda commented Dec 16, 2024

I mean transparency for materials marked as transparent, like in the image below. It's not working great right now anyway (I noticed that transmission is excessively affected by the sun, possibly due to large values in the color buffer, and so on), but rendering the sky in the post-process completely loses this transparency (aside from alpha hash).

> This optimization of checking the output depth before running the fragment shader is called Early Z Rejection.

I see. I thought this optimization wasn't done natively on the GPU (meaning every fragment is processed anyway, then depth tested and discarded as necessary). But if that's the case, enabling the depth test and rendering the sky after the other scene objects makes sense to me. In that case, we'll need a depth value for the sky. The depth test in the aerial perspective and stars will be affected. I'm good with that and will look into it.

@shotamatsuda
Member

In that case, we'll need a depth value for the sky.

Ah, it’s not necessary.

@shotamatsuda
Member

shotamatsuda commented Dec 16, 2024

> I hadn't noticed that ToneMapping is marked as a convolution. Yes, this separates the pass and requires another render. I'm not sure why they marked it that way.

I remember there was an "adaptive" tone mapping mode (maybe there still is), and it involved convolution to obtain the mean luminance.

@gkjohnson
Contributor Author

> I mean transparency for materials marked as transparent, like in the image below.

OH. I see now - yes, this is something that would no longer work. It might be good to provide an option for implementing this either in the aerial perspective fullscreen pass or via the "Sky" component.

> I thought this optimization wasn't done natively on the GPU (meaning every fragment is processed anyway, then depth tested and discarded as necessary)

I believe it's implementation-dependent but I'd expect it to be fairly common.

In that case, we'll need a depth value for the sky.

Ah, it’s not necessary.

I'd think we'd still want to figure out how to make sure the fullscreen pass is rendered at the furthest possible depth value, but I'll have to look into how to do that correctly.
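One common approach, sketched here with an illustrative ShaderMaterial rather than the project's actual sky material: force the geometry to the far plane in the vertex shader and depth-test with LessEqualDepth.

```ts
import { LessEqualDepth, ShaderMaterial } from 'three'

const farPlaneSkyMaterial = new ShaderMaterial({
  vertexShader: /* glsl */ `
    void main() {
      vec4 clip = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
      // z = w yields an NDC depth of exactly 1.0, the furthest possible value.
      gl_Position = clip.xyww;
    }
  `,
  fragmentShader: /* glsl */ `
    void main() {
      gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); // placeholder sky color
    }
  `,
  depthWrite: false
})
// LessEqualDepth lets fragments at exactly the far plane pass where nothing
// nearer has been drawn, while everything else occludes the sky.
farPlaneSkyMaterial.depthFunc = LessEqualDepth
```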

> I remember there was an "adaptive" tone mapping mode (maybe there still is), and it involved convolution to obtain the mean luminance.

If this is the case then it would make more sense to only mark the pass as a "CONVOLUTION" when adaptive tone mapping is enabled. I'll see if I can ask the project's contributors more directly if we don't hear back this week.
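A hedged sketch of that idea, reusing the attributes workaround from earlier in the thread (ToneMappingMode is from the postprocessing package; assuming REINHARD2_ADAPTIVE is the only mode that needs the luminance convolution):

```ts
import { ToneMappingEffect, ToneMappingMode } from 'postprocessing'

// Keep the CONVOLUTION attribute only for the adaptive mode, which computes
// mean luminance; any other mode can merge into a shared fullscreen pass.
function relaxAttributes(effect: ToneMappingEffect): void {
  if (effect.mode !== ToneMappingMode.REINHARD2_ADAPTIVE) {
    effect.attributes = 0
  }
}
```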

@shotamatsuda added the package: atmosphere label Dec 16, 2024
@shotamatsuda
Member

> Render Sky In AerialPerspective Effect

Added in e31e2d2

@CodyJasonBennett

Awesome project.

I resolved an issue with extraneous passes upstream, which should tame the bandwidth cost: pmndrs/react-postprocessing#305.

I also remember sharing some resources on Twitter to help with quality/performance from a sampling perspective, but there's a lot to be done that isn't explored so well. It's not too hard to do something novel here if you give it time. SoTA sky systems don't converge in real time, and 4D simulation formats just aren't sparse enough for the web. I hope to share more insights in time with my research and experiments, but I utilize compute shader features that can't be emulated in WebGL and texture features that are plagued with ANGLE bugs.

For further reference, check out the following articles on depth -> normals derivation. Hi-Z will also help with god rays, but you may need to reproject depth so it fits in the pipeline.
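For readers following along, the core of that derivation fits in a couple of lines of GLSL; here it is embedded as a TypeScript string, with viewPositionFromDepth as an assumed helper that unprojects a UV + depth sample into view space:

```ts
// The cross product of the screen-space derivatives of the reconstructed
// view-space position approximates the surface normal. The result is
// faceted, since nothing is interpolated across the triangle - the
// trade-off discussed earlier in this thread.
const reconstructNormalGLSL = /* glsl */ `
  vec3 p = viewPositionFromDepth(vUv, texture2D(depthBuffer, vUv).r);
  vec3 normal = normalize(cross(dFdx(p), dFdy(p)));
`
```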
