Description
There are a few places I've noticed that could be improved in terms of performance - though this already works well on mobile (at least on my Samsung Galaxy from 2020), which is great! Here are some ideas to shave off a bit more overhead. I'm happy to help with any of these changes if they sound interesting:
Fewer Render Passes
I noticed that using the ToneMapping component with the r3f effect composer results in two (!!) more render passes than necessary. I've opened issues in the upstream repo to address this - pmndrs/react-postprocessing#304 & pmndrs/react-postprocessing#305 - so I'm not sure there's much to do in this repo. One current workaround is to correct the ToneMapping component's attributes:
```jsx
<ToneMapping ref={pass => { if (pass != null) pass.attributes = 0 }} /* ... */ />
```
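For context, here's roughly where that workaround would sit in a typical r3f composer setup. This is a minimal sketch assuming the standard `@react-three/postprocessing` components, not this repo's exact effect stack:

```tsx
import { EffectComposer, ToneMapping } from '@react-three/postprocessing'

export function Effects() {
  return (
    <EffectComposer>
      {/* other effects, e.g. aerial perspective, go here */}
      <ToneMapping
        ref={pass => {
          // Clearing the effect's attributes is the workaround above: it keeps
          // the composer from scheduling the extra render passes described in
          // the upstream issues.
          if (pass != null) {
            pass.attributes = 0
          }
        }}
      />
    </EffectComposer>
  )
}
```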
Render Sky In AerialPerspective Effect
Right now the Sky component is effectively rendered as a full-screen pass before everything else in the scene. However, much of the time the sky isn't visible at all because the camera is looking at the terrain, so that full-screen render is wasted. Instead, the sky could be rendered in the same shader pass as the AerialPerspectiveEffect to cut down on the number of pixels shaded. I believe this would be a straightforward win.
Alternatively, we could set the Sky component's render order to a higher number so it always renders after the terrain, letting the fragment shader skip whenever the depth test fails.
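A rough sketch of that render-order alternative in plain three.js terms - the names here are illustrative and don't reflect the actual Sky component internals:

```ts
import type { Material, Mesh } from 'three'

// Illustrative only: assumes a screen-space sky mesh with a single material;
// the real Sky component may expose this differently.
function drawSkyAfterTerrain(skyMesh: Mesh): void {
  skyMesh.renderOrder = 1 // opaque terrain keeps the default render order of 0
  const material = skyMesh.material as Material
  material.depthTest = true // let early-z reject fragments covered by terrain
  material.depthWrite = false // the sky itself shouldn't write depth
  // With the terrain already in the depth buffer, the GPU can skip the sky's
  // fragment shader wherever the depth test fails, so only visible sky pixels
  // pay for the atmosphere shading.
}
```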
Derive Normals from Depth
I'm not exactly sure how EffectComposer handles this, but you can get a depth texture for "free" with all three.js shaders by attaching a DepthTexture to the render target - meaning it would also work with the dithering fade effect (see #3 (comment)). Normals reconstructed from depth won't be as precise as real normals, but this would remove the need for a separate NormalPass in post processing and still provide normals without MRT - until something like this is better supported in three.js.
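As a very rough sketch of what that could look like (none of this is the repo's actual code, and `getViewPosition` is a hypothetical helper that unprojects uv + depth with the inverse projection matrix):

```ts
import { DepthTexture, UnsignedIntType, WebGLRenderTarget } from 'three'

// Attach a DepthTexture to the scene's render target; the depth buffer is then
// available to later passes without re-rendering any geometry.
const width = 1920 // illustrative; use the actual drawing-buffer size
const height = 1080
const sceneTarget = new WebGLRenderTarget(width, height)
sceneTarget.depthTexture = new DepthTexture(width, height, UnsignedIntType)

// Inside the post-processing fragment shader, a flat-shaded normal can be
// reconstructed from the screen-space derivatives of the view-space position:
const normalFromDepth = /* glsl */ `
  vec3 viewPosition = getViewPosition(uv, depth); // hypothetical unproject helper
  vec3 normal = normalize(cross(dFdx(viewPosition), dFdy(viewPosition)));
`
```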
Let me know what you think!