
Add a volume rendering sample with 3D textures #407

Merged · 3 commits · Mar 28, 2024

Conversation

@mehmetoguzderin (Contributor) commented Mar 26, 2024

oguz-20240327-volume3d

First of all, this PR adds:

  • A sample,
  • A data file,
  • A data conversion script,
  • A README for the respective data directory.

The sample works in the latest preview or nightly builds of Chrome, Firefox, and Safari with the appropriate flags enabled.

Although this sample is quite simple, I would like to explain some of the design decisions that shaped it.

  • Naming
    • I chose volumeRenderingTexture3D because volume rendering can be done with 3D textures, sparse octrees, etc.; this naming makes the sample easy to find for anyone looking for either volume rendering or 3D texture usage.
  • Data type
    • I chose organic, biological data because this type of photometric data is hard to reduce: if one wants to give the user control, a dense volumetric representation is necessary. (Fluid simulations are also suitable for volumes, but more often than not, meshing works wonders as an approximation there, whereas meshing even simple medical data requires highly sophisticated segmentation that loses information.) Furthermore, this type of rendering is valuable to the community at large, yet missing from more fundamental sample repositories like this one, which pushes people with less exposure toward libraries or pipelines where such data happens to load directly.
  • Data source
    • Initially, I hoped to use a more industrial scan, but unfortunately the open-access data situation was not very helpful. For human data, ethics is always a risk, even with consent; since this sample makes the data more public, virtually cutting into a real person's scan would be hard to approve. Luckily, I came across BrainWeb, which contains a very representative, entirely simulated human brain scan, including the skull.
  • Data acknowledgment
    • Since the data is everything for this sample, I made sure to explicitly reference its source page in the sample description and to give a complete list of references in the README of the data directory.
  • Data processing
    • I wrote a small script that processes the data down to dimensions compatible with 2D compressed-texture block sizes. If compressed 3D textures become a possibility one day, I will convert the data to a more suitable layout and gzip it. The script has some redundant steps, but I would advocate keeping them so that per-slice compression can be plugged in later (I have that working with Metal and Vulkan directly; it is simply omitted here).
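    To illustrate the kind of size adjustment this implies (a hedged sketch, not the actual conversion script; the block size of 4 matches common BC and 4×4 ASTC formats):

    ```typescript
    // Illustrative sketch: pad each volume dimension up to a multiple of a
    // compressed-texture block size so that slices could be block-compressed.
    function padToBlockSize(dim: number, block = 4): number {
      return Math.ceil(dim / block) * block;
    }

    function paddedVolumeDims(
      [w, h, d]: [number, number, number],
      block = 4
    ): [number, number, number] {
      // Depth needs no padding for 2D (per-slice) block compression.
      return [padToBlockSize(w, block), padToBlockSize(h, block), d];
    }
    ```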
  • DecompressionStream
    • Since DecompressionStream is now widely available, I use it to reduce the size of the volume binary. This takes the data from 7.1 MB to 3.8 MB (the ratio would be even better with ASTC or BC4, but that is not possible right now) without significant decompression code bloat.
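    A minimal sketch of this kind of decompression (the fetch path in the usage comment is illustrative, not the sample's actual asset path):

    ```typescript
    // Minimal sketch: decompress a gzip byte stream with the built-in
    // DecompressionStream API (available in modern browsers and Node 18+).
    async function gunzip(
      stream: ReadableStream<Uint8Array>
    ): Promise<ArrayBuffer> {
      const decompressed = stream.pipeThrough(new DecompressionStream('gzip'));
      // Response is a convenient way to collect a stream into an ArrayBuffer.
      return await new Response(decompressed).arrayBuffer();
    }

    // Usage (the path is illustrative):
    // const response = await fetch('../assets/volume.bin-gz');
    // const volume = new Uint8Array(await gunzip(response.body!));
    ```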
  • API use
    • To make it easier for the reader, I kept the code close to the other samples, particularly helloTriangleMSAA and rotatingCube, rather than deviating from them.
  • Multisampling in the pipeline
    • This choice smooths the visuals without pixelation and without bloating the shader with multiple rays per pixel.
  • Ray generation
    • To generate rays, I use the inverse MVP in the vertex shader to determine the start, end, and step per vertex rather than per pixel, then use the interpolated start and step in the fragment shader to reduce its burden, significantly simplifying the loop. A bonus of this approach is that the projection matrix's near and far planes act as the clipping mechanism, rather than passing clip bounds through another uniform or extracting them from the matrix, which is very valuable for volume rendering.
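    As a CPU-side sketch of the unprojection involved (an illustration of the math, not the sample's shader code): transform a clip-space position at the near plane (z = 0 in WebGPU) and at the far plane (z = 1) by the inverse MVP, then divide by w to get the ray's endpoints.

    ```typescript
    type Vec3 = [number, number, number];
    type Vec4 = [number, number, number, number];

    // Column-major 4x4 matrix times vec4 (matching the wgpu-matrix layout).
    function transform(m: Float32Array, v: Vec4): Vec4 {
      const out: Vec4 = [0, 0, 0, 0];
      for (let row = 0; row < 4; row++) {
        out[row] =
          m[0 * 4 + row] * v[0] +
          m[1 * 4 + row] * v[1] +
          m[2 * 4 + row] * v[2] +
          m[3 * 4 + row] * v[3];
      }
      return out;
    }

    // Unproject an NDC position at the near (z = 0) and far (z = 1) planes
    // through the inverse MVP to get the ray's start and end in model space.
    function rayEndpoints(invMvp: Float32Array, ndcX: number, ndcY: number) {
      const near = transform(invMvp, [ndcX, ndcY, 0, 1]);
      const far = transform(invMvp, [ndcX, ndcY, 1, 1]);
      const start: Vec3 = [near[0] / near[3], near[1] / near[3], near[2] / near[3]];
      const end: Vec3 = [far[0] / far[3], far[1] / far[3], far[2] / far[3]];
      return { start, end };
    }
    ```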
  • Ray traversal
    • Instead of doing a ray-AABB intersection test, I rely on the near and far planes being close enough to the volume and decide whether to use each sample with a select statement on whether the position is inside the volume.
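    A hedged TypeScript sketch of this traversal (the sample's WGSL uses textureSample and select; here the volume is a plain function over the unit cube):

    ```typescript
    type Vec3 = [number, number, number];

    // March from `start` in increments of `step`, accumulating samples only
    // where the position lies inside the unit volume [0, 1]^3. The branchless
    // `inside ? v : 0` mirrors WGSL's select(0.0, v, inside).
    function marchRay(
      start: Vec3,
      step: Vec3,
      steps: number,
      sample: (p: Vec3) => number
    ): number {
      let acc = 0;
      for (let i = 0; i < steps; i++) {
        const p: Vec3 = [
          start[0] + step[0] * i,
          start[1] + step[1] * i,
          start[2] + step[2] * i,
        ];
        const inside = p.every((c) => c >= 0 && c <= 1);
        acc += inside ? sample(p) : 0;
      }
      return acc / steps;
    }
    ```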
  • No dithering
    • Dithering is a natural next step to improve the visuals, but I think it is better left to the reader; the number of steps already compensates, producing a reasonably good image.
  • No further operations
    • Secondary rays or LUTs can produce more exciting visuals, but I think sticking to the fundamentals is better: these additions would fragment the sample, improving the looks at the cost of readability.
  • No mipmaps
    • I think compression is the better thing to demonstrate in this sample, so I decided not to use mipmaps. This keeps the calculations simple while preserving the original resolution, leaving room to add alternative data projection modes later, each of which would require a different mipmap reduction (max, min, avg, etc.).
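    To illustrate the projection modes mentioned (a hypothetical sketch; the sample itself only averages along the ray):

    ```typescript
    // Hypothetical sketch: reduce the densities sampled along one ray in
    // three different ways, corresponding to common volume projection modes.
    const projections = {
      // Maximum intensity projection (MIP), common in medical visualization.
      max: (samples: number[]) => Math.max(...samples),
      // Minimum intensity projection.
      min: (samples: number[]) => Math.min(...samples),
      // Average projection, as this sample's shader effectively computes.
      avg: (samples: number[]) =>
        samples.reduce((a, b) => a + b, 0) / samples.length,
    };
    ```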
  • Visual validation
    • I validated that the sample's visuals match the output of VTK on the same data. I did not include that validation script in the repository to avoid adding too many tools for one particular sample.
  • Data README
    • I wrote the data README in AsciiDoc for its table-of-contents convenience and its more flexible table syntax, in case that becomes necessary for other volume data.
  • Data .gitignore
    • I made the .gitignore for the data processing utility local to the data directory to avoid bloating the top-level .gitignore with specific names.

Since this PR is my first attempt at adding a sample to this repository, I will gladly update the code after carefully reading any feedback. Please point out anything that would make this sample more valuable to the community. Thank you.

Fixes #363

@mehmetoguzderin (Contributor, Author)

I uploaded a small change that reduces the number of samples to 64 (which preserves reasonable visual quality and the ability to explore structures by closing the near-far gap) and fixes the outdated path and name of the data processing script (the URL should work once merged upstream). With that, the sample should be ready for review and feedback. Thank you very much.

@greggman (Collaborator) left a comment

This is great!

I just had some nits. Feel free to ignore them if you think the code is better as is.

sample/volumeRenderingTexture3D/main.ts — five review threads (outdated, resolved)
@mehmetoguzderin (Contributor, Author)

@greggman Thank you very much for your review; these are all great improvements, so I made a commit that includes all of the suggestions, with an npm run fix to match CI, too!

Unless I forgot something, it should be OK to merge if it looks good to you! Thank you in advance for your support.

@greggman greggman merged commit 67eec2d into webgpu:main Mar 28, 2024
1 check passed
Successfully merging this pull request may close these issues.

Sample using 3D textures