Via the I2S thread, I started reading the explainer, and it seems like several best practices are missing and design directions are not enunciated:
- the end-user problem/benefit is not front-and-center
- the explainer includes proposed IDL, which should be relegated to a spec draft
- the explainer does not identify problems related to XR camera access that are not going to be addressed by the design (non-goals)
- the explainer is hand-wavy about why this is not being proposed as an extension to getUserMedia()
- why does the explainer not provide a way to also upload "raw" image data to a WebGL texture?
- for a "raw" image, the API seems to lack color-space controls/hints on input and output, and does not document the format of the resulting texture. Why not?
- how does this feature handle multiple cameras? Stereo cameras? Why are views and cameras always linked 1:1?
- how will this work in the context of Offscreen Canvas?
- the considered-alternatives section contains only a single other design, when we can easily imagine many different designs
- the spec doc does not link to the explainer
- the explainer does not link to the spec
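On the getUserMedia() and WebGL-texture points above: the web platform today already lets a page route camera frames into a WebGL texture through the media-capture path, which is why the explainer needs to argue explicitly against it. A minimal sketch of that existing path (assuming the page holds a WebGLRenderingContext `gl`; the helper name and structure here are illustrative, not from the explainer):

```javascript
// Sketch of the status-quo alternative: getUserMedia() + texImage2D.
// startCameraTexture is a hypothetical helper name, not a proposed API.
async function startCameraTexture(gl) {
  // Acquire a camera stream via the existing media-capture API.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement('video');
  video.srcObject = stream;
  await video.play();

  // Browsers accept an HTMLVideoElement directly as a texImage2D source,
  // so the current frame can be uploaded without an intermediate copy
  // in script.
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
  return { texture, video };
}
```

Whatever its drawbacks for XR (pose alignment, latency, permission UX), the explainer should spell out why this path is insufficient rather than assert it.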