To define the protocol, we must first agree on an object model. The part that we seem to (more or less) agree on can be illustrated like this:
The canvas is the toplevel window to draw to. It can be a physical window on screen, or a virtual (off-screen) canvas. The space on the canvas is divided to accommodate multiple visualizations (what Matlab calls subplots). Note that there is one region for the actual scene, and another for the ticks, labels, etc. Layout is tricky.
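To make the canvas/view split concrete, here is a minimal sketch in Python. All names (`Canvas`, `View`, `Rect`) and fields are placeholders for illustration, not part of any agreed protocol:

```python
from dataclasses import dataclass, field


@dataclass
class Rect:
    """Region on the canvas, in normalized (0..1) coordinates."""
    x: float
    y: float
    width: float
    height: float


@dataclass
class View:
    """One subplot-like region: a rect for the scene itself, plus a
    surrounding margin reserved for ticks, labels, etc."""
    scene: Rect
    margin: float = 0.1


@dataclass
class Canvas:
    """Toplevel drawing target: a physical on-screen window or a
    virtual (off-screen) canvas."""
    width: int
    height: int
    offscreen: bool = False
    views: list[View] = field(default_factory=list)


# Example: a canvas holding two side-by-side views ("subplots").
canvas = Canvas(800, 600, views=[
    View(scene=Rect(0.0, 0.0, 0.5, 1.0)),
    View(scene=Rect(0.5, 0.0, 0.5, 1.0)),
])
```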
A visual is an object that describes one "thing" being visualized, e.g. a line, markers, an image, a volume, a mesh, etc. The actual data for a visual is stored in buffers and textures, so that data can easily be shared. Visuals have properties that affect their appearance. Each visual also has a transform.
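A rough sketch of what a visual could carry, again with made-up names; buffers and textures are referenced by id here so that the underlying data can be shared between visuals:

```python
from dataclasses import dataclass, field

# 4x4 identity matrix as a flat row-major list.
IDENTITY = [1.0, 0.0, 0.0, 0.0,
            0.0, 1.0, 0.0, 0.0,
            0.0, 0.0, 1.0, 0.0,
            0.0, 0.0, 0.0, 1.0]


@dataclass
class Visual:
    """One 'thing' being visualized: a line, markers, an image, ..."""
    kind: str                                                     # e.g. "line", "markers", "mesh"
    buffers: dict[str, int] = field(default_factory=dict)         # attribute name -> shared buffer id
    textures: dict[str, int] = field(default_factory=dict)        # sampler name -> shared texture id
    properties: dict[str, object] = field(default_factory=dict)   # appearance props (color, width, ...)
    transform: list[float] = field(default_factory=lambda: list(IDENTITY))


# Example: two line visuals sharing the same position buffer (id 1).
line1 = Visual("line", buffers={"positions": 1}, properties={"color": "#ff0000"})
line2 = Visual("line", buffers={"positions": 1}, properties={"color": "#0000ff"})
```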
A proposal: the view also defines the camera, the interaction model (orbit, trackball), and the toplevel visual (or a list of visuals if we don't have a scenegraph).
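Under that proposal, the hypothetical `View` from the sketch above would grow roughly these fields (names and defaults are, again, placeholders):

```python
from dataclasses import dataclass, field


@dataclass
class View:
    """View per the proposal: it owns the camera, the interaction
    model, and the toplevel visual(s)."""
    camera: str = "perspective"        # or "orthographic", ...
    interaction: str = "orbit"         # or "trackball", "panzoom", ...
    visuals: list["Visual"] = field(default_factory=list)  # toplevel visual(s), no scenegraph assumed
```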
Points to address:
- Name for canvas/window/figure (my fav is canvas).
- Name for view/subplot (my fav is view, since we may want to use "plot" to create line+markers).
- Do we leave layout entirely to the client lib, or does the client specify a high-level layout and leave the hard work to the backend?
- Do we support a scenegraph? I suspect that our target audience won't care much, and a client lib can implement one on top. Plus we can add it later, relatively easily, I think.
- Do we collect the appearance-props of a visual in a "material" as in ThreeJS and PyGfx (see the sketch after this list)? That way you can have multiple objects with the same material that can be changed using one uniform update (if the backend supports this). Though it also complicates the protocol, and it's not a typical use-case.
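For reference, a material-style grouping might look roughly like this, with several visuals pointing at one shared material object. This is purely illustrative, not a proposal for the wire format:

```python
from dataclasses import dataclass, field


@dataclass
class Material:
    """Shared appearance properties; changing them once affects every
    visual that references this material."""
    color: str = "#4060c0"
    opacity: float = 1.0


@dataclass
class Visual:
    kind: str
    material: Material = field(default_factory=Material)


# Two visuals sharing one material: a single update changes both,
# which a backend could map to one uniform update.
shared = Material(color="#cc3333")
markers = Visual("markers", material=shared)
line = Visual("line", material=shared)
shared.opacity = 0.5  # affects both visuals
```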