diff --git a/README.md b/README.md
index ec2e85dfd..519d41e26 100644
--- a/README.md
+++ b/README.md
@@ -13,28 +13,15 @@
 Kaolin library is part of a larger suite of tools for 3D deep learning research.
 Visit the [Kaolin Library Documentation](https://kaolin.readthedocs.io/en/latest/) to get started!
 
-## About the Latest Release (0.10.0)
+## About the Latest Release (0.12.0)
 
-With the version 0.10.0 we are focusing on Volumetric rendering, adding new features for tetrahedral meshes, including DefTet volumetric renderer and losses, and Deep Marching Tetrahedrons, and adding new primitive operations for efficient volumetric rendering of Structured Point Clouds, we are also adding support for materials with USD importation.
+With version 0.12.0 we have added a [Camera API](https://kaolin.readthedocs.io/en/latest/modules/kaolin.render.camera.html), which works with all of our renderers and supports multiple coordinate conventions.
 
-Finally we are adding two new tutorials to show how to use the latest features from Kaolin. See [Tutorial Index](https://kaolin.readthedocs.io/en/latest/notes/tutorial_index.html) for more.
-* [How to use DMtet](examples/tutorial/dmtet_tutorial.ipynb) to rencontruct a mesh from point clouds generated by the Omniverse Kaolin App
-
-    [image: DMtet Tutorial]
-
-* An [Introduction to Structured Point Clouds](examples/tutorial/understanding_spcs_tutorial.ipynb), with conversion from mesh and interactive visualization with raytracing.
-
-    [image: SPC Tutorial]
+Check out our new [tutorials](https://kaolin.readthedocs.io/en/latest/notes/tutorial_index.html):
+ * plenty of camera recipes in [examples/recipes/camera](./examples/recipes/camera)
+ * a tutorial using the new camera API with nvdiffrast: [examples/tutorial/camera_and_rasterization.ipynb](./examples/tutorial/camera_and_rasterization.ipynb)
-
-See [change logs](https://github.com/NVIDIAGameWorks/kaolin/releases/tag/v0.10.0) for details.
+See [change logs](https://github.com/NVIDIAGameWorks/kaolin/releases/tag/v0.12.0) for details.
 
 ## Contributing
@@ -42,6 +29,8 @@ Please review our [contribution guidelines](CONTRIBUTING.md).
 
 ## External Projects using Kaolin
 
+* [NVIDIA Kaolin Wisp](https://github.com/NVIDIAGameWorks/kaolin-wisp):
+  * Use [Camera API](https://kaolin.readthedocs.io/en/latest/modules/kaolin.render.camera.html), [Structured Point Clouds](https://kaolin.readthedocs.io/en/latest/modules/kaolin.ops.spc.html) and its [rendering capabilities](https://kaolin.readthedocs.io/en/latest/modules/kaolin.render.spc.html)
 * [gradSim: Differentiable simulation for system identification and visuomotor control](https://github.com/gradsim/gradsim):
   * Use [DIB-R rasterizer](https://kaolin.readthedocs.io/en/latest/modules/kaolin.render.mesh.html#kaolin.render.mesh.dibr_rasterization), [obj loader](https://kaolin.readthedocs.io/en/latest/modules/kaolin.io.obj.html#kaolin.io.obj.import_mesh) and [timelapse](https://kaolin.readthedocs.io/en/latest/modules/kaolin.visualize.html#kaolin.visualize.Timelapse)
 * [Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer](https://github.com/nv-tlabs/DIB-R-Single-Image-3D-Reconstruction/tree/2cfa689881145c8e0647ae8dd077e55b5a578658):
diff --git a/ci/gitlab_jenkins_templates/ubuntu_test_CI.jenkins b/ci/gitlab_jenkins_templates/ubuntu_test_CI.jenkins
index 85bed3b57..4e70cab6e 100644
--- a/ci/gitlab_jenkins_templates/ubuntu_test_CI.jenkins
+++ b/ci/gitlab_jenkins_templates/ubuntu_test_CI.jenkins
@@ -93,6 +93,14 @@ spec:
                 build_passed = false
                 echo e.toString()
             }
+            try {
+                stage("Camera and Rasterization Tutorial") {
+                    sh 'cd /kaolin/examples/tutorial && ipython camera_and_rasterization.ipynb'
+                }
+            } catch(e) {
+                build_passed = false
+                echo e.toString()
+            }
             try {
                 stage("SPC from Pointcloud Recipe") {
                     sh 'cd /kaolin/examples/recipes/dataload/ && python spc_from_pointcloud.py'
diff --git a/docs/modules/kaolin.ops.spc.rst b/docs/modules/kaolin.ops.spc.rst
index cb1024ecf..e9fa9344a 100644
--- a/docs/modules/kaolin.ops.spc.rst
+++ b/docs/modules/kaolin.ops.spc.rst
@@ -3,257 +3,6 @@ kaolin.ops.spc
 ##############
 
-.. _spc:
-
-Structured Point Clouds
-***********************
-
-Structured Point Clouds (SPC) is a sparse octree-based representation that is useful to organize and
-compress 3D geometrically sparse information.
-They are also known as sparse voxelgrids, quantized point clouds, and voxelized point clouds.
-
-Kaolin supports a number of operations to work with SPCs,
-including efficient ray-tracing and convolutions.
-
-The SPC data structure is very general. In the SPC data structure, octrees provide a way to store and efficiently retrieve coordinates of points at different levels of the octree hierarchy. It is also possible to associate features to these coordinates using point ordering in memory. Below we detail the low-level representations that comprise SPCs and allow corresponding efficient operations. We also provide a :ref:`convenience container` for these low-level attributes.
-
-Some of the conventions are also defined in `Neural Geometric Level of Detail: Real-time Rendering with
-Implicit 3D Surfaces `_ which uses SPC as an internal representation.
-
-.. warning::
-    Structured Point Clouds internal layout and structure is still experimental and may be modified in the future.
-
-Octree
-======
-
-.. _spc_octree:
-
-Core to SPC is the `octree `_, a tree data
-structure where each node have up to 8 childrens.
-We use this structure to do a recursive three-dimensional space partitioning,
-i.e: each node represents a partitioning of its 3D space (partition) of :math:`(2, 2, 2)`.
-The octree then contains the information necessary to find the sparse coordinates.
-
-In SPC, a batch of octrees is represented as a tensor of bytes. Each bit in the byte array ``octrees`` represents
-the binary occupancy of an octree bit sorted in `Morton Order `_.
-The Morton order is a type of space-filling curve which gives a deterministic ordering of
-integer coordinates on a 3D grid. That is, for a given non-negative 1D integer coordinate, there exists a
-bijective mapping to 3D integer coordinates.
-
-Since a byte is a collection of 8 bits, a single byte ``octrees[i]``
-represents an octree node where each bit indicate the binary occupancy of a child node / partition as
-depicted below:
-
-.. image:: ../img/octants.png
-   :width: 600
-
-For each octree, the nodes / bytes are following breadth-first-search order (with Morton order for
-childrens order), and the octree bytes are then :ref:`packed` to form ``octrees``. This ordering
-allows efficient tree access without having to explicilty store indirection pointers.
-
-.. figure:: ../img/octree.png
-   :scale: 30 %
-   :alt: An octree 3D partitioning
-
-   Credit: https://en.wikipedia.org/wiki/Octree
-
-The binary occupancy values in the bits of ``octrees`` implicitly encode position data due to the bijective
-mapping from Morton codes to 3D integer coordinates. However, to provide users a more straight
-forward interface to work with these octrees, SPC provides auxilary information such as
-``points`` which is a :ref:`packed` tensor of 3D coordinates. Refer to the :ref:`spc_attributes` section
-for more details.
-
-Currently SPCs are primarily used to represent 3D surfaces,
-and so all the leaves are at the same ``level`` (depth).
-This allow very efficient processing on GPU, with custom CUDA kernels, for ray-tracing and convolution.
-
-The structure contains finer details as you go deeper in to the tree.
-Below are the Levels 0 through 8 of a SPC teapot model:
-
-.. image:: ../img/spcTeapot.png
-
-Additional Feature Data
-=======================
-
-The nodes of the ``octrees`` can contain information beyond just the 3D coordinates of the nodes,
-such as RGB color, normals, feature maps, or even differentiable activation maps processed by a
-convolution.
-
-We follow a `Structure of Arrays `_ approach to store
-additional data for maximum user extensibility.
-Currently the features would be tensors of shape :math:`(\text{num_nodes}, \text{feature_dim})`
-with ``num_nodes`` being the number of nodes at a specific ``level`` of the ``octrees``,
-and ``feature_dim`` the dimension of the feature set (for instance 3 for RGB color).
-Users can freely define their own feature data to be stored alongside SPC.
-
-Conversions
-===========
-
-Structured point clouds can be derived from multiple sources.
-
-We can construct ``octrees``
-from unstructured point cloud data, from sparse voxelgrids
-or from the level set of an implicit function :math:`f(x, y, z)`.
-
-.. _spc_attributes:
-
-Related attributes
-==================
-
-.. note::
-    If you just wanna use the structured point clouds without having to go through the low level details, take a look at :ref:`the high level classes `.
-
-.. _spc_lengths:
-
-``lengths:``
-------------
-
-Since ``octrees`` use :ref:`packed` batching, we need ``lengths`` a 1D tensor of size ``batch_size`` that contains the size of each individual octree. Note that ``lengths.sum()`` should equal the size of ``octrees``. You can use :func:`kaolin.ops.batch.list_to_packed` to pack octrees and generate ``lengths``
-
-.. _spc_pyramids:
-
-``pyramids:``
--------------
-
-:class:`torch.IntTensor` of shape :math:`(\text{batch_size}, 2, \text{max_level} + 2)`. Contains layout information for each octree ``pyramids[:, 0]`` represent the number of points in each level of the ``octrees``, ``pyramids[:, 1]`` represent the starting index of each level of the octree.
-
-.. _spc_exsum:
-
-``exsum:``
-----------
-
-:class:`torch.IntTensor` of shape :math:`(\text{octrees_num_bytes} + \text{batch_size})` is the exclusive sum of the bit counts of each ``octrees`` byte.
-
-.. note::
-    To generate ``pyramids`` and ``exsum`` see :func:`kaolin.ops.spc.scan_octrees`
-
-.. _spc_points:
-
-``point_hierarchies:``
-----------------------
-
-:class:`torch.ShortTensor` of shape :math:`(\text{num_nodes}, 3)` correspond to the sparse coordinates at all levels. We refer to this :ref:`packed` tensor as the **structured point hierarchies**.
-
-The image below show an analogous 2D example.
-
-.. image:: ../img/spc_points.png
-   :width: 400
-
-the corresponding ``point_hierarchies`` would be:
-
->>> torch.ShortTensor([[0, 0], [1, 1],
-                       [1, 0], [2, 2],
-                       [2, 1], [3, 1], [5, 5]
-                      ])
-
-
-.. note::
-    To generate ``points`` see :func:`kaolin.ops.generate_points`
-
-.. note::
-    the tensors ``pyramid``, ``exsum`` and ``points`` are used by many Structured Point Cloud functions; avoiding their recomputation will improve performace.
-
-
-Convolutions
-============
-
-We provide several sparse convolution layers for structured point clouds.
-Convolutions are characterized by the size of the input and output channels,
-an array of ``kernel_vectors``, and possibly the number of levels to ``jump``, i.e.,
-the difference in input and output levels.
-
-.. _kernel-text:
-
-An example of how to create a :math:`3 \times 3 \times 3` kernel follows:
-
->>> vectors = []
->>> for i in range(-1, 2):
->>>     for j in range(-1, 2):
->>>         for k in range(-1, 2):
->>>             vectors.append([i, j, k])
->>> Kvec = torch.tensor(vectors, dtype=torch.short, device=device)
->>> Kvec
-tensor([[-1, -1, -1],
-        [-1, -1,  0],
-        [-1, -1,  1],
-        ...
-        ...
-        [ 1,  1, -1],
-        [ 1,  1,  0],
-        [ 1,  1,  1]], device='cuda:0', dtype=torch.int16)
-
-.. _neighborhood-text:
-
-The kernel vectors determine the shape of the convolution kernel.
-Each kernel vector is added to the position of a point to determine
-the coordinates of points whose corresponding input data is needed for the operation.
-We formalize this notion using the following neighbor function:
-
-.. math::
-
-  n(i,k) = \text{ID}\left(P_i+\overrightarrow{K}_k\right)
-
-that returns the index of the point within the same level found by adding
-kernel vector :math:`\overrightarrow{K}_k` to point :math:`P_i`.
-Given the sparse nature of SPC data, it may be the case that no such point exists. In such cases, :math:`n(i,k)`
-will return an invalid value, and data accesses will be treated like zero padding.
-
-Transposed convolutions are defined by the transposed neighbor function
-
-.. math::
-
-  n^T(i,k) = \text{ID}\left(P_i-\overrightarrow{K}_k\right)
-
-
-The value **jump** is used to indicate the difference in levels between the iput features
-and the output features. For convolutions, this is the number of levels to downsample; while
-for transposed convolutions, **jump** is the number of levels to upsample. The value of **jump** must
-be positive, and may not go beyond the highest level of the octree.
-
-Examples
---------
-
-You can create octrees from sparse feature_grids
-(of shape :math:`(\text{batch_size}, \text{feature_dim}, \text{height}, \text{width}, \text{depth})`):
-
->>> octrees, lengths, features = kaolin.ops.spc.feature_grids_to_spc(features_grids)
-
-or from point cloud (of shape :math:`(\text{num_points, 3})`):
-
->>> qpc = kaolin.ops.spc.quantize_points(pc, level)
->>> octree = kaolin.ops.spc.unbatched_points_to_octree(qpc, level)
-
-To use convolution, you can use the functional or the torch.nn.Module version like torch.nn.functional.conv3d and torch.nn.Conv3d:
-
->>> max_level, pyramids, exsum = kaolin.ops.spc.scan_octrees(octrees, lengths)
->>> point_hierarchies = kaolin.ops.spc.generate_points(octrees, pyramids, exsum)
->>> kernel_vectors = torch.tensor([[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1],
-                                   [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]],
-                                  dtype=torch.ShortTensor, device='cuda')
->>> conv = kaolin.ops.spc.Conv3d(in_channels, out_channels, kernel_vectors, jump=1, bias=True).cuda()
->>> # With functional
->>> out_features, out_level = kaolin.ops.spc.conv3d(octrees, point_hierarchies, level, pyramids,
-...                                                 exsum, coalescent_features, weight,
-...                                                 kernel_vectors, jump, bias)
->>> # With nn.Module and container class
->>> input_spc = kaolin.rep.Spc(octrees, lengths)
->>> conv
->>> out_features, out_level = kaolin.ops.spc.conv_transpose3d(
-...     **input_spc.to_dict(), input=out_features, level=level,
-...     weight=weight, kernel_vectors=kernel_vectors, jump=jump, bias=bias)
-
-To apply ray tracing we currently only support non-batched version, for instance here with RGB values as per point features:
-
->>> max_level, pyramids, exsum = kaolin.ops.spc.scan_octrees(
-...     octree, torch.tensor([len(octree)], dtype=torch.int32, device='cuda')
->>> point_hierarchy = kaolin.ops.spc.generate_points(octrees, pyramids, exsum)
->>> ridx, pidx, depth = kaolin.render.spc.unbatched_raytrace(octree, point_hierarchy, pyramids[0], exsum,
-...                                                          origin, direction, max_level)
->>> first_hits_mask = kaolin.render.spc.mark_pack_boundaries(ridx)
->>> first_hits_point = pidx[first_hits_mask]
->>> first_hits_rgb = rgb[first_hits_point - pyramids[max_level - 2]]
-
-
 API
 ---
diff --git a/docs/notes/spc_summary.rst b/docs/notes/spc_summary.rst
index 2edc6f308..6f3cd3cd2 100644
--- a/docs/notes/spc_summary.rst
+++ b/docs/notes/spc_summary.rst
@@ -1,17 +1,271 @@
 Structured Point Clouds (SPCs)
-==============================
+******************************
 
-Structured Point Cloud is a versatile octree data structure useful for a wide range of tasks.
+.. _spc:
+
+Structured Point Clouds (SPC) are a sparse octree-based representation, useful for organizing and
+compressing 3D geometrically sparse information.
+They are also known as sparse voxelgrids, quantized point clouds, and voxelized point clouds.
 
 .. image:: ../img/mesh_to_spc.png
 
-Understanding SPCs Tutorial:
+Kaolin supports a number of operations to work with SPCs,
+including efficient ray-tracing and convolutions.
+
+The SPC data structure is very general: octrees provide a way to store and efficiently retrieve coordinates of points at different levels of the octree hierarchy, and features can be associated with these coordinates using the point ordering in memory. Below we detail the low-level representations that comprise SPCs and allow corresponding efficient operations. We also provide a :ref:`convenience container` for these low-level attributes.
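+
+For instance, a minimal sketch (``octrees`` and ``lengths`` are described below) of wrapping the
+low-level tensors into the container and handing them back to the functional API:
+
+>>> spc = kaolin.rep.Spc(octrees, lengths)
+>>> spc_kwargs = spc.to_dict()  # the low-level attributes, as keyword arguments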
+
+Some of the conventions are also defined in `Neural Geometric Level of Detail: Real-time Rendering with
+Implicit 3D Surfaces `_ which uses SPC as an internal representation.
+
+.. warning::
+    The internal layout and structure of Structured Point Clouds is still experimental and may be modified in the future.
+
+Octree
+======
+
+.. _spc_octree:
+
+Core to SPC is the `octree `_, a tree data
+structure where each node has up to 8 children.
+We use this structure for recursive three-dimensional space partitioning,
+i.e., each node represents a :math:`(2, 2, 2)` partition of its 3D space.
+The octree then contains the information necessary to find the sparse coordinates.
+
+In SPC, a batch of octrees is represented as a tensor of bytes. Each bit in the byte array ``octrees`` represents
+the binary occupancy of an octree node, sorted in `Morton Order `_.
+The Morton order is a type of space-filling curve which gives a deterministic ordering of
+integer coordinates on a 3D grid. That is, for a given non-negative 1D integer coordinate, there exists a
+bijective mapping to 3D integer coordinates.
+
+Since a byte is a collection of 8 bits, a single byte ``octrees[i]``
+represents an octree node where each bit indicates the binary occupancy of a child node / partition, as
+depicted below:
+
+.. image:: ../img/octants.png
+   :width: 600
+
+For each octree, the nodes / bytes follow breadth-first-search order (with Morton order for
+the ordering of children), and the octree bytes are then :ref:`packed` to form ``octrees``. This ordering
+allows efficient tree access without having to explicitly store indirection pointers.
+
+.. figure:: ../img/octree.png
+   :scale: 30 %
+   :alt: An octree 3D partitioning
+
+   Credit: https://en.wikipedia.org/wiki/Octree
+
+The binary occupancy values in the bits of ``octrees`` implicitly encode position data due to the bijective
+mapping from Morton codes to 3D integer coordinates. However, to provide users a more straightforward
+interface to work with these octrees, SPC provides auxiliary information such as
+``points``, a :ref:`packed` tensor of 3D coordinates. Refer to the :ref:`spc_attributes` section
+for more details.
+
+Currently SPCs are primarily used to represent 3D surfaces,
+and so all the leaves are at the same ``level`` (depth).
+This allows very efficient processing on GPU, with custom CUDA kernels, for ray-tracing and convolution.
+
+The structure contains finer details as you go deeper into the tree.
+Below are levels 0 through 8 of an SPC teapot model:
+
+.. image:: ../img/spcTeapot.png
+
+Additional Feature Data
+=======================
+
+The nodes of the ``octrees`` can contain information beyond just the 3D coordinates of the nodes,
+such as RGB color, normals, feature maps, or even differentiable activation maps processed by a
+convolution.
+
+We follow a `Structure of Arrays `_ approach to store
+additional data for maximum user extensibility.
+Currently the features would be tensors of shape :math:`(\text{num_nodes}, \text{feature_dim})`,
+with ``num_nodes`` being the number of nodes at a specific ``level`` of the ``octrees``,
+and ``feature_dim`` the dimension of the feature set (for instance 3 for RGB color).
+Users can freely define their own feature data to be stored alongside SPC.
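+
+For example, a minimal sketch of how such a feature tensor lines up with the points of one level
+(``pyramids`` and ``point_hierarchies`` are described in the next sections; ``level`` is arbitrary):
+
+>>> start = pyramids[0, 1, level]      # start index of `level` in the packed point hierarchy
+>>> num_nodes = pyramids[0, 0, level]  # number of points at `level`
+>>> level_points = point_hierarchies[start:start + num_nodes]
+>>> rgb = torch.rand(num_nodes, 3, device='cuda')  # one RGB feature per point of `level`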
+
+Conversions
+===========
+
+Structured point clouds can be derived from multiple sources.
+
+We can construct ``octrees``
+from unstructured point cloud data, from sparse voxelgrids,
+or from the level set of an implicit function :math:`f(x, y, z)`.
+
+.. _spc_attributes:
+
+Related attributes
+==================
+
+.. note::
+    If you just want to use the structured point clouds without having to go through the low-level details, take a look at :ref:`the high level classes `.
+
+.. _spc_lengths:
+
+``lengths:``
+------------
+
+Since ``octrees`` use :ref:`packed` batching, we need ``lengths``, a 1D tensor of size ``batch_size`` that contains the size of each individual octree. Note that ``lengths.sum()`` should equal the size of ``octrees``. You can use :func:`kaolin.ops.batch.list_to_packed` to pack octrees and generate ``lengths``.
+
+.. _spc_pyramids:
+
+``pyramids:``
+-------------
+
+:class:`torch.IntTensor` of shape :math:`(\text{batch_size}, 2, \text{max_level} + 2)`. Contains layout information for each octree: ``pyramids[:, 0]`` represents the number of points in each level of the ``octrees``, ``pyramids[:, 1]`` represents the starting index of each level of the octree.
+
+.. _spc_exsum:
+
+``exsum:``
+----------
+
+:class:`torch.IntTensor` of shape :math:`(\text{octrees_num_bytes} + \text{batch_size})`; the exclusive sum of the bit counts of each ``octrees`` byte.
+
+.. note::
+    To generate ``pyramids`` and ``exsum`` see :func:`kaolin.ops.spc.scan_octrees`
+
+.. _spc_points:
+
+``point_hierarchies:``
+----------------------
+
+:class:`torch.ShortTensor` of shape :math:`(\text{num_nodes}, 3)`; corresponds to the sparse coordinates at all levels. We refer to this :ref:`packed` tensor as the **structured point hierarchies**.
+
+The image below shows an analogous 2D example.
+
+.. image:: ../img/spc_points.png
+   :width: 400
+
+The corresponding ``point_hierarchies`` would be:
+
+>>> torch.ShortTensor([[0, 0], [1, 1],
+...                    [1, 0], [2, 2],
+...                    [2, 1], [3, 1], [5, 5]
+...                   ])
+
+.. note::
+    To generate ``point_hierarchies`` see :func:`kaolin.ops.spc.generate_points`
+
+.. note::
+    The tensors ``pyramids``, ``exsum`` and ``point_hierarchies`` are used by many Structured Point Cloud functions; avoiding their recomputation will improve performance.
+
+Convolutions
+============
+
+We provide several sparse convolution layers for structured point clouds.
+Convolutions are characterized by the size of the input and output channels,
+an array of ``kernel_vectors``, and possibly the number of levels to ``jump``, i.e.,
+the difference in input and output levels.
+
+.. _kernel-text:
+
+An example of how to create a :math:`3 \times 3 \times 3` kernel follows:
+
+>>> vectors = []
+>>> for i in range(-1, 2):
+...     for j in range(-1, 2):
+...         for k in range(-1, 2):
+...             vectors.append([i, j, k])
+>>> Kvec = torch.tensor(vectors, dtype=torch.short, device=device)
+>>> Kvec
+tensor([[-1, -1, -1],
+        [-1, -1,  0],
+        [-1, -1,  1],
+        ...
+        ...
+        [ 1,  1, -1],
+        [ 1,  1,  0],
+        [ 1,  1,  1]], device='cuda:0', dtype=torch.int16)
+
+.. _neighborhood-text:
+
+The kernel vectors determine the shape of the convolution kernel.
+Each kernel vector is added to the position of a point to determine
+the coordinates of points whose corresponding input data is needed for the operation.
+We formalize this notion using the following neighbor function:
+
+.. math::
+
+  n(i,k) = \text{ID}\left(P_i+\overrightarrow{K}_k\right)
+
+that returns the index of the point within the same level found by adding
+kernel vector :math:`\overrightarrow{K}_k` to point :math:`P_i`.
+Given the sparse nature of SPC data, it may be the case that no such point exists. In such cases, :math:`n(i,k)`
+will return an invalid value, and data accesses will be treated like zero padding.
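+
+To make the neighbor function concrete, here is a brute-force, pure-PyTorch sketch of
+:math:`n(i,k)` for a single level (the actual implementation is a custom CUDA kernel;
+``level_points`` holds the :math:`(\text{num_points}, 3)` coordinates of that level and
+``Kvec`` is the kernel from above):
+
+>>> lookup = {tuple(p.tolist()): i for i, p in enumerate(level_points)}
+>>> neighbors = torch.full((len(level_points), len(Kvec)), -1, dtype=torch.long)
+>>> for i, p in enumerate(level_points):
+...     for k, v in enumerate(Kvec):
+...         # -1 means "no such point": the corresponding access acts like zero padding
+...         neighbors[i, k] = lookup.get(tuple((p + v).tolist()), -1)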
+
+Transposed convolutions are defined by the transposed neighbor function
+
+.. math::
+
+  n^T(i,k) = \text{ID}\left(P_i-\overrightarrow{K}_k\right)
+
+The value **jump** is used to indicate the difference in levels between the input features
+and the output features. For convolutions, this is the number of levels to downsample; while
+for transposed convolutions, **jump** is the number of levels to upsample. The value of **jump** must
+be positive, and may not go beyond the highest level of the octree.
+
+Examples
+--------
+
+You can create octrees from sparse feature grids
+(of shape :math:`(\text{batch_size}, \text{feature_dim}, \text{height}, \text{width}, \text{depth})`):
+
+>>> octrees, lengths, features = kaolin.ops.spc.feature_grids_to_spc(feature_grids)
+
+or from a point cloud (of shape :math:`(\text{num_points}, 3)`):
+
+>>> qpc = kaolin.ops.spc.quantize_points(pc, level)
+>>> octree = kaolin.ops.spc.unbatched_points_to_octree(qpc, level)
+
+Convolutions come both as functional and as ``torch.nn.Module`` versions, analogous to ``torch.nn.functional.conv3d`` and ``torch.nn.Conv3d``:
+
+>>> max_level, pyramids, exsum = kaolin.ops.spc.scan_octrees(octrees, lengths)
+>>> point_hierarchies = kaolin.ops.spc.generate_points(octrees, pyramids, exsum)
+>>> kernel_vectors = torch.tensor([[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1],
+...                                [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]],
+...                               dtype=torch.short, device='cuda')
+>>> conv = kaolin.ops.spc.Conv3d(in_channels, out_channels, kernel_vectors, jump=1, bias=True).cuda()
+>>> # With functional
+>>> out_features, out_level = kaolin.ops.spc.conv3d(octrees, point_hierarchies, level, pyramids,
+...                                                 exsum, coalescent_features, weight,
+...                                                 kernel_vectors, jump, bias)
+>>> # With nn.Module and container class
+>>> input_spc = kaolin.rep.Spc(octrees, lengths)
+>>> out_features, out_level = kaolin.ops.spc.conv_transpose3d(
+...     **input_spc.to_dict(), input=out_features, level=level,
+...     weight=weight, kernel_vectors=kernel_vectors, jump=jump, bias=bias)
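+
+As a concrete reading of ``jump`` in the calls above, following the definitions given earlier:
+
+>>> # e.g. starting from level == 8: conv3d with jump=1 gives out_level == 7 (one level coarser),
+>>> # while conv_transpose3d with jump=1 gives out_level == 9 (one level finer)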
+
+For ray tracing we currently only support the non-batched version; for instance, here with RGB values as per-point features:
+
+>>> max_level, pyramids, exsum = kaolin.ops.spc.scan_octrees(
+...     octree, torch.tensor([len(octree)], dtype=torch.int32, device='cuda'))
+>>> point_hierarchy = kaolin.ops.spc.generate_points(octree, pyramids, exsum)
+>>> ridx, pidx, depth = kaolin.render.spc.unbatched_raytrace(octree, point_hierarchy, pyramids[0], exsum,
+...                                                          origin, direction, max_level)
+>>> first_hits_mask = kaolin.render.spc.mark_pack_boundaries(ridx)
+>>> first_hits_point = pidx[first_hits_mask]
+>>> first_hits_rgb = rgb[first_hits_point - pyramids[0, 1, max_level]]
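+
+A minimal sketch of interpreting the ray-traced packs (assuming ``origin`` and ``direction`` hold
+one ray per row, so ``num_rays = origin.shape[0]``): ``ridx`` stores one ray index per intersection,
+hence rays that miss the SPC entirely are those with a zero hit count:
+
+>>> hits_per_ray = torch.bincount(ridx.long(), minlength=num_rays)
+>>> miss_mask = hits_per_ray == 0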
"vertices_max = vertices.max(dim=1, keepdims=True)[0]\n", + "vertices -= (vertices_max + vertices_min) / 2.\n", + "vertices /= (vertices_max - vertices_min).max()\n", + "faces = mesh.faces.cuda()\n", + "\n", + "# Here we are preprocessing the materials, assigning faces to materials and\n", + "# using single diffuse color as backup when map doesn't exist (and face_uvs_idx == -1)\n", + "uvs = torch.nn.functional.pad(mesh.uvs.unsqueeze(0).cuda(), (0, 0, 0, 1))\n", + "face_uvs_idx = mesh.face_uvs_idx.cuda()\n", + "materials_order = mesh.materials_order\n", + "materials = [m['map_Kd'].permute(2, 0, 1).unsqueeze(0).cuda().float() / 255. if 'map_Kd' in m else\n", + " m['Kd'].reshape(1, 3, 1, 1).cuda()\n", + " for m in mesh.materials]\n", + "\n", + "nb_faces = faces.shape[0]\n", + "\n", + "num_consecutive_materials = \\\n", + " torch.cat([\n", + " materials_order[1:, 1],\n", + " torch.LongTensor([nb_faces])\n", + " ], dim=0)- materials_order[:, 1]\n", + "\n", + "face_material_idx = kal.ops.batch.tile_to_packed(\n", + " materials_order[:, 0],\n", + " num_consecutive_materials\n", + ").cuda().squeeze(-1)\n", + "mask = face_uvs_idx == -1\n", + "face_uvs_idx[mask] = 0\n", + "face_uvs = kal.ops.mesh.index_vertices_by_faces(\n", + " uvs, face_uvs_idx\n", + ")\n", + "face_uvs[:, mask] = 0." + ] + }, + { + "cell_type": "markdown", + "id": "03e898f4", + "metadata": {}, + "source": [ + "## Instantiate a camera\n", + "\n", + "With the general constructor `Camera.from_args()` the underlying constructors are `CameraExtrinsics.from_lookat()` and `PinholeIntrinsics.from_fov`" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "c6eee7ab", + "metadata": {}, + "outputs": [], + "source": [ + "cam = kal.render.camera.Camera.from_args(eye=torch.tensor([2., 0., 0.]),\n", + " at=torch.tensor([0., 0., 0.]),\n", + " up=torch.tensor([0., 1., 0.]),\n", + " fov=math.pi * 45 / 180,\n", + " width=512, height=512, device='cuda')" + ] + }, + { + "cell_type": "markdown", + "id": "4fff8eb1", + "metadata": {}, + "source": [ + "## Rendering a mesh\n", + "\n", + "Here we are rendering the loaded mesh with [nvdiffrast](https://github.com/NVlabs/nvdiffrast) using the camera object created above" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "5e4b8a49", + "metadata": {}, + "outputs": [], + "source": [ + "def render():\n", + " transformed_vertices = cam.transform(vertices)\n", + " # Create a fake W (See nvdiffrast documentation)\n", + " pos = torch.nn.functional.pad(\n", + " transformed_vertices, (0, 1), mode='constant', value=1.\n", + " ).contiguous()\n", + " rast = nvdiffrast.torch.rasterize(glctx, pos, faces.int(), (512, 512), grad_db=False)\n", + " hard_mask = rast[0][:, :, :, -1:] != 0\n", + " face_idx = (rast[0][..., -1].long() - 1).contiguous()\n", + "\n", + " uv_map = nvdiffrast.torch.interpolate(uvs, rast[0], face_uvs_idx.int())[0]\n", + " uv_map = torch.clamp(uv_map, 0., 1.)\n", + "\n", + " img = torch.zeros((1, 512, 512, 3), dtype=torch.float, device='cuda')\n", + "\n", + " # Obj meshes can be composed of multiple materials\n", + " # so at rendering we need to interpolate from corresponding materials\n", + " im_material_idx = face_material_idx[face_idx]\n", + " im_material_idx[face_idx == -1] = -1\n", + "\n", + " for i, material in enumerate(materials):\n", + " mask = im_material_idx == i\n", + " mask_idx = torch.nonzero(mask, as_tuple=False)\n", + " _texcoords = uv_map[mask] * 2. 
+  {
+   "cell_type": "markdown",
+   "id": "4fff8eb1",
+   "metadata": {},
+   "source": [
+    "## Rendering a mesh\n",
+    "\n",
+    "Here we render the loaded mesh with [nvdiffrast](https://github.com/NVlabs/nvdiffrast), using the camera object created above."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "id": "5e4b8a49",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def render():\n",
+    "    transformed_vertices = cam.transform(vertices)\n",
+    "    # Create a fake W (see the nvdiffrast documentation)\n",
+    "    pos = torch.nn.functional.pad(\n",
+    "        transformed_vertices, (0, 1), mode='constant', value=1.\n",
+    "    ).contiguous()\n",
+    "    rast = nvdiffrast.torch.rasterize(glctx, pos, faces.int(), (512, 512), grad_db=False)\n",
+    "    hard_mask = rast[0][:, :, :, -1:] != 0\n",
+    "    face_idx = (rast[0][..., -1].long() - 1).contiguous()\n",
+    "\n",
+    "    uv_map = nvdiffrast.torch.interpolate(uvs, rast[0], face_uvs_idx.int())[0]\n",
+    "    uv_map = torch.clamp(uv_map, 0., 1.)\n",
+    "\n",
+    "    img = torch.zeros((1, 512, 512, 3), dtype=torch.float, device='cuda')\n",
+    "\n",
+    "    # Obj meshes can be composed of multiple materials,\n",
+    "    # so at rendering we need to sample from the corresponding material map\n",
+    "    im_material_idx = face_material_idx[face_idx]\n",
+    "    im_material_idx[face_idx == -1] = -1\n",
+    "\n",
+    "    for i, material in enumerate(materials):\n",
+    "        mask = im_material_idx == i\n",
+    "        _texcoords = uv_map[mask] * 2. - 1.\n",
+    "        _texcoords[:, 1] = -_texcoords[:, 1]\n",
+    "        pixel_val = torch.nn.functional.grid_sample(\n",
+    "            material, _texcoords.reshape(1, 1, -1, 2),\n",
+    "            mode='bilinear', align_corners=False,\n",
+    "            padding_mode='border')\n",
+    "        img[mask] = pixel_val[0, :, 0].permute(1, 0)\n",
+    "\n",
+    "    # Flip the image vertically: OpenGL's image origin is at the bottom-left\n",
+    "    return torch.flip(torch.clamp(img * hard_mask, 0., 1.)[0], dims=(0,))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "4978003c",
+   "metadata": {},
+   "source": [
+    "## Moving the camera\n",
+    "\n",
+    "Once the camera is created you can move it using `cam.move_up()`, `cam.move_right()` and `cam.move_forward()`.\n",
+    "\n",
+    "Note that in OpenGL camera space `forward` points toward the viewer, so moving forward actually moves the camera away from the object it is looking at.\n",
+    "\n",
+    "<img src=\"assets/ndc_camera_space.png\" />\n",
+    "<img src=\"assets/ndc_image_space.png\" />\n",
+    "\n",
+    "Below is a simple interactive rendering, where buttons are linked to camera methods for moving it."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "id": "83dd84cf",
+   "metadata": {},
+   "outputs": [
+    [... auto-generated matplotlib interactive-figure JavaScript (widget boilerplate) omitted ...]
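+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "A minimal sketch (the step values below are arbitrary assumptions): the same methods the\n",
+    "buttons are linked to can be called directly, then the scene re-rendered."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "cam.move_forward(-0.5)  # in OpenGL camera space `forward` points at the viewer: this backs away\n",
+    "cam.move_right(0.25)\n",
+    "cam.move_up(0.1)\n",
+    "plt.figure()\n",
+    "plt.imshow(render().cpu())"
+   ]
+  },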
\");\n", + " if (!fig.cell_info) {\n", + " console.error('Failed to find cell for figure', id, fig);\n", + " return;\n", + " }\n", + " fig.cell_info[0].output_area.element.on(\n", + " 'cleared',\n", + " { fig: fig },\n", + " fig._remove_fig_handler\n", + " );\n", + "};\n", + "\n", + "mpl.figure.prototype.handle_close = function (fig, msg) {\n", + " var width = fig.canvas.width / fig.ratio;\n", + " fig.cell_info[0].output_area.element.off(\n", + " 'cleared',\n", + " fig._remove_fig_handler\n", + " );\n", + " fig.resizeObserverInstance.unobserve(fig.canvas_div);\n", + "\n", + " // Update the output cell to use the data from the current canvas.\n", + " fig.push_to_output();\n", + " var dataURL = fig.canvas.toDataURL();\n", + " // Re-enable the keyboard manager in IPython - without this line, in FF,\n", + " // the notebook keyboard shortcuts fail.\n", + " IPython.keyboard_manager.enable();\n", + " fig.parent_element.innerHTML =\n", + " '';\n", + " fig.close_ws(fig, msg);\n", + "};\n", + "\n", + "mpl.figure.prototype.close_ws = function (fig, msg) {\n", + " fig.send_message('closing', msg);\n", + " // fig.ws.close()\n", + "};\n", + "\n", + "mpl.figure.prototype.push_to_output = function (_remove_interactive) {\n", + " // Turn the data on the canvas into data in the output cell.\n", + " var width = this.canvas.width / this.ratio;\n", + " var dataURL = this.canvas.toDataURL();\n", + " this.cell_info[1]['text/html'] =\n", + " '';\n", + "};\n", + "\n", + "mpl.figure.prototype.updated_canvas_event = function () {\n", + " // Tell IPython that the notebook contents must change.\n", + " IPython.notebook.set_dirty(true);\n", + " this.send_message('ack', {});\n", + " var fig = this;\n", + " // Wait a second, then push the new image to the DOM so\n", + " // that it is saved nicely (might be nice to debounce this).\n", + " setTimeout(function () {\n", + " fig.push_to_output();\n", + " }, 1000);\n", + "};\n", + "\n", + "mpl.figure.prototype._init_toolbar = function () {\n", + " var fig = this;\n", + "\n", + " var toolbar = document.createElement('div');\n", + " toolbar.classList = 'btn-toolbar';\n", + " this.root.appendChild(toolbar);\n", + "\n", + " function on_click_closure(name) {\n", + " return function (_event) {\n", + " return fig.toolbar_button_onclick(name);\n", + " };\n", + " }\n", + "\n", + " function on_mouseover_closure(tooltip) {\n", + " return function (event) {\n", + " if (!event.currentTarget.disabled) {\n", + " return fig.toolbar_button_onmouseover(tooltip);\n", + " }\n", + " };\n", + " }\n", + "\n", + " fig.buttons = {};\n", + " var buttonGroup = document.createElement('div');\n", + " buttonGroup.classList = 'btn-group';\n", + " var button;\n", + " for (var toolbar_ind in mpl.toolbar_items) {\n", + " var name = mpl.toolbar_items[toolbar_ind][0];\n", + " var tooltip = mpl.toolbar_items[toolbar_ind][1];\n", + " var image = mpl.toolbar_items[toolbar_ind][2];\n", + " var method_name = mpl.toolbar_items[toolbar_ind][3];\n", + "\n", + " if (!name) {\n", + " /* Instead of a spacer, we start a new button group. 
*/\n", + " if (buttonGroup.hasChildNodes()) {\n", + " toolbar.appendChild(buttonGroup);\n", + " }\n", + " buttonGroup = document.createElement('div');\n", + " buttonGroup.classList = 'btn-group';\n", + " continue;\n", + " }\n", + "\n", + " button = fig.buttons[name] = document.createElement('button');\n", + " button.classList = 'btn btn-default';\n", + " button.href = '#';\n", + " button.title = name;\n", + " button.innerHTML = '';\n", + " button.addEventListener('click', on_click_closure(method_name));\n", + " button.addEventListener('mouseover', on_mouseover_closure(tooltip));\n", + " buttonGroup.appendChild(button);\n", + " }\n", + "\n", + " if (buttonGroup.hasChildNodes()) {\n", + " toolbar.appendChild(buttonGroup);\n", + " }\n", + "\n", + " // Add the status bar.\n", + " var status_bar = document.createElement('span');\n", + " status_bar.classList = 'mpl-message pull-right';\n", + " toolbar.appendChild(status_bar);\n", + " this.message = status_bar;\n", + "\n", + " // Add the close button to the window.\n", + " var buttongrp = document.createElement('div');\n", + " buttongrp.classList = 'btn-group inline pull-right';\n", + " button = document.createElement('button');\n", + " button.classList = 'btn btn-mini btn-primary';\n", + " button.href = '#';\n", + " button.title = 'Stop Interaction';\n", + " button.innerHTML = '';\n", + " button.addEventListener('click', function (_evt) {\n", + " fig.handle_close(fig, {});\n", + " });\n", + " button.addEventListener(\n", + " 'mouseover',\n", + " on_mouseover_closure('Stop Interaction')\n", + " );\n", + " buttongrp.appendChild(button);\n", + " var titlebar = this.root.querySelector('.ui-dialog-titlebar');\n", + " titlebar.insertBefore(buttongrp, titlebar.firstChild);\n", + "};\n", + "\n", + "mpl.figure.prototype._remove_fig_handler = function (event) {\n", + " var fig = event.data.fig;\n", + " if (event.target !== this) {\n", + " // Ignore bubbled events from children.\n", + " return;\n", + " }\n", + " fig.close_ws(fig, {});\n", + "};\n", + "\n", + "mpl.figure.prototype._root_extra_style = function (el) {\n", + " el.style.boxSizing = 'content-box'; // override notebook setting of border-box.\n", + "};\n", + "\n", + "mpl.figure.prototype._canvas_extra_style = function (el) {\n", + " // this is important to make the div 'focusable\n", + " el.setAttribute('tabindex', 0);\n", + " // reach out to IPython and tell the keyboard manager to turn it's self\n", + " // off when our div gets focus\n", + "\n", + " // location in version 3\n", + " if (IPython.notebook.keyboard_manager) {\n", + " IPython.notebook.keyboard_manager.register_events(el);\n", + " } else {\n", + " // location in version 2\n", + " IPython.keyboard_manager.register_events(el);\n", + " }\n", + "};\n", + "\n", + "mpl.figure.prototype._key_event_extra = function (event, _name) {\n", + " // Check for shift+enter\n", + " if (event.shiftKey && event.which === 13) {\n", + " this.canvas_div.blur();\n", + " // select the cell after this one\n", + " var index = IPython.notebook.find_cell_index(this.cell_info[0]);\n", + " IPython.notebook.select(index + 1);\n", + " }\n", + "};\n", + "\n", + "mpl.figure.prototype.handle_save = function (fig, _msg) {\n", + " fig.ondownload(fig, null);\n", + "};\n", + "\n", + "mpl.find_output_cell = function (html_output) {\n", + " // Return the cell and output element which can be found *uniquely* in the notebook.\n", + " // Note - this is a bit hacky, but it is done because the \"notebook_saving.Notebook\"\n", + " // IPython event is triggered only after the cells 
+    ],
+    "text/plain": [
+     ""
+    ]
+   },
+   "metadata": {},
+   "output_type": "display_data"
+  },
+  {
+   "data": {
+    "text/html": [
+     ""
+    ],
+    "text/plain": [
+     ""
+    ]
+   },
+   "metadata": {},
+   "output_type": "display_data"
+  },
+  {
+   "data": {
+    "text/plain": [
+     "0"
+    ]
+   },
+   "execution_count": 5,
+   "metadata": {},
+   "output_type": "execute_result"
+  }
+ ],
+ "source": [
+  "%matplotlib notebook\n",
+  "from matplotlib.widgets import Button\n",
+  "\n",
+  "fig, ax = plt.subplots()\n",
+  "plt.subplots_adjust(bottom=0.2)\n",
+  "im_buffer = plt.imshow(render().cpu())\n",
+  "\n",
+  "def update():\n",
+  "    \"\"\"Re-render the scene and update the image buffer\"\"\"\n",
+  "    im_buffer.set_data(render().cpu())\n",
+  "    plt.draw()\n",
+  "\n",
+  "def on_button_up_clicked(b):\n",
+  "    \"\"\"Callback on Up\"\"\"\n",
+  "    cam.move_up(0.1)\n",
+  "    update()\n",
+  "\n",
+  "def on_button_down_clicked(b):\n",
+  "    \"\"\"Callback on Down\"\"\"\n",
+  "    cam.move_up(-0.1)\n",
+  "    update()\n",
+  "\n",
+  "def on_button_left_clicked(b):\n",
+  "    \"\"\"Callback on Left\"\"\"\n",
+  "    cam.move_right(-0.1)\n",
+  "    update()\n",
+  "\n",
+  "def on_button_right_clicked(b):\n",
+  "    \"\"\"Callback on Right\"\"\"\n",
+  "    cam.move_right(0.1)\n",
+  "    update()\n",
+  "\n",
+  "def on_button_forward_clicked(b):\n",
+  "    \"\"\"Callback on Forward\n",
+  "\n",
+  "    Note: the camera's forward axis points out of the back of the camera\n",
+  "    \"\"\"\n",
+  "    cam.move_forward(0.1)\n",
+  "    update()\n",
+  "\n",
+  "def on_button_backward_clicked(b):\n",
+  "    \"\"\"Callback on Backward\n",
+  "\n",
+  "    Note: the camera's forward axis points out of the back of the camera\n",
+  "    \"\"\"\n",
+  "    cam.move_forward(-0.1)\n",
+  "    update()\n",
+  "\n",
+  "up_ax = plt.axes([0.0, 0.05, 0.13, 0.075])\n",
+  "left_ax = plt.axes([0.15, 0.05, 0.13, 0.075])\n",
+  "down_ax = plt.axes([0.3, 0.05, 0.13, 0.075])\n",
+  "right_ax = plt.axes([0.45, 0.05, 0.13, 0.075])\n",
+  "forward_ax = plt.axes([0.6, 0.05, 0.13, 0.075])\n",
+  "backward_ax = plt.axes([0.75, 0.05, 0.13, 0.075])\n",
+  "button_up = Button(up_ax, \"Up\")\n",
+  "button_down = Button(down_ax, \"Down\")\n",
+  "button_left = Button(left_ax, \"Left\")\n",
+  "button_right = Button(right_ax, \"Right\")\n",
+  "button_forward = Button(forward_ax, \"Forward\")\n",
+  "button_backward = Button(backward_ax, \"Backward\")\n",
+  "button_up.on_clicked(on_button_up_clicked)\n",
+  "button_down.on_clicked(on_button_down_clicked)\n",
+  "button_left.on_clicked(on_button_left_clicked)\n",
+  "button_right.on_clicked(on_button_right_clicked)\n",
"button_forward.on_clicked(on_button_forward_clicked)\n", + "button_backward.on_clicked(on_button_backward_clicked)\n" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.8.12" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/version.txt b/version.txt index d9df1bbc0..ac454c6a1 100644 --- a/version.txt +++ b/version.txt @@ -1 +1 @@ -0.11.0 +0.12.0