diff --git a/README.md b/README.md
index 4132929..79c04a8 100644
--- a/README.md
+++ b/README.md
@@ -1,310 +1,68 @@
-# Isaac ROS DNN Stereo Disparity
+# Isaac ROS DNN Stereo Depth
-DNN Stereo Disparity includes packages for predicting disparity of stereo input.
+Hardware-accelerated, deep-learned stereo disparity estimation
-

+
---
## Webinar Available
-Learn how to use this package by watching our on-demand webinar: [Using ML Models in ROS 2 to Robustly Estimate Distance to Obstacles](https://gateway.on24.com/wcc/experience/elitenvidiabrill/1407606/3998202/isaac-ros-webinar-series)
+Learn how to use this package by watching our on-demand webinar:
+[Using ML Models in ROS 2 to Robustly Estimate Distance to Obstacles](https://gateway.on24.com/wcc/experience/elitenvidiabrill/1407606/3998202/isaac-ros-webinar-series)
---
## Overview
-Isaac ROS DNN Disparity provides a GPU-accelerated package for DNN-based stereo disparity. Stereo disparity is calculated from a time-synchronized image pair sourced from a stereo camera and is used to produce a depth image or a point cloud of a scene. The `isaac_ros_ess` package uses the [ESS DNN model](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/isaac/models/dnn_stereo_disparity) to perform stereo depth estimation via continuous disparity prediction. Given a pair of stereo input images, the package generates a disparity map of the left input image.
+[Isaac ROS DNN Stereo Depth](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_stereo_depth) provides a GPU-accelerated package for DNN-based
+stereo disparity. Stereo disparity is calculated from a
+time-synchronized image pair sourced from a stereo camera and is used to
+produce a depth image or a point cloud for a scene. The `isaac_ros_ess`
+package uses the [ESS DNN
+model](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/isaac/models/dnn_stereo_disparity)
+to perform stereo depth estimation via continuous disparity prediction.
+Given a pair of stereo input images, the package generates a disparity
+map of the left input image.
-
+
-ESS is used in a graph of nodes to provide a disparity prediction from an input left and right stereo image pair. Images to ESS need to be rectified and resized to the appropriate input resolution. The aspect ratio of the image needs to be maintained, so the image may need to be cropped and resized to maintain the input aspect ratio. The graph for DNN encode, to DNN inference, to DNN decode is part of the ESS node. Inference is performed using TensorRT, as the ESS DNN model is designed to use optimizations supported by TensorRT.
+ESS is used in a graph of nodes to provide a disparity prediction from an input left and right stereo image pair.
+Input images to ESS must be rectified and resized to the appropriate input resolution.
+Because the input aspect ratio should be maintained, the image may need to be cropped and resized accordingly.
+The DNN encode, DNN inference, and DNN decode steps are all included in the ESS node.
+Inference is performed using TensorRT, as the ESS DNN model is designed with optimizations supported by TensorRT.
+The ESS node is agnostic to the model dimensions, and the disparity output has the same dimensions as the ESS model.
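The predicted disparity is inversely proportional to depth (`depth = focal_length × baseline / disparity`). The following is a minimal, illustrative NumPy sketch of that conversion — it is not part of this package, and the function and parameter names are hypothetical:

```python
import numpy as np

def disparity_to_depth(disparity, focal_length_px, baseline_m, eps=1e-6):
    """Convert a disparity map (in pixels) to a depth map (in meters).

    Uses the standard stereo relation: depth = focal_length * baseline / disparity.
    Pixels with near-zero disparity are marked invalid (infinite depth).
    """
    disparity = np.asarray(disparity, dtype=np.float32)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > eps
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Example: 30 px disparity with a 480 px focal length and 0.10 m baseline
depth = disparity_to_depth(np.array([[30.0, 0.0]]),
                           focal_length_px=480.0, baseline_m=0.10)
```

Downstream consumers typically mask out the invalid (infinite) entries before building a point cloud.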
-### ESS DNN
-
-[ESS](https://arxiv.org/pdf/1803.09719.pdf) stands for Enhanced Semi-Supervised stereo disparity, developed by NVIDIA. The ESS DNN is used to predict the disparity for each pixel from stereo camera image pairs. This network has improvements over classic CV approaches that use epipolar geometry to compute disparity, as the DNN can learn to predict disparity in cases where epipolar geometry feature matching fails. The semi-supervised learning and stereo disparity matching makes the ESS DNN robust in environments unseen in the training datasets and with occluded objects. This DNN is optimized for and evaluated with color (RGB) global shutter stereo camera images, and accuracy may vary with monochrome stereo images used in analytic computer vision approaches to stereo disparity.
-
-The predicted [disparity](https://en.wikipedia.org/wiki/Binocular_disparity) values represent the distance a point moves from one image to the other in a stereo image pair (a.k.a. the binocular image pair). The disparity is inversely proportional to the depth (i.e. `disparity = focalLength x baseline / depth`). Given the [focal length](https://en.wikipedia.org/wiki/Focal_length) and [baseline](https://en.wikipedia.org/wiki/Stereo_camera) of the camera that generates a stereo image pair, the predicted disparity map from the `isaac_ros_ess` package can be used to compute depth and generate a [point cloud](https://en.wikipedia.org/wiki/Point_cloud).
-
-> **Note**: Compare the requirements of your use case against the package [input limitations](#input-restrictions).
-
-### Isaac ROS NITROS Acceleration
+## Isaac ROS NITROS Acceleration
This package is powered by [NVIDIA Isaac Transport for ROS (NITROS)](https://developer.nvidia.com/blog/improve-perception-performance-for-ros-2-applications-with-nvidia-isaac-transport-for-ros/), which leverages type adaptation and negotiation to optimize message formats and dramatically accelerate communication between participating nodes.
## Performance
-The following table summarizes the per-platform performance statistics of sample graphs that use this package, with links included to the full benchmark output. These benchmark configurations are taken from the [Isaac ROS Benchmark](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark#list-of-isaac-ros-benchmarks) collection, based on the [`ros2_benchmark`](https://github.com/NVIDIA-ISAAC-ROS/ros2_benchmark) framework.
-
-| Sample Graph | Input Size | AGX Orin | Orin NX | x86_64 w/ RTX 4060 Ti |
-| --------------------------------------------------------------------------------------------------------------------------------------- | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
-| [DNN Stereo Disparity Node](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/scripts//isaac_ros_ess_node.py) | 1080p | [63.6 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_ess_node-agx_orin.json)<br>3.5 ms | [24.8 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_ess_node-orin_nx.json)<br>5.0 ms | [173 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_ess_node-nuc_4060ti.json)<br>3.4 ms |
-| [DNN Stereo Disparity Graph](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/scripts//isaac_ros_ess_graph.py) | 1080p | [52.7 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_ess_graph-agx_orin.json)<br>21 ms | [20.8 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_ess_graph-orin_nx.json)<br>50 ms | [156 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_ess_graph-nuc_4060ti.json)<br>10 ms |
-
-
-## Table of Contents
-
-- [Isaac ROS DNN Stereo Disparity](#isaac-ros-dnn-stereo-disparity)
- - [Webinar Available](#webinar-available)
- - [Overview](#overview)
- - [ESS DNN](#ess-dnn)
- - [Isaac ROS NITROS Acceleration](#isaac-ros-nitros-acceleration)
- - [Performance](#performance)
- - [Table of Contents](#table-of-contents)
- - [Latest Update](#latest-update)
- - [Supported Platforms](#supported-platforms)
- - [Docker](#docker)
- - [Quickstart](#quickstart)
- - [Next Steps](#next-steps)
- - [Try Advanced Examples](#try-advanced-examples)
- - [Try NITROS-Accelerated Graph with Argus](#try-nitros-accelerated-graph-with-argus)
- - [Use Different Models](#use-different-models)
- - [Customize Your Dev Environment](#customize-your-dev-environment)
- - [Package Reference](#package-reference)
- - [`isaac_ros_ess`](#isaac_ros_ess)
- - [Overview](#overview-1)
- - [Usage](#usage)
- - [ROS Parameters](#ros-parameters)
- - [ROS Topics Subscribed](#ros-topics-subscribed)
- - [ROS Topics Published](#ros-topics-published)
- - [Input Restrictions](#input-restrictions)
- - [Output Interpretations](#output-interpretations)
- - [Troubleshooting](#troubleshooting)
- - [Isaac ROS Troubleshooting](#isaac-ros-troubleshooting)
- - [Deep Learning Troubleshooting](#deep-learning-troubleshooting)
- - [Package not found while launching the visualizer script](#package-not-found-while-launching-the-visualizer-script)
- - [Symptom](#symptom)
- - [Solution](#solution)
- - [Problem reserving CacheChange in reader](#problem-reserving-cachechange-in-reader)
- - [Symptom](#symptom-1)
- - [Solution](#solution-1)
- - [Updates](#updates)
-
-## Latest Update
-
-Update 2023-05-25: Upgraded model (1.1.0).
-
-## Supported Platforms
-
-This package is designed and tested to be compatible with ROS 2 Humble running on [Jetson](https://developer.nvidia.com/embedded-computing) or an x86_64 system with an NVIDIA GPU.
-
-> **Note**: Versions of ROS 2 earlier than Humble are **not** supported. This package depends on specific ROS 2 implementation features that were only introduced beginning with the Humble release.
-
-| Platform | Hardware | Software | Notes |
-| -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Jetson | [AGX Orin](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/)<br>[Orin Nano](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-agx-xavier/) | [JetPack 5.1.1](https://developer.nvidia.com/embedded/jetpack) | For best performance, ensure that [power settings](https://docs.nvidia.com/jetson/archives/r34.1/DeveloperGuide/text/SD/PlatformPowerAndPerformance.html) are configured appropriately. |
-| x86_64 | NVIDIA GPU | [Ubuntu 20.04+](https://releases.ubuntu.com/20.04/)<br>[CUDA 11.8+](https://developer.nvidia.com/cuda-downloads) |
-
-### Docker
-
-To simplify development, we strongly recommend leveraging the Isaac ROS Dev Docker images by following [these steps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/docs/dev-env-setup.md). This will streamline your development environment setup with the correct versions of dependencies on both Jetson and x86_64 platforms.
-
-> **Note**: All Isaac ROS quick start guides, tutorials, and examples have been designed with the Isaac ROS Docker images as a prerequisite.
-
-## Quickstart
-
-1. Set up your development environment by following the instructions [here](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/docs/dev-env-setup.md).
-
-2. Clone this repository and its dependencies under `~/workspaces/isaac_ros-dev/src`:
-
- ```bash
- mkdir -p ~/workspaces/isaac_ros-dev/src && cd ~/workspaces/isaac_ros-dev/src
- ```
-
- ```bash
- git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common
- ```
-
- ```bash
- git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nitros
- ```
-
- ```bash
- git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_image_pipeline
- ```
-
- ```bash
- git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_stereo_disparity
- ```
-
-3. Pull down a ROS Bag of sample data:
-
- ```bash
- cd ~/workspaces/isaac_ros-dev/src/isaac_ros_dnn_stereo_disparity && \
- git lfs pull -X "" -I "resources/rosbags/ess_rosbag"
- ```
-
-4. Launch the Docker container using the `run_dev.sh` script:
-
- ```bash
- cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common && ./scripts/run_dev.sh
- ```
-
-5. Download the pre-trained ESS model from the [ESS model page](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/isaac/models/dnn_stereo_disparity):
-
- ```bash
- cd /workspaces/isaac_ros-dev/src/isaac_ros_dnn_stereo_disparity/resources && \
- wget 'https://api.ngc.nvidia.com/v2/models/nvidia/isaac/dnn_stereo_disparity/versions/1.1.0/files/ess.etlt'
- ```
-
-6. Convert the encrypted model (`.etlt`) to a TensorRT engine plan:
-
- ```bash
- /opt/nvidia/tao/tao-converter -k ess -t fp16 -e /workspaces/isaac_ros-dev/src/isaac_ros_dnn_stereo_disparity/resources/ess.engine -o output_left /workspaces/isaac_ros-dev/src/isaac_ros_dnn_stereo_disparity/resources/ess.etlt
- ```
-
-7. Build and source the workspace:
-
- ```bash
- cd /workspaces/isaac_ros-dev && \
- colcon build && \
- source install/setup.bash
- ```
-
- **Note**: We recommend rebuilding the package each time the source files are edited. Before rebuilding, first clean the workspace by running `rm -r build install log`.
-
-8. (Optional) Run tests to verify complete and correct installation:
-
- ```bash
- colcon test --executor sequential
- ```
-
-9. Launch the ESS Disparity Node:
-
- ```bash
- ros2 launch isaac_ros_ess isaac_ros_ess.launch.py engine_file_path:=/workspaces/isaac_ros-dev/src/isaac_ros_dnn_stereo_disparity/resources/ess.engine
- ```
-
-10. Open a **second terminal** and attach to the container:
-
- ```bash
- cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common && \
- ./scripts/run_dev.sh && \
- source install/setup.bash
- ```
-
-11. In the **second terminal**, visualize and validate the output of the package:
+| Sample Graph | Input Size | AGX Orin | Orin NX | x86_64 w/ RTX 4060 Ti |
+| --- | --- | --- | --- | --- |
+| [DNN Stereo Disparity Node](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/scripts/isaac_ros_ess_node.py)<br>Full | 576p | [78.8 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_ess_node-agx_orin.json)<br>4.1 ms | [27.2 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_ess_node-orin_nx.json)<br>6.2 ms | [204 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_ess_node-nuc_4060ti.json)<br>4.4 ms |
+| [DNN Stereo Disparity Node](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/scripts/isaac_ros_light_ess_node.py)<br>Light | 288p | [288 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_light_ess_node-agx_orin.json)<br>5.6 ms | [128 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_light_ess_node-orin_nx.json)<br>5.6 ms | [350 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_light_ess_node-nuc_4060ti.json)<br>4.2 ms |
+| [DNN Stereo Disparity Graph](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/scripts/isaac_ros_ess_graph.py)<br>Full | 576p | [74.0 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_ess_graph-agx_orin.json)<br>20 ms | [26.1 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_ess_graph-orin_nx.json)<br>42 ms | [191 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_ess_graph-nuc_4060ti.json)<br>11 ms |
+| [DNN Stereo Disparity Graph](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/scripts/isaac_ros_light_ess_graph.py)<br>Light | 288p | [260 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_light_ess_graph-agx_orin.json)<br>13 ms | [116 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_light_ess_graph-orin_nx.json)<br>16 ms | [350 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_light_ess_graph-nuc_4060ti.json)<br>12 ms |
- ```bash
- ros2 run isaac_ros_ess isaac_ros_ess_visualizer.py --enable_rosbag
- ```
-
- 
-
-## Next Steps
-
-### Try Advanced Examples
-
-To continue exploring the DNN Stereo Disparity package, check out the following suggested examples:
-
-- [Generating disparity maps from a stereo pair of image files](./docs/visualize-image.md)
-- [Tutorial with RealSense and isaac_ros_ess](./docs/tutorial-ess-realsense.md)
-- [Tutorial with Isaac Sim](./docs/tutorial-isaac-sim.md)
-
-### Try NITROS-Accelerated Graph with Argus
-
-If you have an Argus-compatible camera, you can launch the NITROS-accelerated graph by following the [tutorial](docs/tutorial-nitros-graph.md).
-
-### Use Different Models
-
-Click [here](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference/blob/main/docs/model-preparation.md) for more information about how to use NGC models.
-
-### Customize Your Dev Environment
-
-To customize your development environment, reference [this guide](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/docs/modify-dockerfile.md).
-
-## Package Reference
-
-### `isaac_ros_ess`
-
-#### Overview
-
-The `isaac_ros_ess` package offers functionality to generate a stereo disparity map from stereo images using a trained ESS model. Given a pair of stereo input images, the package generates a continuous disparity image for the left input image.
-
-#### Usage
-
-```bash
-ros2 launch isaac_ros_ess isaac_ros_ess.launch.py engine_file_path:=
-```
-
-#### ROS Parameters
-
-| ROS Parameter | Type | Default | Description |
-| ------------------ | -------- | -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| `engine_file_path` | `string` | N/A - Required | The absolute path to the ESS engine file. |
-| `image_type` | `string` | `"RGB_U8"` | The input image encoding type. Supports `"RGB_U8"` and `"BGR_U8"`.<br>Note that if this parameter is not specified and there is an upstream Isaac ROS NITROS node, the type will be decided automatically. |
-
-#### ROS Topics Subscribed
-
-| ROS Topic | Interface | Description |
-| ------------------- | -------------------------------------------------------------------------------------------------------------- | --------------------------------- |
-| `left/image_rect` | [sensor_msgs/Image](https://github.com/ros2/common_interfaces/blob/humble/sensor_msgs/msg/Image.msg) | The left image of a stereo pair. |
-| `right/image_rect` | [sensor_msgs/Image](https://github.com/ros2/common_interfaces/blob/humble/sensor_msgs/msg/Image.msg) | The right image of a stereo pair. |
-| `left/camera_info` | [sensor_msgs/CameraInfo](https://github.com/ros2/common_interfaces/blob/humble/sensor_msgs/msg/CameraInfo.msg) | The left camera model. |
-| `right/camera_info` | [sensor_msgs/CameraInfo](https://github.com/ros2/common_interfaces/blob/humble/sensor_msgs/msg/CameraInfo.msg) | The right camera model. |
-> Note: The images on the input topics (`left/image_rect` and `right/image_rect`) should be color images in either `rgb8` or `bgr8` format.
-
-#### ROS Topics Published
-
-| ROS Topic | Interface | Description |
-| ----------- | ---------------------------------------------------------------------------------------------------------------------- | ------------------------------------------- |
-| `disparity` | [stereo_msgs/DisparityImage](https://github.com/ros2/common_interfaces/blob/humble/stereo_msgs/msg/DisparityImage.msg) | The continuous stereo disparity estimation. |
-
-#### Input Restrictions
-
-1. The input left and right images must have the **same dimension and resolution**, and the resolution must be **no larger than `1920x1200`**.
-
-2. Each input pair (`left/image_rect`, `right/image_rect`, `left/camera_info`, and `right/camera_info`) should have the **same timestamp**; otherwise, the synchronizing module inside the ESS Disparity Node will drop the messages with earlier timestamps.
-
-#### Output Interpretations
-
-1. The `isaac_ros_ess` package outputs a disparity image with the same resolution as the input stereo pairs. The input images are rescaled to the ESS model input size before inference, and the output prediction is rescaled back before publishing. To avoid this rescaling, use input images with the model-native resolution: `W=960, H=576`.
-
-2. The left and right `CameraInfo` are used to composite a `stereo_msgs/DisparityImage`. If you only care about the disparity image, and don't need the baseline and focal length information, you can pass dummy camera messages.
-
-## Troubleshooting
-
-### Isaac ROS Troubleshooting
-
-For solutions to problems with Isaac ROS, check [here](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/docs/troubleshooting.md).
-
-### Deep Learning Troubleshooting
-
-For solutions to problems using DNN models, check [here](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference/blob/main/docs/troubleshooting.md).
-
-### Package not found while launching the visualizer script
-
-#### Symptom
-
-```bash
-$ ros2 run isaac_ros_ess isaac_ros_ess_visualizer.py
-Package 'isaac_ros_ess' not found
-```
-
-#### Solution
-
-Use the `colcon build --packages-up-to isaac_ros_ess` command to build `isaac_ros_ess`; do not use the `--symlink-install` option. Run `source install/setup.bash` after the build.
-
-### Problem reserving CacheChange in reader
+---
-#### Symptom
+## Documentation
-When using a ROS bag as input, `isaac_ros_ess` throws an error if the input topics are published too fast:
+Please visit the [Isaac ROS Documentation](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_dnn_stereo_depth/index.html) to learn how to use this repository.
-```bash
-[component_container-1] 2022-06-24 09:04:43.584 [RTPS_MSG_IN Error] (ID:281473268431152) Problem reserving CacheChange in reader: 01.0f.cd.10.ab.f2.65.b6.01.00.00.00|0.0.20.4 -> Function processDataMsg
-```
+---
-#### Solution
+## Packages
-Make sure that the ROS bag has a reasonable size and publish rate.
+* [`isaac_ros_ess`](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_dnn_stereo_depth/isaac_ros_ess/index.html)
+ * [Quickstart](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_dnn_stereo_depth/isaac_ros_ess/index.html#quickstart)
+ * [Try More Examples](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_dnn_stereo_depth/isaac_ros_ess/index.html#try-more-examples)
+ * [Troubleshooting](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_dnn_stereo_depth/isaac_ros_ess/index.html#troubleshooting)
+ * [API](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_dnn_stereo_depth/isaac_ros_ess/index.html#api)
-## Updates
+## Latest
-| Date | Changes |
-| ---------- | ------------------------------------------ |
-| 2023-05-25 | Upgraded model (1.1.0) |
-| 2023-04-05 | Source available GXF extensions |
-| 2022-10-19 | Updated OSS licensing |
-| 2022-08-31 | Update to be compatible with JetPack 5.0.2 |
-| 2022-06-30 | Initial release |
+Update 2023-10-18: Updated for ESS 3.0, which adds confidence thresholding and multiple model resolutions.
diff --git a/docs/tutorial-ess-realsense.md b/docs/tutorial-ess-realsense.md
deleted file mode 100644
index cf6afbf..0000000
--- a/docs/tutorial-ess-realsense.md
+++ /dev/null
@@ -1,34 +0,0 @@
-# Tutorial for DNN Stereo Depth Estimation using a RealSense camera
-
-
-
-## Overview
-
-This tutorial demonstrates how to perform stereo-camera-based reconstruction using a [RealSense](https://www.intel.com/content/www/us/en/architecture-and-technology/realsense-overview.html) camera and [isaac_ros_ess](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_stereo_disparity).
-
-> **Note**: This tutorial requires a compatible RealSense camera from the list available [here](https://github.com/NVIDIA-ISAAC-ROS/.github/blob/main/profile/realsense-setup.md#camera-compatibility).
-
-## Tutorial Walkthrough
-
-1. Complete the [realsense setup tutorial](https://github.com/NVIDIA-ISAAC-ROS/.github/blob/main/profile/realsense-setup.md).
-2. Complete steps 1-2 described in the [Quickstart Guide](../README.md#quickstart).
-3. Open a new terminal and launch the Docker container using the `run_dev.sh` script:
-
- ```bash
- cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common && \
- ./scripts/run_dev.sh
- ```
-
-4. Build and source the workspace:
-
- ```bash
- cd /workspaces/isaac_ros-dev && \
- colcon build --symlink-install && \
- source install/setup.bash
- ```
-
-5. Run the launch file, which launches the example and waits for 10 seconds:
-
- ```bash
- ros2 launch isaac_ros_ess isaac_ros_ess_realsense.launch.py engine_file_path:=/workspaces/isaac_ros-dev/src/isaac_ros_dnn_stereo_disparity/resources/ess.engine
- ```
diff --git a/docs/tutorial-isaac-sim.md b/docs/tutorial-isaac-sim.md
deleted file mode 100644
index 675bc76..0000000
--- a/docs/tutorial-isaac-sim.md
+++ /dev/null
@@ -1,47 +0,0 @@
-# Tutorial for DNN Stereo Depth Estimation with Isaac Sim
-
-
-
-## Overview
-
-This tutorial walks you through a graph to [estimate depth](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_stereo_disparity) with stereo images from Isaac Sim.
-
-Last validated with [Isaac Sim 2022.2.1](https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/release_notes.html#id1)
-
-## Tutorial Walkthrough
-
-1. Complete steps 1-7 listed in the [Quickstart section](../README.md#quickstart) of the main README.
-2. Install and launch Isaac Sim following the steps in the [Isaac ROS Isaac Sim Setup Guide](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/docs/isaac-sim-sil-setup.md)
-3. Open the Isaac ROS Common USD scene (using the *Content* tab) located at:
-
- ```text
- http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/Isaac/2022.2.1/Isaac/Samples/ROS2/Scenario/carter_warehouse_apriltags_worker.usd
- ```
-
- Wait for the scene to load completely.
-4. Go to the *Stage* tab and select `/World/Carter_ROS/ROS_Cameras/ros2_create_camera_right_info`, then in *Property* tab *-> OmniGraph Node -> Inputs -> stereoOffset X* change `0` to `-175.92`.
- 
-5. Enable the right camera for a stereo image pair. Go to the *Stage* tab and select `/World/Carter_ROS/ROS_Cameras/enable_camera_right`, then tick the *Condition* checkbox.
- 
-6. Press **Play** to start publishing data from the Isaac Sim application.
- 
-7. Open a second terminal and attach to the container:
-
- ```bash
- cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common && \
- ./scripts/run_dev.sh
- ```
-
-8. In the second terminal, start the `isaac_ros_ess` node using the launch files:
-
- ```bash
- ros2 launch isaac_ros_ess isaac_ros_ess_isaac_sim.launch.py engine_file_path:=/workspaces/isaac_ros-dev/src/isaac_ros_dnn_stereo_disparity/resources/ess.engine
- ```
-
-9. Optionally, you can run the visualizer script to visualize the disparity image.
-
- ```bash
- ros2 run isaac_ros_ess isaac_ros_ess_visualizer.py
- ```
-
- 
diff --git a/docs/tutorial-nitros-graph.md b/docs/tutorial-nitros-graph.md
deleted file mode 100644
index 967c5a2..0000000
--- a/docs/tutorial-nitros-graph.md
+++ /dev/null
@@ -1,45 +0,0 @@
-# Tutorial to Run NITROS-Accelerated Graph with Argus Camera
-
-```mermaid
-graph LR;
- argus_node("ArgusStereoNode (Raw Image)") --> left_rectify_node("RectifyNode (Rectified Image)");
- argus_node --> right_rectify_node("RectifyNode (Rectified Image)");
- left_rectify_node --> ess_node("ESSDisparityNode (DNN Inference)");
- right_rectify_node --> ess_node;
- ess_node --> point_cloud_point("PointCloudNode (Point Cloud Output)");
-```
-
-If you have an [Argus-compatible camera](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_argus_camera), you can also use the launch file provided in this package to start a fully NITROS-accelerated stereo disparity graph.
-
-To start the graph, follow the steps below:
-
-1. Follow the [Quickstart section](../README.md#quickstart) up to step 6 in the main README.
-
-2. Outside the container, clone an additional repository required to run an Argus-compatible camera under `~/workspaces/isaac_ros-dev/src`:
-
- ```bash
- cd ~/workspaces/isaac_ros-dev/src
- ```
-
- ```bash
- git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_argus_camera
- ```
-
-3. Inside the container, build and source the workspace:
-
- ```bash
- cd /workspaces/isaac_ros-dev && \
- colcon build && \
- source install/setup.bash
- ```
-
-4. (Optional) Run tests to verify complete and correct installation:
-
- ```bash
- colcon test --executor sequential
- ```
-
-5. Launch ESS Disparity Node:
-
- ```bash
- ros2 launch isaac_ros_ess isaac_ros_argus_ess.launch.py engine_file_path:=/workspaces/isaac_ros-dev/src/isaac_ros_dnn_stereo_disparity/resources/ess.engine
diff --git a/docs/visualize-image.md b/docs/visualize-image.md
deleted file mode 100644
index 25f3f4c..0000000
--- a/docs/visualize-image.md
+++ /dev/null
@@ -1,37 +0,0 @@
-# Instructions to Generate Disparity Maps for Stereo Images
-
-These are instructions for generating a disparity map for a given stereo image pair.
-
-In addition to supporting the ROS Bag input type, the `isaac_ros_ess_visualizer.py` script also supports raw input images and camera info files. Follow the steps to generate a disparity estimation from raw inputs:
-
-1. Complete the [Quickstart](../README.md#quickstart) guide first.
-
-2. Pull the example data:
-
- ```bash
- cd ~/workspaces/isaac_ros-dev/src/isaac_ros_dnn_stereo_disparity && \
- git lfs pull -X "" -I "resources/examples"
- ```
-
-3. Launch the ESS Disparity Node:
-
- ```bash
- ros2 launch isaac_ros_ess isaac_ros_ess.launch.py engine_file_path:=/workspaces/isaac_ros-dev/src/isaac_ros_dnn_stereo_disparity/resources/ess.engine
- ```
-
-4. Visualize and validate the output of the package:
-
- ```bash
- ros2 run isaac_ros_ess isaac_ros_ess_visualizer.py --raw_inputs
- ```
-
- 
-
-5. Try your own examples:
-
- ```bash
- ros2 run isaac_ros_ess isaac_ros_ess_visualizer.py --raw_inputs \
- --left_image_path '' \
- --right_image_path '' \
- --camera_info_path ''
- ```
diff --git a/isaac_ros_ess/CMakeLists.txt b/isaac_ros_ess/CMakeLists.txt
index 1ab88da..126790a 100644
--- a/isaac_ros_ess/CMakeLists.txt
+++ b/isaac_ros_ess/CMakeLists.txt
@@ -15,7 +15,7 @@
#
# SPDX-License-Identifier: Apache-2.0
-cmake_minimum_required(VERSION 3.23.2)
+cmake_minimum_required(VERSION 3.22.1)
project(isaac_ros_ess LANGUAGES C CXX)
if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
@@ -28,13 +28,15 @@ ament_auto_find_build_dependencies()
# isaac_ros_ess_node
ament_auto_add_library(isaac_ros_ess_node SHARED src/ess_disparity_node.cpp)
target_link_libraries(isaac_ros_ess_node)
-rclcpp_components_register_nodes(isaac_ros_ess_node "nvidia::isaac_ros::dnn_stereo_disparity::ESSDisparityNode")
-set(node_plugins "${node_plugins}nvidia::isaac_ros::dnn_stereo_disparity::ESSDisparityNode;$\n")
+rclcpp_components_register_nodes(isaac_ros_ess_node "nvidia::isaac_ros::dnn_stereo_depth::ESSDisparityNode")
+set(node_plugins "${node_plugins}nvidia::isaac_ros::dnn_stereo_depth::ESSDisparityNode;$\n")
### Install ESS extension built from source
add_subdirectory(gxf/ess)
install(TARGETS gxf_cvcore_ess DESTINATION share/${PROJECT_NAME}/gxf/lib/ess)
+add_subdirectory(gxf/thresholder)
+install(TARGETS gxf_thresholder DESTINATION share/${PROJECT_NAME}/gxf/lib/thresholder)
### End extensions
@@ -45,8 +47,15 @@ if(BUILD_TESTING)
find_package(ament_lint_auto REQUIRED)
ament_lint_auto_find_test_dependencies()
+
+  # The FindPythonInterp and FindPythonLibs modules were removed in CMake 3.27
+ if(POLICY CMP0148)
+ cmake_policy(SET CMP0148 OLD)
+ endif()
+
find_package(launch_testing_ament_cmake REQUIRED)
add_launch_test(test/isaac_ros_ess_test.py TIMEOUT "300")
+ add_launch_test(test/isaac_ros_ess_test_1_16HD_model.py TIMEOUT "300")
endif()
ament_auto_package(INSTALL_TO_SHARE launch config)
diff --git a/isaac_ros_ess/config/ess_inference.yaml b/isaac_ros_ess/config/ess_inference.yaml
index 9c27871..1fe00f1 100644
--- a/isaac_ros_ess/config/ess_inference.yaml
+++ b/isaac_ros_ess/config/ess_inference.yaml
@@ -129,7 +129,7 @@ components:
receiver: data_receiver_right
min_size: 1
- name: adapter
- type: nvidia::cvcore::tensor_ops::ImageAdapter
+ type: nvidia::isaac::tensor_ops::ImageAdapter
parameters:
message_type: VideoBuffer
- name: data_transmitter
@@ -137,6 +137,15 @@ components:
parameters:
capacity: 12
policy: 0
+- name: confidence_output
+ type: nvidia::gxf::DoubleBufferTransmitter
+ parameters:
+ capacity: 12
+ policy: 0
+- type: nvidia::gxf::DownstreamReceptiveSchedulingTerm
+ parameters:
+ transmitter: confidence_output
+ min_size: 1
- type: nvidia::gxf::DownstreamReceptiveSchedulingTerm
parameters:
transmitter: data_transmitter
@@ -147,27 +156,73 @@ components:
storage_type: 1
block_size: 18432000
num_blocks: 40
-- type: nvidia::cvcore::ESS
+- type: nvidia::isaac::ESSInference
parameters:
- output_name: disparity
+ output_name: frame
pool: pool
left_image_receiver: data_receiver_left
right_image_receiver: data_receiver_right
output_transmitter: data_transmitter
+ confidence_transmitter: confidence_output
output_adapter: adapter
image_type: RGB_U8
pixel_mean: [-128, -128, -128]
normalization: [0.00392156862, 0.00392156862, 0.00392156862]
standard_deviation: [0.5, 0.5, 0.5]
max_batch_size: 1
- input_layer_width: 960
- input_layer_height: 576
model_input_type: RGB_U8
+ onnx_file_path: onnx_file_path_placeholder
engine_file_path: engine_file_path_placeholder
input_layers_name: [input_left, input_right]
- output_layers_name: [output_left]
+ output_layers_name: [output_left, output_conf]
preprocess_type: RESIZE
---
+name: disparity_thresholder
+components:
+- name: image_input
+ type: nvidia::gxf::DoubleBufferReceiver
+ parameters:
+ capacity: 12
+ policy: 0
+- type: nvidia::gxf::MessageAvailableSchedulingTerm
+ parameters:
+ receiver: image_input
+ min_size: 1
+- name: confidence_input
+ type: nvidia::gxf::DoubleBufferReceiver
+ parameters:
+ capacity: 12
+ policy: 0
+- type: nvidia::gxf::MessageAvailableSchedulingTerm
+ parameters:
+ receiver: confidence_input
+ min_size: 1
+- name: masked_output
+ type: nvidia::gxf::DoubleBufferTransmitter
+ parameters:
+ capacity: 12
+ policy: 0
+- type: nvidia::gxf::DownstreamReceptiveSchedulingTerm
+ parameters:
+ transmitter: masked_output
+ min_size: 1
+- name: allocator
+ type: nvidia::gxf::BlockMemoryPool
+ parameters:
+ storage_type: 1
+ block_size: 9216000
+ num_blocks: 40
+- name: image_thresholder
+ type: nvidia::isaac::VideoBufferThresholder
+ parameters:
+ image_input: image_input
+ video_buffer_name: frame
+ mask_input: confidence_input
+ masked_output: masked_output
+ allocator: allocator
+ threshold: 0.9
+ fill_value_float: -1.0
+---
name: disparity_compositor
components:
- name: disparity_in
@@ -212,7 +267,7 @@ components:
storage_type: 1
block_size: 18432000
num_blocks: 40
-- type: nvidia::isaac_ros::DisparityCompositor
+- type: nvidia::isaac::DisparityCompositor
parameters:
left_camera_model_receiver: left_cam_receiver
right_camera_model_receiver: right_cam_receiver
@@ -258,6 +313,14 @@ components:
- type: nvidia::gxf::Connection
parameters:
source: ess/data_transmitter
+ target: disparity_thresholder/image_input
+- type: nvidia::gxf::Connection
+ parameters:
+ source: ess/confidence_output
+ target: disparity_thresholder/confidence_input
+- type: nvidia::gxf::Connection
+ parameters:
+ source: disparity_thresholder/masked_output
target: disparity_compositor/disparity_in
- type: nvidia::gxf::Connection
parameters:
diff --git a/isaac_ros_ess/config/isaac_ros_ess_isaac_sim.rviz b/isaac_ros_ess/config/isaac_ros_ess_isaac_sim.rviz
index a4a4e69..69d2995 100644
--- a/isaac_ros_ess/config/isaac_ros_ess_isaac_sim.rviz
+++ b/isaac_ros_ess/config/isaac_ros_ess_isaac_sim.rviz
@@ -7,7 +7,7 @@ Panels:
- /Global Options1
- /Status1
Splitter Ratio: 0.5
- Tree Height: 271
+ Tree Height: 335
- Class: rviz_common/Selection
Name: Selection
- Class: rviz_common/Tool Properties
@@ -93,7 +93,7 @@ Visualization Manager:
Durability Policy: Volatile
History Policy: Keep Last
Reliability Policy: Reliable
- Value: /rgb_left
+ Value: /front_stereo_camera/left_rgb/image_resize
Value: true
- Class: rviz_default_plugins/Image
Enabled: true
@@ -107,7 +107,7 @@ Visualization Manager:
Durability Policy: Volatile
History Policy: Keep Last
Reliability Policy: Reliable
- Value: /rgb_right
+ Value: /front_stereo_camera/left_rgb/image_resize
Value: true
Enabled: true
Global Options:
@@ -155,35 +155,35 @@ Visualization Manager:
Views:
Current:
Class: rviz_default_plugins/Orbit
- Distance: 0.9596342444419861
+ Distance: 4.150443077087402
Enable Stereo Rendering:
Stereo Eye Separation: 0.05999999865889549
Stereo Focal Distance: 1
Swap Stereo Eyes: false
Value: false
Focal Point:
- X: 0.10659492015838623
- Y: 0.048834092915058136
- Z: 0.3884413540363312
+ X: 0.616879940032959
+ Y: -0.006036696955561638
+ Z: 1.438449501991272
Focal Shape Fixed Size: true
Focal Shape Size: 0.05000000074505806
Invert Z Axis: false
Name: Current View
Near Clip Distance: 0.009999999776482582
- Pitch: 0.030397988855838776
+ Pitch: -0.09460201114416122
Target Frame:
Value: Orbit (rviz)
- Yaw: 3.1303980350494385
+ Yaw: 3.11539888381958
Saved: ~
Window Geometry:
Displays:
collapsed: false
- Height: 945
+ Height: 1066
Hide Left Dock: false
Hide Right Dock: false
Left Image:
collapsed: false
- QMainWindow State: 000000ff00000000fd00000004000000000000015600000317fc020000000afb0000001200530065006c0065006300740069006f006e00000001e10000009b0000005c00fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c006100790073010000003b00000198000000c700fffffffb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261fb00000014004c00650066007400200049006d00610067006501000001d9000000ba0000002800fffffffb000000160052006900670068007400200049006d0061006700650100000299000000b90000002800ffffff000000010000010f00000317fc0200000003fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a00560069006500770073010000003b00000317000000a000fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b200000000000000000000000200000490000000a9fc0100000001fb0000000a00560069006500770073030000004e00000080000002e10000019700000003000005790000003efc0100000002fb0000000800540069006d00650100000000000005790000024400fffffffb0000000800540069006d00650100000000000004500000000000000000000003080000031700000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
+ QMainWindow State: 000000ff00000000fd0000000400000000000001b300000390fc020000000afb0000001200530065006c0065006300740069006f006e00000001e10000009b0000005c00fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c006100790073010000003b000001d8000000c700fffffffb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261fb00000014004c00650066007400200049006d0061006700650100000219000000d90000002800fffffffb000000160052006900670068007400200049006d00610067006501000002f8000000d30000002800ffffff000000010000010000000390fc0200000003fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a00560069006500770073010000003b00000390000000a000fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b200000000000000000000000200000490000000a9fc0100000001fb0000000a00560069006500770073030000004e00000080000002e10000019700000003000007800000003efc0100000002fb0000000800540069006d00650100000000000007800000024400fffffffb0000000800540069006d00650100000000000004500000000000000000000004c10000039000000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
Right Image:
collapsed: false
Selection:
@@ -194,6 +194,6 @@ Window Geometry:
collapsed: false
Views:
collapsed: false
- Width: 1401
- X: 1285
- Y: 238
+ Width: 1920
+ X: 0
+ Y: 267
diff --git a/isaac_ros_ess/config/isaac_ros_ess_zed.rviz b/isaac_ros_ess/config/isaac_ros_ess_zed.rviz
new file mode 100644
index 0000000..7c5e75c
--- /dev/null
+++ b/isaac_ros_ess/config/isaac_ros_ess_zed.rviz
@@ -0,0 +1,197 @@
+Panels:
+ - Class: rviz_common/Displays
+ Help Height: 138
+ Name: Displays
+ Property Tree Widget:
+ Expanded:
+ - /Global Options1
+ - /Status1
+ Splitter Ratio: 0.5
+ Tree Height: 480
+ - Class: rviz_common/Selection
+ Name: Selection
+ - Class: rviz_common/Tool Properties
+ Expanded:
+ - /2D Goal Pose1
+ - /Publish Point1
+ Name: Tool Properties
+ Splitter Ratio: 0.5886790156364441
+ - Class: rviz_common/Views
+ Expanded:
+ - /Current View1
+ Name: Views
+ Splitter Ratio: 0.5
+ - Class: rviz_common/Time
+ Experimental: false
+ Name: Time
+ SyncMode: 0
+ SyncSource: PointCloud2
+Visualization Manager:
+ Class: ""
+ Displays:
+ - Alpha: 0.5
+ Cell Size: 1
+ Class: rviz_default_plugins/Grid
+ Color: 160; 160; 164
+ Enabled: true
+ Line Style:
+ Line Width: 0.029999999329447746
+ Value: Lines
+ Name: Grid
+ Normal Cell Count: 0
+ Offset:
+ X: 0
+ Y: 0
+ Z: 0
+ Plane: XY
+ Plane Cell Count: 10
+ Reference Frame:
+ Value: true
+ - Class: rviz_default_plugins/Image
+ Enabled: true
+ Max Value: 1
+ Median window: 5
+ Min Value: 0
+ Name: Image
+ Normalize Range: true
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /zed_node/left/image_rect_color_rgb
+ Value: true
+ - Class: rviz_default_plugins/Image
+ Enabled: true
+ Max Value: 1
+ Median window: 5
+ Min Value: 0
+ Name: Image
+ Normalize Range: true
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /zed_node/right/image_rect_color_rgb
+ Value: true
+ - Alpha: 1
+ Autocompute Intensity Bounds: true
+ Autocompute Value Bounds:
+ Max Value: 10
+ Min Value: -10
+ Value: true
+ Axis: Z
+ Channel Name: intensity
+ Class: rviz_default_plugins/PointCloud2
+ Color: 255; 255; 255
+ Color Transformer: RGB8
+ Decay Time: 0
+ Enabled: true
+ Invert Rainbow: false
+ Max Color: 255; 255; 255
+ Max Intensity: 4096
+ Min Color: 0; 0; 0
+ Min Intensity: 0
+ Name: PointCloud2
+ Position Transformer: XYZ
+ Selectable: true
+ Size (Pixels): 3
+ Size (m): 0.009999999776482582
+ Style: Flat Squares
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ Filter size: 10
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /points2
+ Use Fixed Frame: true
+ Use rainbow: true
+ Value: true
+ Enabled: true
+ Global Options:
+ Background Color: 48; 48; 48
+ Fixed Frame: zed2i_camera_center
+ Frame Rate: 30
+ Name: root
+ Tools:
+ - Class: rviz_default_plugins/Interact
+ Hide Inactive Objects: true
+ - Class: rviz_default_plugins/MoveCamera
+ - Class: rviz_default_plugins/Select
+ - Class: rviz_default_plugins/FocusCamera
+ - Class: rviz_default_plugins/Measure
+ Line color: 128; 128; 0
+ - Class: rviz_default_plugins/SetInitialPose
+ Covariance x: 0.25
+ Covariance y: 0.25
+ Covariance yaw: 0.06853891909122467
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /initialpose
+ - Class: rviz_default_plugins/SetGoal
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /goal_pose
+ - Class: rviz_default_plugins/PublishPoint
+ Single click: true
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /clicked_point
+ Transformation:
+ Current:
+ Class: rviz_default_plugins/TF
+ Value: true
+ Views:
+ Current:
+ Class: rviz_default_plugins/Orbit
+ Distance: 5.037819862365723
+ Enable Stereo Rendering:
+ Stereo Eye Separation: 0.05999999865889549
+ Stereo Focal Distance: 1
+ Swap Stereo Eyes: false
+ Value: false
+ Focal Point:
+ X: 0
+ Y: 0
+ Z: 0
+ Focal Shape Fixed Size: true
+ Focal Shape Size: 0.05000000074505806
+ Invert Z Axis: false
+ Name: Current View
+ Near Clip Distance: 0.009999999776482582
+ Pitch: 0.29539817571640015
+ Target Frame:
+ Value: Orbit (rviz)
+ Yaw: 2.8435826301574707
+ Saved: ~
+Window Geometry:
+ Displays:
+ collapsed: false
+ Height: 2272
+ Hide Left Dock: false
+ Hide Right Dock: false
+ Image:
+ collapsed: false
+ QMainWindow State: 000000ff00000000fd0000000400000000000005a8000007f1fc020000000afb0000001200530065006c0065006300740069006f006e00000001e10000009b000000ab00fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c0061007900730100000069000002ce0000017800fffffffb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261fb0000000a0049006d0061006700650100000343000002870000004500fffffffb0000000a0049006d00610067006501000005d6000002840000004500ffffff000000010000014f000007f1fc0200000003fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a005600690065007700730000000069000007f10000012300fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b200000000000000000000000200000490000000a9fc0100000001fb0000000a00560069006500770073030000004e00000080000002e1000001970000000300000e6c00000051fc0100000002fb0000000800540069006d0065010000000000000e6c0000045300fffffffb0000000800540069006d00650100000000000004500000000000000000000008b8000007f100000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
+ Selection:
+ collapsed: false
+ Time:
+ collapsed: false
+ Tool Properties:
+ collapsed: false
+ Views:
+ collapsed: false
+ Width: 3692
+ X: 148
+ Y: 54
diff --git a/isaac_ros_ess/config/realsense.yaml b/isaac_ros_ess/config/realsense.yaml
index 372d496..422ff0a 100644
--- a/isaac_ros_ess/config/realsense.yaml
+++ b/isaac_ros_ess/config/realsense.yaml
@@ -7,7 +7,7 @@ rgb_camera:
color_qos: "SENSOR_DATA"
depth_module:
- profile: '640x480x60'
+ profile: '1280x720x30'
emitter_enabled: 0
emitter_on_off: false
depth_qos: "SENSOR_DATA"
diff --git a/isaac_ros_ess/config/zed.yaml b/isaac_ros_ess/config/zed.yaml
new file mode 100644
index 0000000..d06a05f
--- /dev/null
+++ b/isaac_ros_ess/config/zed.yaml
@@ -0,0 +1,163 @@
+# config/common_yaml
+# Common parameters to Stereolabs ZED and ZED mini cameras
+#
+# Note: the parameter svo_file is passed as an exe argument
+---
+/**:
+ ros__parameters:
+
+ general:
+ svo_file: "" # usually overwritten by launch file
+ svo_loop: false # Enable loop mode when using an SVO as input source
+ svo_realtime: true # if true, the SVO is played back trying to respect the original framerate, skipping frames when needed; otherwise every frame is processed according to the `pub_frame_rate` setting
+ camera_timeout_sec: 5
+ camera_max_reconnect: 5
+ camera_flip: false
+ zed_id: 0 # IMPORTANT: to be used in simulation to distinguish individual cameras in multi-camera configurations - usually overwritten by launch file
+ serial_number: 0 # usually overwritten by launch file
+ pub_resolution: 'MEDIUM' # The resolution used for output. 'HD2K', 'HD1080', 'HD1200', 'HD720', 'MEDIUM', 'SVGA', 'VGA', 'LOW'
+ pub_frame_rate: 15.0 # [DYNAMIC] - frequency of publishing of visual images and depth images
+ gpu_id: -1
+ region_of_interest: '[]' # A polygon defining the ROI where the ZED SDK performs the processing, ignoring the rest. Coordinates must be normalized to '1.0' to be resolution independent.
+ #region_of_interest: '[[0.25,0.33],[0.75,0.33],[0.75,0.5],[0.5,0.75],[0.25,0.5]]' # A polygon defining the ROI where the ZED SDK performs the processing, ignoring the rest. Coordinates must be normalized to '1.0' to be resolution independent.
+ #region_of_interest: '[[0.25,0.25],[0.75,0.25],[0.75,0.75],[0.25,0.75]]' # A polygon defining the ROI where the ZED SDK performs the processing, ignoring the rest. Coordinates must be normalized to '1.0' to be resolution independent.
+ #region_of_interest: '[[0.5,0.25],[0.75,0.5],[0.5,0.75],[0.25,0.5]]' # A polygon defining the ROI where the ZED SDK performs the processing, ignoring the rest. Coordinates must be normalized to '1.0' to be resolution independent.
+ sdk_verbose: 1
+
+ video:
+ brightness: 4 # [DYNAMIC] Not available for ZED X/ZED X Mini
+ contrast: 4 # [DYNAMIC] Not available for ZED X/ZED X Mini
+ hue: 0 # [DYNAMIC] Not available for ZED X/ZED X Mini
+ saturation: 4 # [DYNAMIC]
+ sharpness: 4 # [DYNAMIC]
+ gamma: 8 # [DYNAMIC]
+ auto_exposure_gain: true # [DYNAMIC]
+ exposure: 80 # [DYNAMIC]
+ gain: 80 # [DYNAMIC]
+ auto_whitebalance: true # [DYNAMIC]
+ whitebalance_temperature: 42 # [DYNAMIC] - [28,65] works only if `auto_whitebalance` is false
+ qos_history: 1 # '1': KEEP_LAST - '2': KEEP_ALL
+ qos_depth: 1 # Queue size if using KEEP_LAST
+ qos_reliability: 1 # '1': RELIABLE - '2': BEST_EFFORT -
+ qos_durability: 2 # '1': TRANSIENT_LOCAL - '2': VOLATILE
+
+ depth:
+ depth_mode: 'NONE' # Matches the ZED SDK setting: 'NONE', 'PERFORMANCE', 'QUALITY', 'ULTRA', 'NEURAL' - Note: if 'NONE', all the modules that require depth extraction are disabled by default (Pos. Tracking, Obj. Detection, Mapping, ...)
+ depth_stabilization: 1 # Forces positional tracking to start if greater than 0 - Range: [0,100]
+ openni_depth_mode: false # 'false': 32bit float [meters], 'true': 16bit unsigned int [millimeters]
+ point_cloud_freq: 10.0 # [DYNAMIC] - frequency of the pointcloud publishing (equal to or less than the `grab_frame_rate` value)
+ depth_confidence: 50 # [DYNAMIC]
+ depth_texture_conf: 100 # [DYNAMIC]
+ remove_saturated_areas: true # [DYNAMIC]
+ qos_history: 1 # '1': KEEP_LAST - '2': KEEP_ALL
+ qos_depth: 1 # Queue size if using KEEP_LAST
+ qos_reliability: 1 # '1': RELIABLE - '2': BEST_EFFORT -
+ qos_durability: 2 # '1': TRANSIENT_LOCAL - '2': VOLATILE
+
+ pos_tracking:
+ pos_tracking_enabled: false # True to enable positional tracking from start
+ imu_fusion: true # enable/disable IMU fusion. When set to false, only the optical odometry will be used.
+ publish_tf: true # [usually overwritten by launch file] publish `odom -> base_link` TF
+ publish_map_tf: true # [usually overwritten by launch file] publish `map -> odom` TF
+ publish_imu_tf: true # [usually overwritten by launch file] enable/disable the IMU TF broadcasting
+ base_frame: "base_link" # usually overwritten by launch file
+ map_frame: "map"
+ odometry_frame: "odom"
+ area_memory_db_path: ""
+ area_memory: true # Enable to detect loop closure
+ depth_min_range: 0.0 # Set this value for removing fixed zones of the robot in the FoV of the camera from the visual odometry evaluation
+ set_as_static: false # If 'true' the camera will be static and not move in the environment
+ set_gravity_as_origin: true # If 'true' align the positional tracking world to imu gravity measurement. Keep the yaw from the user initial pose.
+ floor_alignment: false # Enable to automatically calculate camera/floor offset
+ initial_base_pose: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0] # Initial position of the `base_frame` in the map -> [X, Y, Z, R, P, Y]
+ init_odom_with_first_valid_pose: true # Enable to initialize the odometry with the first valid pose
+ path_pub_rate: 2.0 # [DYNAMIC] - Camera trajectory publishing frequency
+ path_max_count: -1 # use '-1' for unlimited path size
+ two_d_mode: false # Force navigation on a plane. If true the Z value will be fixed to "fixed_z_value", roll and pitch to zero
+ fixed_z_value: 0.00 # Value to be used for Z coordinate if `two_d_mode` is true
+ transform_time_offset: 0.0 # The value added to the timestamp of the `map->odom` and `odom->base_link` transforms being generated
+ qos_history: 1 # '1': KEEP_LAST - '2': KEEP_ALL
+ qos_depth: 1 # Queue size if using KEEP_LAST
+ qos_reliability: 1 # '1': RELIABLE - '2': BEST_EFFORT
+ qos_durability: 2 # '1': TRANSIENT_LOCAL - '2': VOLATILE
+
+ gnss_fusion:
+ gnss_fusion_enabled: false # fuse 'sensor_msg/NavSatFix' message information into pose data
+ gnss_fix_topic: "/gps/fix" # Name of the GNSS topic of type NavSatFix to subscribe [Default: "/gps/fix"]
+ gnss_init_distance: 5.0 # The minimum distance traveled by the robot required to initialize the GNSS fusion
+ gnss_zero_altitude: false # Set to `true` to ignore GNSS altitude information
+ gnss_frame: "gnss_link" # [usually overwritten by launch file] The TF frame of the GNSS sensor
+ publish_utm_tf: true # Publish `utm` -> `map` TF
+ broadcast_utm_transform_as_parent_frame: false # if 'true' publish `utm` -> `map` TF, otherwise `map` -> `utm`
+
+ mapping:
+ mapping_enabled: false # True to enable mapping and fused point cloud publication
+ resolution: 0.05 # maps resolution in meters [min: 0.01f - max: 0.2f]
+ max_mapping_range: 5.0 # maximum depth range while mapping in meters (-1 for automatic calculation) [2.0, 20.0]
+ fused_pointcloud_freq: 1.0 # frequency of the publishing of the fused colored point cloud
+ clicked_point_topic: "/clicked_point" # Topic published by Rviz when a point of the cloud is clicked. Used for plane detection
+ qos_history: 1 # '1': KEEP_LAST - '2': KEEP_ALL
+ qos_depth: 1 # Queue size if using KEEP_LAST
+ qos_reliability: 1 # '1': RELIABLE - '2': BEST_EFFORT -
+ qos_durability: 2 # '1': TRANSIENT_LOCAL - '2': VOLATILE
+
+ sensors:
+ sensors_image_sync: false # Synchronize Sensors messages with latest published video/depth message
+ sensors_pub_rate: 200. # frequency of publishing of sensors data. MAX: 400. - MIN: grab rate
+ qos_history: 1 # '1': KEEP_LAST - '2': KEEP_ALL
+ qos_depth: 1 # Queue size if using KEEP_LAST
+ qos_reliability: 1 # '1': RELIABLE - '2': BEST_EFFORT -
+ qos_durability: 2 # '1': TRANSIENT_LOCAL - '2': VOLATILE
+
+ object_detection:
+ od_enabled: false # True to enable Object Detection
+ model: 'MULTI_CLASS_BOX_MEDIUM' # 'MULTI_CLASS_BOX_FAST', 'MULTI_CLASS_BOX_MEDIUM', 'MULTI_CLASS_BOX_ACCURATE', 'PERSON_HEAD_BOX_FAST', 'PERSON_HEAD_BOX_ACCURATE'
+ allow_reduced_precision_inference: true # Allow inference to run at a lower precision to improve runtime and memory usage
+ max_range: 20.0 # [m] Defines an upper depth range for detections
+ confidence_threshold: 50.0 # [DYNAMIC] - Minimum value of the detection confidence of an object [0,99]
+ prediction_timeout: 0.5 # During this time [sec], the object will have OK state even if it is not detected. Set this parameter to 0 to disable SDK predictions
+ filtering_mode: 1 # '0': NONE - '1': NMS3D - '2': NMS3D_PER_CLASS
+ mc_people: false # [DYNAMIC] - Enable/disable the detection of persons for 'MULTI_CLASS_X' models
+ mc_vehicle: true # [DYNAMIC] - Enable/disable the detection of vehicles for 'MULTI_CLASS_X' models
+ mc_bag: true # [DYNAMIC] - Enable/disable the detection of bags for 'MULTI_CLASS_X' models
+ mc_animal: true # [DYNAMIC] - Enable/disable the detection of animals for 'MULTI_CLASS_X' models
+ mc_electronics: true # [DYNAMIC] - Enable/disable the detection of electronic devices for 'MULTI_CLASS_X' models
+ mc_fruit_vegetable: true # [DYNAMIC] - Enable/disable the detection of fruits and vegetables for 'MULTI_CLASS_X' models
+ mc_sport: true # [DYNAMIC] - Enable/disable the detection of sport-related objects for 'MULTI_CLASS_X' models
+ qos_history: 1 # '1': KEEP_LAST - '2': KEEP_ALL
+ qos_depth: 1 # Queue size if using KEEP_LAST
+ qos_reliability: 1 # '1': RELIABLE - '2': BEST_EFFORT
+ qos_durability: 2 # '1': TRANSIENT_LOCAL - '2': VOLATILE
+
+ body_tracking:
+ bt_enabled: false # True to enable Body Tracking
+ model: 'HUMAN_BODY_MEDIUM' # 'HUMAN_BODY_FAST', 'HUMAN_BODY_MEDIUM', 'HUMAN_BODY_ACCURATE'
+ body_format: 'BODY_38' # 'BODY_18','BODY_34','BODY_38','BODY_70'
+ allow_reduced_precision_inference: false # Allow inference to run at a lower precision to improve runtime and memory usage
+ max_range: 20.0 # [m] Defines an upper depth range for detections
+ body_kp_selection: 'FULL' # 'FULL', 'UPPER_BODY'
+ enable_body_fitting: false # Defines if the body fitting will be applied
+ enable_tracking: true # Defines if the object detection will track objects across images flow
+ prediction_timeout_s: 0.5 # During this time [sec], the skeleton will have OK state even if it is not detected. Set this parameter to 0 to disable SDK predictions
+ confidence_threshold: 50.0 # [DYNAMIC] - Minimum value of the detection confidence of skeleton key points [0,99]
+ minimum_keypoints_threshold: 5 # [DYNAMIC] - Minimum number of skeleton key points to be detected for a valid skeleton
+ qos_history: 1 # '1': KEEP_LAST - '2': KEEP_ALL
+ qos_depth: 1 # Queue size if using KEEP_LAST
+ qos_reliability: 1 # '1': RELIABLE - '2': BEST_EFFORT
+ qos_durability: 2 # '1': TRANSIENT_LOCAL - '2': VOLATILE
+
+ use_sim_time: false # EXPERIMENTAL (only for development) - Set to true to enable SIMULATION mode #
+ sim_address: '192.168.1.90' # EXPERIMENTAL (only for development) - The local address of the machine running the simulator
+
+ debug:
+ debug_common: false
+ debug_video_depth: false
+ debug_camera_controls: false
+ debug_point_cloud: false
+ debug_positional_tracking: false
+ debug_gnss: false
+ debug_sensors: false
+ debug_mapping: false
+ debug_terrain_mapping: false
+ debug_object_detection: false
+ debug_body_tracking: false
\ No newline at end of file
diff --git a/isaac_ros_ess/gxf/ess/3dv/include/cv/ess/ESS.h b/isaac_ros_ess/gxf/ess/3dv/include/cv/ess/ESS.h
deleted file mode 100644
index 277165d..0000000
--- a/isaac_ros_ess/gxf/ess/3dv/include/cv/ess/ESS.h
+++ /dev/null
@@ -1,203 +0,0 @@
-// SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
-// Copyright (c) 2022-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-//
-// SPDX-License-Identifier: Apache-2.0
-
-#ifndef NV_ESS_H_
-#define NV_ESS_H_
-
-#include
-
-#include
-
-#include
-#include
-#include
-#include
-
-namespace cvcore { namespace ess {
-
-/**
- * Describes the algorithm supported for ESS Preprocessing
- */
-enum class PreProcessType : uint8_t
-{
- RESIZE = 0, // Resize to network dimensions without maintaining aspect ratio
- CENTER_CROP // Crop to network dimensions from center of image
-};
-
-/**
- * Describes the parameters for ESS Preprocessing
- */
-struct ESSPreProcessorParams
-{
- /* Preprocessing steps for ESS */
- PreProcessType preProcessType;
-};
-
-/**
- * Default parameters for the preprocessing pipeline.
- */
-CVCORE_API extern const ImagePreProcessingParams defaultPreProcessorParams;
-
-/**
- * Default parameters to describe the input expected for the model.
- */
-CVCORE_API extern const ModelInputParams defaultModelInputParams;
-
-/**
- * Default parameters to describe the model inference parameters.
- */
-CVCORE_API extern const ModelInferenceParams defaultInferenceParams;
-
-/**
- * Default parameters for the ESS Preprocessing
- */
-CVCORE_API extern const ESSPreProcessorParams defaultESSPreProcessorParams;
-
-/*
- * Interface for running pre-processing on ESS model.
- */
-class CVCORE_API ESSPreProcessor
-{
-public:
- /**
- * Default constructor is deleted
- */
- ESSPreProcessor() = delete;
-
- /**
- * Constructor of ESSPreProcessor.
- * @param preProcessorParams image pre-processing parameters.
- * @param modelInputParams model paramters for network.
- * @param essPreProcessorParams paramaters specific for ess preprocessing.
- */
- ESSPreProcessor(const ImagePreProcessingParams &preProcessorParams, const ModelInputParams &modelInputParams,
- const ESSPreProcessorParams &essPreProcessorParams);
-
- /**
- * Destructor of ESSPreProcessor.
- */
- ~ESSPreProcessor();
-
- /**
- * Main interface to run pre-processing.
- * @param stream cuda stream.
- */
-
- void execute(Tensor &leftOutput, Tensor &rightOutput,
- const Tensor &leftInput, const Tensor &rightInput,
- cudaStream_t stream = 0);
-
-private:
- /**
- * Implementation of ESSPreProcessor.
- */
- struct ESSPreProcessorImpl;
- std::unique_ptr m_pImpl;
-};
-
-/**
- * ESS parameters and implementation
- */
-class CVCORE_API ESS
-{
-public:
- /**
- * Constructor for ESS.
- * @param imgparams image pre-processing parameters.
- * @param modelInputParams model parameters for network.
- * @param modelInferParams model input inference parameters.
- * @param essPreProcessorParams paramaters specific for ess preprocessing.
- */
- ESS(const ImagePreProcessingParams &imgparams, const ModelInputParams &modelParams,
- const ModelInferenceParams &modelInferParams, const ESSPreProcessorParams &essPreProcessorParams);
-
- /**
- * Default constructor not supported.
- */
- ESS() = delete;
-
- /**
- * Destructor for ESS.
- */
- ~ESS();
-
- /**
- * Inference function for a given BGR image
- * @param disparityMap Disparity map (CPU/GPU tensor supported)
- * @param leftInput RGB/BGR Interleaved Left image (Only GPU Input Tensor
- * supported)
- * @param rightInput RGB/BGR Interleaved Right image (Only GPU Input Tensor
- * supported)
- * @param stream Cuda stream
- */
- void execute(Tensor &disparityMap, const Tensor &leftInput,
- const Tensor &rightInput, cudaStream_t stream = 0);
-
- /**
- * Inference function for a given Preprocessed image
- * @param disparityMap Disparity map (CPU/GPU tensor supported)
- * @param leftInput RGB Planar Left image resized to network input dimensions (Only GPU Input Tensor
- * supported)
- * @param rightInput RGB Planar Right image resized to network input dimensions (Only GPU Input Tensor
- * supported)
- * @param stream Cuda stream
- */
- void execute(Tensor &disparityMap, const Tensor &leftInput,
- const Tensor &rightInput, cudaStream_t stream = 0);
-
-private:
- struct ESSImpl;
- std::unique_ptr m_pImpl;
-};
-
-/**
- * ESS disparity post-processing parameters and implementation
- */
-class CVCORE_API ESSPostProcessor
-{
-public:
- /**
- * Constructor for ESSPostProcessor.
- * @param modelParams model input parameters for the network.
- */
- ESSPostProcessor(const ModelInputParams &modelParams);
- /**
- * Default constructor not supported.
- */
- ESSPostProcessor() = delete;
-
- /**
- * Destructor for ESSPostProcessor.
- */
- ~ESSPostProcessor();
-
- /**
- * Post-processing function for the network disparity output
- * @param outputdisparityMap Disparity map rescaled to the original resolution (CPU/GPU tensor)
- * @param inputdisparityMap input Disparity map (GPU tensor)
- * @param stream CUDA stream
- */
- void execute(Tensor &outputdisparityMap, const Tensor &inputdisparityMap,
- cudaStream_t stream = 0);
-
-private:
- struct ESSPostProcessorImpl;
- std::unique_ptr<ESSPostProcessorImpl> m_pImpl;
-};
-
-}} // namespace cvcore::ess
-#endif
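The three classes removed from this header (`ESSPreProcessor`, `ESS`, `ESSPostProcessor`) all follow the same layout: a public class that forwards to an opaque `...Impl` struct held behind a `std::unique_ptr`. A minimal, self-contained sketch of that pimpl idiom (all names here are hypothetical stand-ins, not part of the deleted API):

```cpp
#include <memory>

// Pimpl idiom as used by ESS/ESSPreProcessor/ESSPostProcessor: the public
// class owns an opaque impl struct behind a std::unique_ptr, keeping heavy
// dependencies (TensorRT, CUDA headers) out of the public header.
class Widget {  // stands in for e.g. cvcore::ess::ESS
public:
    explicit Widget(int param);
    ~Widget();  // must be defined where WidgetImpl is complete
    int run(int x) const;

private:
    struct WidgetImpl;
    std::unique_ptr<WidgetImpl> m_pImpl;
};

// Out-of-line definitions; these would live in the .cpp file.
struct Widget::WidgetImpl {
    int param;
    int run(int x) const { return x * param; }
};

Widget::Widget(int param) : m_pImpl(new WidgetImpl{param}) {}
Widget::~Widget() = default;
int Widget::run(int x) const { return m_pImpl->run(x); }
```

Note that the destructor is declared in the header but defined out-of-line, exactly as `~ESS();` is above: `std::unique_ptr` needs the complete impl type at the point of destruction.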
diff --git a/isaac_ros_ess/gxf/ess/3dv/src/ESS.cpp b/isaac_ros_ess/gxf/ess/3dv/src/ESS.cpp
deleted file mode 100644
index 151545c..0000000
--- a/isaac_ros_ess/gxf/ess/3dv/src/ESS.cpp
+++ /dev/null
@@ -1,230 +0,0 @@
-// SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
-// Copyright (c) 2022-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-//
-// SPDX-License-Identifier: Apache-2.0
-
-#include
-
-#include
-
-#ifdef NVBENCH_ENABLE
-#include
-#endif
-
-#include
-#include
-#include
-#include
-#include
-
-#include
-
-namespace cvcore { namespace ess {
-
-/* Default parameters used for the model provided */
-const ImagePreProcessingParams defaultPreProcessorParams = {
- BGR_U8, /**< Input type of image.*/
- {-128, -128, -128}, /**< Mean value */
- {1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0}, /**< Normalization factor */
- {0.5, 0.5, 0.5}}; /**< Standard deviation */
-
-const ModelInputParams defaultModelInputParams = {1, /**< Max batch size supported */
- 960, /**< Input layer width of the network */
- 576, /**< Input layer height of the network */
- RGB_U8}; /**< Input image type the network is trained with */
-
-const ModelInferenceParams defaultInferenceParams = {
- "models/ess.engine", /**< Engine file path */
- {"input_left", "input_right"}, /**< Input layer name of the model */
- {"output_left"}}; /**< Output layer name of the network */
-
-const ESSPreProcessorParams defaultESSPreProcessorParams = {
- PreProcessType::RESIZE}; // Resize the input image to the network input dimensions
-
-struct ESS::ESSImpl
-{
- inferencer::TensorRTInferenceParams tensorrtParams;
- inferencer::InferenceBackendClient client;
-
- // Model includes 2 input layers and 1 output layer
- Tensor m_outputDevice;
- Tensor m_inputLeftPlanar;
- Tensor m_inputRightPlanar;
-
- // Preprocess and PostProcess Objects
- std::unique_ptr<ESSPreProcessor> m_preprocess;
- std::unique_ptr<ESSPostProcessor> m_postprocess;
-
- // Max batch Size supported
- size_t m_maxBatchSize;
-
- std::string m_leftInputLayerName, m_rightInputLayerName;
-
- size_t m_networkInputWidth, m_networkInputHeight;
-
- ESSImpl(const ImagePreProcessingParams &imgParams, const ModelInputParams &modelParams,
- const ModelInferenceParams &modelInferParams, const ESSPreProcessorParams &essParams)
- : m_maxBatchSize(modelParams.maxBatchSize)
- {
- if (modelInferParams.inputLayers.size() != 2 || modelInferParams.outputLayers.size() != 1 ||
- modelParams.maxBatchSize <= 0)
- {
- throw ErrorCode::INVALID_ARGUMENT;
- }
-
- // Initialize Preprocessor and postprocessor
- m_preprocess.reset(new ESSPreProcessor(imgParams, modelParams, essParams));
- m_postprocess.reset(new ESSPostProcessor(modelParams));
-
- // Initialize TRT backend
- tensorrtParams = {inferencer::TRTInferenceType::TRT_ENGINE,
- nullptr,
- modelInferParams.engineFilePath,
- modelParams.maxBatchSize,
- modelInferParams.inputLayers,
- modelInferParams.outputLayers};
-
- std::error_code err =
- inferencer::InferenceBackendFactory::CreateTensorRTInferenceBackendClient(client, tensorrtParams);
-
- if (err != cvcore::make_error_code(cvcore::ErrorCode::SUCCESS))
- {
- throw err;
- }
-
- inferencer::ModelMetaData modelInfo = client->getModelMetaData();
-
- m_networkInputHeight = modelParams.inputLayerHeight;
- m_networkInputWidth = modelParams.inputLayerWidth;
- m_inputLeftPlanar = {m_networkInputWidth, m_networkInputHeight, modelParams.maxBatchSize, false};
- m_inputRightPlanar = {m_networkInputWidth, m_networkInputHeight, modelParams.maxBatchSize, false};
- size_t outputWidth = modelInfo.outputLayers[modelInferParams.outputLayers[0]].shape[2];
- size_t outputHeight = modelInfo.outputLayers[modelInferParams.outputLayers[0]].shape[1];
- m_outputDevice = {outputWidth, outputHeight, modelParams.maxBatchSize, false};
- m_leftInputLayerName = modelInferParams.inputLayers[0];
- m_rightInputLayerName = modelInferParams.inputLayers[1];
- CHECK_ERROR_CODE(client->setInput(m_inputLeftPlanar, modelInferParams.inputLayers[0]));
- CHECK_ERROR_CODE(client->setInput(m_inputRightPlanar, modelInferParams.inputLayers[1]));
- CHECK_ERROR_CODE(client->setOutput(m_outputDevice, modelInferParams.outputLayers[0]));
- }
-
- ~ESSImpl()
- {
- CHECK_ERROR_CODE(client->unregister());
- inferencer::InferenceBackendFactory::DestroyTensorRTInferenceBackendClient(client);
- }
-
- void execute(Tensor &output, const Tensor &inputLeft,
- const Tensor &inputRight, cudaStream_t stream)
- {
- size_t batchSize = inputLeft.getDepth();
- if (inputLeft.isCPU() || inputRight.isCPU())
- {
- throw std::invalid_argument("ESS : Input type should be GPU buffer");
- }
-
- if (inputLeft.getDepth() > m_maxBatchSize || inputRight.getDepth() > m_maxBatchSize)
- {
- throw std::invalid_argument("ESS : Input batch size cannot exceed max batch size\n");
- }
-
- if (inputLeft.getDepth() != inputRight.getDepth() || output.getDepth() != inputLeft.getDepth())
- {
- throw std::invalid_argument("ESS : Batch size of input and output images don't match!\n");
- }
- m_preprocess->execute(m_inputLeftPlanar, m_inputRightPlanar, inputLeft, inputRight, stream);
-
-#ifdef NVBENCH_ENABLE
- size_t inputWidth = inputLeft.getWidth();
- size_t inputHeight = inputLeft.getHeight();
- const std::string testName = "ESSInference_batch" + std::to_string(batchSize) + "_" +
- std::to_string(inputWidth) + "x" + std::to_string(inputHeight) + "x" +
- std::to_string(inputLeft.getChannelCount()) + "_GPU";
- nv::bench::Timer timerFunc = nv::bench::GPU(testName.c_str(), nv::bench::Flag::DEFAULT, stream);
-#endif
-
- CHECK_ERROR_CODE(client->setCudaStream(stream));
- CHECK_ERROR_CODE(client->infer(batchSize));
- // PostProcess
- m_postprocess->execute(output, m_outputDevice, stream);
- }
-
- void execute(Tensor &output, const Tensor &inputLeft,
- const Tensor &inputRight, cudaStream_t stream)
- {
- size_t batchSize = inputLeft.getDepth();
- if (inputLeft.isCPU() || inputRight.isCPU())
- {
- throw std::invalid_argument("ESS : Input type should be GPU buffer");
- }
-
- if (inputLeft.getDepth() > m_maxBatchSize || inputRight.getDepth() > m_maxBatchSize)
- {
- throw std::invalid_argument("ESS : Input batch size cannot exceed max batch size\n");
- }
-
- if (inputLeft.getDepth() != inputRight.getDepth() || output.getDepth() != inputLeft.getDepth())
- {
- throw std::invalid_argument("ESS : Batch size of input and output images don't match!\n");
- }
-
- if (inputLeft.getWidth() != m_networkInputWidth || inputLeft.getHeight() != m_networkInputHeight)
- {
- throw std::invalid_argument("ESS : Left preprocessed input does not match network input dimensions!\n");
- }
-
- if (inputRight.getWidth() != m_networkInputWidth || inputRight.getHeight() != m_networkInputHeight)
- {
- throw std::invalid_argument("ESS : Right preprocessed input does not match network input dimensions!\n");
- }
-#ifdef NVBENCH_ENABLE
- size_t inputWidth = inputLeft.getWidth();
- size_t inputHeight = inputLeft.getHeight();
- const std::string testName = "ESSInference_batch" + std::to_string(batchSize) + "_" +
- std::to_string(inputWidth) + "x" + std::to_string(inputHeight) + "x" +
- std::to_string(inputLeft.getChannelCount()) + "_GPU";
- nv::bench::Timer timerFunc = nv::bench::GPU(testName.c_str(), nv::bench::Flag::DEFAULT, stream);
-#endif
- // inference
- CHECK_ERROR_CODE(client->setInput(inputLeft, m_leftInputLayerName));
- CHECK_ERROR_CODE(client->setInput(inputRight, m_rightInputLayerName));
-
- CHECK_ERROR_CODE(client->setCudaStream(stream));
- CHECK_ERROR_CODE(client->infer(batchSize));
- // PostProcess
- m_postprocess->execute(output, m_outputDevice, stream);
- }
-};
-
-ESS::ESS(const ImagePreProcessingParams &imgParams, const ModelInputParams &modelParams,
- const ModelInferenceParams &modelInferParams, const ESSPreProcessorParams &essParams)
- : m_pImpl(new ESSImpl(imgParams, modelParams, modelInferParams, essParams))
-{
-}
-
-ESS::~ESS() {}
-
-void ESS::execute(Tensor &output, const Tensor &inputLeft,
- const Tensor &inputRight, cudaStream_t stream)
-{
- m_pImpl->execute(output, inputLeft, inputRight, stream);
-}
-
-void ESS::execute(Tensor &output, const Tensor &inputLeft,
- const Tensor &inputRight, cudaStream_t stream)
-{
- m_pImpl->execute(output, inputLeft, inputRight, stream);
-}
-}} // namespace cvcore::ess
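Both `ESSImpl::execute` overloads in the deleted file run the same guard checks before touching the inference backend: batch sizes must agree across inputs and output, and must not exceed the engine's max batch size. That validation logic can be sketched standalone (the helper name is hypothetical, and the GPU-buffer check is omitted since it depends on the `Tensor` type):

```cpp
#include <cstddef>
#include <stdexcept>

// Batch-size guards mirroring ESSImpl::execute: left/right/output batch
// sizes must match, and none may exceed the engine's max batch size.
void validateBatch(std::size_t leftBatch, std::size_t rightBatch,
                   std::size_t outputBatch, std::size_t maxBatchSize) {
    if (leftBatch > maxBatchSize || rightBatch > maxBatchSize) {
        throw std::invalid_argument(
            "ESS : Input batch size cannot exceed max batch size");
    }
    if (leftBatch != rightBatch || outputBatch != leftBatch) {
        throw std::invalid_argument(
            "ESS : Batch size of input and output images don't match!");
    }
}
```

Failing fast here keeps invalid shapes from reaching `client->infer(batchSize)`, where the error would surface as a much less legible TensorRT failure.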
diff --git a/isaac_ros_ess/gxf/ess/3dv/src/ESSPostProcess.cpp b/isaac_ros_ess/gxf/ess/3dv/src/ESSPostProcess.cpp
deleted file mode 100644
index ed823a0..0000000
--- a/isaac_ros_ess/gxf/ess/3dv/src/ESSPostProcess.cpp
+++ /dev/null
@@ -1,129 +0,0 @@
-// SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
-// Copyright (c) 2022-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-//
-// SPDX-License-Identifier: Apache-2.0
-
-#include
-
-#include
-
-#ifdef NVBENCH_ENABLE
-#include
-#endif
-
-#include
-#include
-#include
-#include
-
-namespace cvcore { namespace ess {
-
-struct ESSPostProcessor::ESSPostProcessorImpl
-{
- ESSPostProcessorImpl(const ModelInputParams &modelParams)
- : m_maxBatchSize(modelParams.maxBatchSize)
- , m_networkWidth(modelParams.inputLayerWidth)
- , m_networkHeight(modelParams.inputLayerHeight)
- {
- m_scaledDisparityDevice = {m_networkWidth, m_networkHeight, m_maxBatchSize, false};
- m_outputDisparityDevice = {m_networkWidth, m_networkHeight, m_maxBatchSize, false};
- }
-
- void resizeBuffers(std::size_t width, std::size_t height)
- {
- if (m_outputDisparityDevice.getWidth() == width && m_outputDisparityDevice.getHeight() == height)
- {
- return;
- }
- m_outputDisparityDevice = {width, height, m_maxBatchSize, false};
- }
-
- void process(Tensor &outputDisparity, const Tensor &inputDisparity,
- cudaStream_t stream)
- {
- if (inputDisparity.isCPU())
- {
- throw std::invalid_argument("ESSPostProcessor : Input Tensor must be GPU Tensor.");
- }
-
- if (inputDisparity.getWidth() != m_networkWidth || inputDisparity.getHeight() != m_networkHeight)
- {
- throw std::invalid_argument(
- "ESSPostProcessor : Input Tensor dimension "
- "does not match network input "
- "requirement");
- }
-
- if (inputDisparity.getDepth() != outputDisparity.getDepth())
- {
- throw std::invalid_argument("ESSPostProcessor : Input/Output Tensor batchsize mismatch.");
- }
-
- const size_t batchSize = inputDisparity.getDepth();
- if (batchSize > m_maxBatchSize)
- {
- throw std::invalid_argument("ESSPostProcessor : Input batchsize exceeds Max Batch size.");
- }
- const size_t outputWidth = outputDisparity.getWidth();
- const size_t outputHeight = outputDisparity.getHeight();
-
- // Disparity map values are scaled based on the outputWidth/networkInputWidth ratio
- const float scale = static_cast<float>(outputWidth) / m_networkWidth;
- Tensor scaledDisparity(m_scaledDisparityDevice.getWidth(), m_scaledDisparityDevice.getHeight(),
- batchSize, m_scaledDisparityDevice.getData(), false);
-
- tensor_ops::Normalize(scaledDisparity, inputDisparity, scale, 0, stream);
- if (!outputDisparity.isCPU())
- {
- tensor_ops::Resize(outputDisparity, m_scaledDisparityDevice, stream);
- }
- else
- {
- resizeBuffers(outputWidth, outputHeight);
- Tensor outputDisparityDevice(m_outputDisparityDevice.getWidth(),
- m_outputDisparityDevice.getHeight(), batchSize,
- m_outputDisparityDevice.getData(), false);
- tensor_ops::Resize(outputDisparityDevice, m_scaledDisparityDevice, stream);
- cvcore::Copy(outputDisparity, outputDisparityDevice, stream);
- CHECK_ERROR(cudaStreamSynchronize(stream));
- }
- }
-
- size_t m_maxBatchSize;
- size_t m_networkWidth, m_networkHeight;
- Tensor m_scaledDisparityDevice;
- Tensor m_outputDisparityDevice;
-};
-
-void ESSPostProcessor::execute(Tensor &outputDisparity, const Tensor &inputDisparity,
- cudaStream_t stream)
-{
-#ifdef NVBENCH_ENABLE
- const std::string testName = "ESSPostProcessor_batch" + std::to_string(outputDisparity.getDepth()) + "_" +
- std::to_string(outputDisparity.getWidth()) + "x" +
- std::to_string(outputDisparity.getHeight()) + "_GPU";
- nv::bench::Timer timerFunc = nv::bench::GPU(testName.c_str(), nv::bench::Flag::DEFAULT, stream);
-#endif
- m_pImpl->process(outputDisparity, inputDisparity, stream);
-}
-
-ESSPostProcessor::ESSPostProcessor(const ModelInputParams &modelInputParams)
- : m_pImpl(new ESSPostProcessor::ESSPostProcessorImpl(modelInputParams))
-{
-}
-
-ESSPostProcessor::~ESSPostProcessor() {}
-
-}} // namespace cvcore::ess
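The post-processor above multiplies disparity values by `outputWidth / networkInputWidth` before resizing, because disparity is a horizontal pixel offset expressed in the coordinates of the image it belongs to: upsampling the map without rescaling its values would leave disparities in network-resolution pixels. A scalar sketch of that rescaling (helper name hypothetical):

```cpp
#include <cstddef>

// When a disparity map computed at network input resolution is resized back
// to the original resolution, each disparity value must be scaled by the
// horizontal resize factor, since disparity is measured in pixels.
float rescaleDisparity(float disparity, std::size_t outputWidth,
                       std::size_t networkInputWidth) {
    const float scale = static_cast<float>(outputWidth) / networkInputWidth;
    return disparity * scale;
}
```

For example, a 960-wide network output resized to a 1920-wide image doubles every disparity value.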
diff --git a/isaac_ros_ess/gxf/ess/3dv/src/ESSPreProcess.cpp b/isaac_ros_ess/gxf/ess/3dv/src/ESSPreProcess.cpp
deleted file mode 100644
index 001c528..0000000
--- a/isaac_ros_ess/gxf/ess/3dv/src/ESSPreProcess.cpp
+++ /dev/null
@@ -1,190 +0,0 @@
-// SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
-// Copyright (c) 2022-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-//
-// SPDX-License-Identifier: Apache-2.0
-
-#include
-
-#include
-
-#ifdef NVBENCH_ENABLE
-#include
-#endif
-
-#include
-#include
-#include
-#include
-
-namespace cvcore { namespace ess {
-
-struct ESSPreProcessor::ESSPreProcessorImpl
-{
- size_t m_maxBatchSize;
- size_t m_outputWidth;
- size_t m_outputHeight;
- PreProcessType m_processType;
- ImagePreProcessingParams m_preProcessorParams;
- Tensor m_resizedDeviceLeftInput;
- Tensor m_resizedDeviceRightInput;
- Tensor m_normalizedDeviceLeftInput;
- Tensor m_normalizedDeviceRightInput;
- bool m_swapRB;
-
- ESSPreProcessorImpl(const ImagePreProcessingParams &imgParams, const ModelInputParams &modelParams,
- const ESSPreProcessorParams &essParams)
- : m_maxBatchSize(modelParams.maxBatchSize)
- , m_outputHeight(modelParams.inputLayerHeight)
- , m_outputWidth(modelParams.inputLayerWidth)
- , m_processType(essParams.preProcessType)
- , m_preProcessorParams(imgParams)
- {
- if (imgParams.imgType != BGR_U8 && imgParams.imgType != RGB_U8)
- {
- throw std::invalid_argument("ESSPreProcessor : Only image types RGB_U8/BGR_U8 are supported\n");
- }
- m_resizedDeviceLeftInput = {modelParams.inputLayerWidth, modelParams.inputLayerHeight, modelParams.maxBatchSize,
- false};
- m_resizedDeviceRightInput = {modelParams.inputLayerWidth, modelParams.inputLayerHeight,
- modelParams.maxBatchSize, false};
- m_normalizedDeviceLeftInput = {modelParams.inputLayerWidth, modelParams.inputLayerHeight,
- modelParams.maxBatchSize, false};
- m_normalizedDeviceRightInput = {modelParams.inputLayerWidth, modelParams.inputLayerHeight,
- modelParams.maxBatchSize, false};
- m_swapRB = imgParams.imgType != modelParams.modelInputType;
- }
-
- void process(Tensor &outputLeft, Tensor &outputRight,
- const Tensor &inputLeft, const Tensor &inputRight, cudaStream_t stream)
- {
- if (inputLeft.isCPU() || inputRight.isCPU() || outputLeft.isCPU() || outputRight.isCPU())
- {
- throw std::invalid_argument("ESSPreProcessor : Input/Output Tensor must be GPU Tensor.");
- }
-
- if (outputLeft.getWidth() != m_outputWidth || outputLeft.getHeight() != m_outputHeight ||
- outputRight.getWidth() != m_outputWidth || outputRight.getHeight() != m_outputHeight)
- {
- throw std::invalid_argument(
- "ESSPreProcessor : Output Tensor dimension does not match network input requirement");
- }
-
- if (inputLeft.getWidth() != inputRight.getWidth() || inputLeft.getHeight() != inputRight.getHeight())
- {
- throw std::invalid_argument("ESSPreProcessor : Input tensor dimensions don't match");
- }
-
- if (outputLeft.getDepth() != inputLeft.getDepth() || outputRight.getDepth() != inputRight.getDepth() ||
- inputLeft.getDepth() != inputRight.getDepth())
- {
- throw std::invalid_argument("ESSPreProcessor : Input/Output Tensor batchsize mismatch.");
- }
-
- if (outputLeft.getDepth() > m_maxBatchSize)
- {
- throw std::invalid_argument("ESSPreProcessor : Input/Output batchsize exceeds max batch size.");
- }
-
- const size_t batchSize = inputLeft.getDepth();
- const size_t inputWidth = inputLeft.getWidth();
- const size_t inputHeight = inputLeft.getHeight();
-
- if (m_processType == PreProcessType::RESIZE)
- {
- tensor_ops::Resize(m_resizedDeviceLeftInput, inputLeft, stream);
- tensor_ops::Resize(m_resizedDeviceRightInput, inputRight, stream);
- }
- else
- {
- const float centerX = inputWidth / 2;
- const float centerY = inputHeight / 2;
- const float offsetX = m_outputWidth / 2;
- const float offsetY = m_outputHeight / 2;
- BBox srcCrop, dstCrop;
- dstCrop = {0, 0, static_cast<int>(m_outputWidth - 1), static_cast<int>(m_outputHeight - 1)};
- srcCrop.xmin = std::max(0, static_cast<int>(centerX - offsetX));
- srcCrop.ymin = std::max(0, static_cast<int>(centerY - offsetY));
- srcCrop.xmax = std::min(static_cast<int>(m_outputWidth - 1), static_cast<int>(centerX + offsetX));
- srcCrop.ymax = std::min(static_cast<int>(m_outputHeight - 1), static_cast<int>(centerY + offsetY));
- for (size_t i = 0; i < batchSize; i++)
- {
- Tensor inputLeftCrop(
- inputWidth, inputHeight,
- const_cast(inputLeft.getData()) + i * inputLeft.getStride(TensorDimension::DEPTH),
- false);
- Tensor outputLeftCrop(
- m_outputWidth, m_outputHeight,
- m_resizedDeviceLeftInput.getData() + i * m_resizedDeviceLeftInput.getStride(TensorDimension::DEPTH),
- false);
- Tensor inputRightCrop(
- inputWidth, inputHeight,
- const_cast(inputRight.getData()) + i * inputRight.getStride(TensorDimension::DEPTH),
- false);
- Tensor outputRightCrop(m_outputWidth, m_outputHeight,
- m_resizedDeviceRightInput.getData() +
- i * m_resizedDeviceRightInput.getStride(TensorDimension::DEPTH),
- false);
- tensor_ops::CropAndResize(outputLeftCrop, inputLeftCrop, dstCrop, srcCrop,
- tensor_ops::InterpolationType::INTERP_LINEAR, stream);
- tensor_ops::CropAndResize(outputRightCrop, inputRightCrop, dstCrop, srcCrop,
- tensor_ops::InterpolationType::INTERP_LINEAR, stream);
- }
- }
-
- if (m_swapRB)
- {
- tensor_ops::ConvertColorFormat(m_resizedDeviceLeftInput, m_resizedDeviceLeftInput, tensor_ops::BGR2RGB,
- stream);
- tensor_ops::ConvertColorFormat(m_resizedDeviceRightInput, m_resizedDeviceRightInput, tensor_ops::BGR2RGB,
- stream);
- }
-
- float scale[3];
- for (size_t i = 0; i < 3; i++)
- {
- scale[i] = m_preProcessorParams.normalization[i] / m_preProcessorParams.stdDev[i];
- }
-
- tensor_ops::Normalize(m_normalizedDeviceLeftInput, m_resizedDeviceLeftInput, scale,
- m_preProcessorParams.pixelMean, stream);
- tensor_ops::Normalize(m_normalizedDeviceRightInput, m_resizedDeviceRightInput, scale,
- m_preProcessorParams.pixelMean, stream);
- tensor_ops::InterleavedToPlanar(outputLeft, m_normalizedDeviceLeftInput, stream);
- tensor_ops::InterleavedToPlanar(outputRight, m_normalizedDeviceRightInput, stream);
- }
-};
-
-void ESSPreProcessor::execute(Tensor &outputLeft, Tensor &outputRight,
- const Tensor &inputLeft, const Tensor &inputRight,
- cudaStream_t stream)
-{
-#ifdef NVBENCH_ENABLE
- const std::string testName = "ESSPreProcessor_batch" + std::to_string(inputLeft.getDepth()) + "_" +
- std::to_string(inputLeft.getWidth()) + "x" + std::to_string(inputLeft.getHeight()) +
- "_GPU";
- nv::bench::Timer timerFunc = nv::bench::GPU(testName.c_str(), nv::bench::Flag::DEFAULT, stream);
-#endif
- m_pImpl->process(outputLeft, outputRight, inputLeft, inputRight, stream);
-}
-
-ESSPreProcessor::ESSPreProcessor(const ImagePreProcessingParams &preProcessorParams,
- const ModelInputParams &modelInputParams, const ESSPreProcessorParams &essParams)
- : m_pImpl(new ESSPreProcessor::ESSPreProcessorImpl(preProcessorParams, modelInputParams, essParams))
-{
-}
-
-ESSPreProcessor::~ESSPreProcessor() {}
-
-}} // namespace cvcore::ess
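With the default parameters from `ESS.cpp` (mean `-128`, normalization `1/255`, stdDev `0.5`), the pre-processor's `scale[i] = normalization[i] / stdDev[i]` followed by `Normalize` maps 8-bit pixels to roughly [-1, 1]. A per-pixel sketch, assuming `tensor_ops::Normalize` applies the offset before the scale (the helper name is hypothetical):

```cpp
// Per-channel pixel normalization as performed by ESSPreProcessor:
// (pixel + mean) * (normalization / stdDev).
// With mean = -128, normalization = 1/255, stdDev = 0.5, u8 pixels in
// [0, 255] map to approximately [-1.0, 1.0].
float normalizePixel(float pixel, float mean, float normalization,
                     float stdDev) {
    const float scale = normalization / stdDev;
    return (pixel + mean) * scale;
}
```

Folding the normalization factor and standard deviation into a single `scale` ahead of time, as the deleted code does, means the per-pixel kernel only needs one add and one multiply per channel.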
diff --git a/isaac_ros_ess/gxf/ess/CMakeLists.txt b/isaac_ros_ess/gxf/ess/CMakeLists.txt
index b9a8537..cb8ddfc 100644
--- a/isaac_ros_ess/gxf/ess/CMakeLists.txt
+++ b/isaac_ros_ess/gxf/ess/CMakeLists.txt
@@ -23,7 +23,8 @@ endif()
# Dependencies
find_package(CUDAToolkit)
-include(YamlCpp)
+find_package(yaml-cpp)
+find_package(isaac_ros_image_proc REQUIRED)
find_package(GXF ${ISAAC_ROS_GXF_VERSION} MODULE REQUIRED
COMPONENTS
core
@@ -33,118 +34,51 @@ find_package(GXF ${ISAAC_ROS_GXF_VERSION} MODULE REQUIRED
)
find_package(TENSORRT)
+set(CMAKE_INCLUDE_CURRENT_DIR TRUE)
+
# Create extension
add_library(gxf_cvcore_ess SHARED
- TensorOps.cpp
- ESS.cpp
- extensions/ess/ESS.hpp
+ extensions/ess/ess.cpp
+ extensions/ess/components/ess_inference.cpp
+ extensions/ess/components/ess_inference.hpp
- extensions/tensor_ops/CameraModel.cpp
- extensions/tensor_ops/CameraModel.hpp
- extensions/tensor_ops/ConvertColorFormat.cpp
- extensions/tensor_ops/ConvertColorFormat.hpp
- extensions/tensor_ops/CropAndResize.cpp
- extensions/tensor_ops/CropAndResize.hpp
- extensions/tensor_ops/Frame3D.cpp
- extensions/tensor_ops/Frame3D.hpp
- extensions/tensor_ops/ImageAdapter.cpp
- extensions/tensor_ops/ImageAdapter.hpp
- extensions/tensor_ops/ImageUtils.cpp
- extensions/tensor_ops/ImageUtils.hpp
- extensions/tensor_ops/InterleavedToPlanar.cpp
- extensions/tensor_ops/InterleavedToPlanar.hpp
- extensions/tensor_ops/Normalize.cpp
- extensions/tensor_ops/Normalize.hpp
- extensions/tensor_ops/Reshape.cpp
- extensions/tensor_ops/Reshape.hpp
- extensions/tensor_ops/Resize.cpp
- extensions/tensor_ops/Resize.hpp
- extensions/tensor_ops/TensorOperator.cpp
- extensions/tensor_ops/TensorOperator.hpp
- extensions/tensor_ops/TensorStream.cpp
- extensions/tensor_ops/TensorStream.hpp
- extensions/tensor_ops/Undistort.cpp
- extensions/tensor_ops/Undistort.hpp
- extensions/tensor_ops/detail/ImageAdapterTensorImpl.cpp
- extensions/tensor_ops/detail/ImageAdapterTensorImpl.hpp
- extensions/tensor_ops/detail/ImageAdapterVideoBufferImpl.cpp
- extensions/tensor_ops/detail/ImageAdapterVideoBufferImpl.hpp
+ extensions/ess/inference/ESS.cpp
+ extensions/ess/inference/ESS.h
+ extensions/ess/inference/ESSPostProcess.cpp
+ extensions/ess/inference/ESSPreProcess.cpp
)
+target_include_directories(gxf_cvcore_ess PRIVATE ${isaac_ros_image_proc_INCLUDE_DIRS})
-set(CMAKE_INCLUDE_CURRENT_DIR TRUE)
-target_include_directories(gxf_cvcore_ess PRIVATE
- ${CMAKE_CURRENT_SOURCE_DIR}/cvcore/include
- ${CMAKE_CURRENT_SOURCE_DIR}/3dv/include)
-
-add_library(cvcore_ess STATIC
- # Tensorops
- cvcore/src/tensor_ops/ArithmeticOperations.cpp
- cvcore/src/tensor_ops/BBoxUtils.cpp
- cvcore/src/tensor_ops/ColorConversions.cpp
- cvcore/src/tensor_ops/DBScan.cpp
- cvcore/src/tensor_ops/Errors.cpp
- cvcore/src/tensor_ops/Filters.cpp
- cvcore/src/tensor_ops/Filters.h
- cvcore/src/tensor_ops/FusedOperations.cpp
- cvcore/src/tensor_ops/GeometryTransforms.cpp
- cvcore/src/tensor_ops/IImageWarp.cpp
- cvcore/src/tensor_ops/NppUtils.cpp
- cvcore/src/tensor_ops/NppUtils.h
- cvcore/src/tensor_ops/OneEuroFilter.cpp
- cvcore/src/tensor_ops/TensorOperators.cpp
-
- # Core
- cvcore/src/core/cvcore/Array.cpp
- cvcore/src/core/cvcore/Dummy.cu
- cvcore/src/core/cvcore/MathTypes.cpp
- cvcore/src/core/cvcore/Tensor.cpp
- cvcore/src/core/utility/CVError.cpp
- cvcore/src/core/utility/Instrumentation.cpp
- cvcore/src/core/utility/Memory.cpp
- cvcore/src/core/utility/ProfileUtils.cpp
-
+add_library(corelib STATIC
# Inferencer (ESS only)
- cvcore/src/inferencer/tensorrt/TensorRTInferencer.cpp
- cvcore/src/inferencer/Inferencer.cpp
- cvcore/src/inferencer/Errors.cpp
- cvcore/src/inferencer/tensorrt/TensorRTUtils.h
- cvcore/src/inferencer/tensorrt/TensorRTUtils.cpp
- cvcore/src/inferencer/tensorrt/TensorRTInferencer.h
-
- # TRTBackend
- cvcore/src/trtbackend/TRTBackend.cpp
+ gems/dnn_inferencer/inferencer/TensorRTInferencer.cpp
+ gems/dnn_inferencer/inferencer/TensorRTUtils.h
+ gems/dnn_inferencer/inferencer/TensorRTUtils.cpp
+ gems/dnn_inferencer/inferencer/TensorRTInferencer.h
+ gems/dnn_inferencer/inferencer/Inferencer.cpp
+ gems/dnn_inferencer/inferencer/Errors.cpp
)
+target_include_directories(corelib PRIVATE ${isaac_ros_image_proc_INCLUDE_DIRS})
-target_include_directories(cvcore_ess PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/cvcore/include)
-target_compile_options(cvcore_ess PUBLIC -fPIC)
-target_link_libraries(cvcore_ess PUBLIC
- CUDA::cudart
- CUDA::nppc
- CUDA::nppial
- CUDA::nppicc
- CUDA::nppidei
- CUDA::nppif
- CUDA::nppig
- CUDA::nppisu
- TENSORRT::nvinfer
-)
-
-add_library(ess_3dv STATIC
- 3dv/src/ESS.cpp
- 3dv/src/ESSPostProcess.cpp
- 3dv/src/ESSPreProcess.cpp
-)
-target_include_directories(ess_3dv PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/3dv/include)
-target_compile_options(ess_3dv PUBLIC -fPIC)
-target_link_libraries(ess_3dv PUBLIC
- cvcore_ess
-)
-target_link_libraries(gxf_cvcore_ess
- PUBLIC
+target_link_libraries(corelib PUBLIC
+ GXF::core
GXF::cuda
GXF::multimedia
+ CUDA::cudart
+ CUDA::nppc
+ CUDA::nppial
+ CUDA::nppicc
+ CUDA::nppidei
+ CUDA::nppif
+ CUDA::nppig
+ CUDA::nppisu
+ TENSORRT::nvinfer
yaml-cpp
- PRIVATE
- cvcore_ess
- ess_3dv
)
+
+target_compile_options(gxf_cvcore_ess PUBLIC -fPIC)
+
+target_link_libraries(gxf_cvcore_ess
+ corelib
+ isaac_ros_image_proc::gxf_tensorops
+)
\ No newline at end of file
diff --git a/isaac_ros_ess/gxf/ess/ESS.cpp b/isaac_ros_ess/gxf/ess/ESS.cpp
deleted file mode 100644
index 593e34c..0000000
--- a/isaac_ros_ess/gxf/ess/ESS.cpp
+++ /dev/null
@@ -1,325 +0,0 @@
-// SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
-// Copyright (c) 2022-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-//
-// SPDX-License-Identifier: Apache-2.0
-
-#include "extensions/ess/ESS.hpp"
-
-#include
-#include "gxf/cuda/cuda_stream_id.hpp"
-#include "gxf/cuda/cuda_stream_pool.hpp"
-#include "gxf/multimedia/video.hpp"
-#include "gxf/std/tensor.hpp"
-#include "gxf/std/timestamp.hpp"
-
-namespace nvidia {
-namespace cvcore {
-
-namespace detail {
-
-// Function to bind a cuda stream with cid into downstream message
-gxf_result_t BindCudaStream(gxf::Entity& message, gxf_uid_t cid) {
- if (cid == kNullUid) {
- GXF_LOG_ERROR("stream_cid is null");
- return GXF_FAILURE;
- }
- auto output_stream_id = message.add<gxf::CudaStreamId>("stream");
- if (!output_stream_id) {
- GXF_LOG_ERROR("failed to add cudastreamid.");
- return GXF_FAILURE;
- }
- output_stream_id.value()->stream_cid = cid;
- return GXF_SUCCESS;
-}
-
-// Function to record a new cuda event
-gxf_result_t RecordCudaEvent(gxf::Entity& message, gxf::Handle<gxf::CudaStream>& stream) {
- // Create a new event
- cudaEvent_t cuda_event;
- cudaEventCreateWithFlags(&cuda_event, 0);
- gxf::CudaEvent event;
- auto ret = event.initWithEvent(cuda_event, stream->dev_id(), [](auto) {});
- if (!ret) {
- GXF_LOG_ERROR("failed to init cuda event");
- return GXF_FAILURE;
- }
- // Record the event
- // Can define []() { GXF_LOG_DEBUG("tensorops event synced"); } as callback func for debug purpose
- ret = stream->record(event.event().value(),
- [event = cuda_event, entity = message.clone().value()](auto) { cudaEventDestroy(event); });
- if (!ret) {
- GXF_LOG_ERROR("record event failed");
- return ret.error();
- }
- return GXF_SUCCESS;
-}
-
-} // namespace detail
-
-gxf_result_t ESS::registerInterface(gxf::Registrar* registrar) {
- gxf::Expected<void> result;
-
- result &= registrar->parameter(left_image_name_, "left_image_name", "The name of the left image to be received", "",
- gxf::Registrar::NoDefaultParameter(), GXF_PARAMETER_FLAGS_OPTIONAL);
- result &= registrar->parameter(right_image_name_, "right_image_name", "The name of the right image to be received",
- "", gxf::Registrar::NoDefaultParameter(), GXF_PARAMETER_FLAGS_OPTIONAL);
- result &= registrar->parameter(output_name_, "output_name", "The name of the tensor to be passed to next node", "");
- result &= registrar->parameter(stream_pool_, "stream_pool", "The Cuda Stream pool for allocate cuda stream", "",
- gxf::Registrar::NoDefaultParameter(), GXF_PARAMETER_FLAGS_OPTIONAL);
- result &= registrar->parameter(pool_, "pool", "Memory pool for allocating output data", "");
- result &= registrar->parameter(left_image_receiver_, "left_image_receiver", "Receiver to get the left image", "");
- result &= registrar->parameter(right_image_receiver_, "right_image_receiver", "Receiver to get the right image", "");
- result &= registrar->parameter(output_transmitter_, "output_transmitter", "Transmitter to send the data", "");
- result &= registrar->parameter(output_adapter_, "output_adapter", "Adapter to send output data", "");
-
- result &= registrar->parameter(image_type_, "image_type", "Type of input image: BGR_U8 or RGB_U8", "");
- result &= registrar->parameter(pixel_mean_, "pixel_mean", "The mean for each channel", "");
- result &= registrar->parameter(normalization_, "normalization", "The normalization for each channel", "");
- result &=
- registrar->parameter(standard_deviation_, "standard_deviation", "The standard deviation for each channel", "");
-
- result &= registrar->parameter(max_batch_size_, "max_batch_size", "The max batch size to run inference on", "");
- result &= registrar->parameter(input_layer_width_, "input_layer_width", "The model input layer width", "");
- result &= registrar->parameter(input_layer_height_, "input_layer_height", "The model input layer height", "");
- result &= registrar->parameter(model_input_type_, "model_input_type", "The model input image: BGR_U8 or RGB_U8", "");
-
- result &= registrar->parameter(engine_file_path_, "engine_file_path", "The path to the serialized TRT engine", "");
- result &= registrar->parameter(input_layers_name_, "input_layers_name", "The names of the input layers", "");
- result &= registrar->parameter(output_layers_name_, "output_layers_name", "The names of the output layers", "");
-
- result &= registrar->parameter(preprocess_type_, "preprocess_type",
- "The type of ESS preprocessing: RESIZE / CENTER_CROP", "");
- result &= registrar->parameter(output_width_, "output_width", "The width of output result", "",
- gxf::Registrar::NoDefaultParameter(), GXF_PARAMETER_FLAGS_OPTIONAL);
- result &= registrar->parameter(output_height_, "output_height", "The height of output result", "",
- gxf::Registrar::NoDefaultParameter(), GXF_PARAMETER_FLAGS_OPTIONAL);
- result &= registrar->parameter(timestamp_policy_, "timestamp_policy",
- "Input channel from which to take the timestamp: 0 (left) / 1 (right)", "", 0);
-
- return gxf::ToResultCode(result);
-}
-
-gxf_result_t ESS::start() {
- // Allocate cuda stream using stream pool if necessary
- if (stream_pool_.try_get()) {
- auto stream = stream_pool_.try_get().value()->allocateStream();
- if (!stream) {
- GXF_LOG_ERROR("allocating stream failed.");
- return GXF_FAILURE;
- }
- cuda_stream_ = std::move(stream.value());
- if (!cuda_stream_->stream()) {
- GXF_LOG_ERROR("allocated stream is not initialized.");
- return GXF_FAILURE;
- }
- }
- // Setting image pre-processing params for ESS
- const auto& pixel_mean_vec = pixel_mean_.get();
- const auto& normalization_vec = normalization_.get();
- const auto& standard_deviation_vec = standard_deviation_.get();
- if (pixel_mean_vec.size() != 3 || normalization_vec.size() != 3 || standard_deviation_vec.size() != 3) {
- GXF_LOG_ERROR("Invalid preprocessing params.");
- return GXF_FAILURE;
- }
-
- if (image_type_.get() == "BGR_U8") {
- preProcessorParams.imgType = ::cvcore::BGR_U8;
- } else if (image_type_.get() == "RGB_U8") {
- preProcessorParams.imgType = ::cvcore::RGB_U8;
- } else {
- GXF_LOG_ERROR("Unsupported input image type: only BGR_U8 and RGB_U8 are supported.");
- return GXF_FAILURE;
- }
- std::copy(pixel_mean_vec.begin(), pixel_mean_vec.end(), preProcessorParams.pixelMean);
- std::copy(normalization_vec.begin(), normalization_vec.end(), preProcessorParams.normalization);
- std::copy(standard_deviation_vec.begin(), standard_deviation_vec.end(), preProcessorParams.stdDev);
-
- // Setting model input params for ESS
- modelInputParams.maxBatchSize = max_batch_size_.get();
- modelInputParams.inputLayerWidth = input_layer_width_.get();
- modelInputParams.inputLayerHeight = input_layer_height_.get();
- if (model_input_type_.get() == "BGR_U8") {
- modelInputParams.modelInputType = ::cvcore::BGR_U8;
- } else if (model_input_type_.get() == "RGB_U8") {
- modelInputParams.modelInputType = ::cvcore::RGB_U8;
- } else {
- GXF_LOG_ERROR("Unsupported model input type: only BGR_U8 and RGB_U8 are supported.");
- return GXF_FAILURE;
- }
-
- // Setting inference params for ESS
- inferenceParams = {engine_file_path_.get(), input_layers_name_.get(), output_layers_name_.get()};
-
- // Setting extra params for ESS
- if (preprocess_type_.get() == "RESIZE") {
- extraParams = {::cvcore::ess::PreProcessType::RESIZE};
- } else if (preprocess_type_.get() == "CENTER_CROP") {
- extraParams = {::cvcore::ess::PreProcessType::CENTER_CROP};
- } else {
- GXF_LOG_ERROR("Invalid preprocessing type.");
- return GXF_FAILURE;
- }
-
- // Setting ESS object with the provided params
- objESS.reset(new ::cvcore::ess::ESS(preProcessorParams, modelInputParams, inferenceParams, extraParams));
-
- return GXF_SUCCESS;
-}
-
-gxf_result_t ESS::tick() {
- // Get a CUDA stream for execution
- cudaStream_t cuda_stream = 0;
- if (!cuda_stream_.is_null()) {
- cuda_stream = cuda_stream_->stream().value();
- }
- // Receiving the data
- auto inputLeftMessage = left_image_receiver_->receive();
- if (!inputLeftMessage) {
- return GXF_FAILURE;
- }
- if (cuda_stream != 0) {
- detail::RecordCudaEvent(inputLeftMessage.value(), cuda_stream_);
- auto inputLeftStreamId = inputLeftMessage.value().get("stream");
- if (inputLeftStreamId) {
- auto inputLeftStream = gxf::Handle<gxf::CudaStream>::Create(inputLeftStreamId.value().context(),
- inputLeftStreamId.value()->stream_cid);
- // NOTE: This is an expensive call. It will halt the current CPU thread until all events
- // previously associated with the stream are cleared
- if (!inputLeftStream.value()->syncStream()) {
- GXF_LOG_ERROR("sync left stream failed.");
- return GXF_FAILURE;
- }
- }
- }
-
- auto inputRightMessage = right_image_receiver_->receive();
- if (!inputRightMessage) {
- return GXF_FAILURE;
- }
- if (cuda_stream != 0) {
- detail::RecordCudaEvent(inputRightMessage.value(), cuda_stream_);
- auto inputRightStreamId = inputRightMessage.value().get("stream");
- if (inputRightStreamId) {
- auto inputRightStream = gxf::Handle<gxf::CudaStream>::Create(inputRightStreamId.value().context(),
- inputRightStreamId.value()->stream_cid);
- // NOTE: This is an expensive call. It will halt the current CPU thread until all events
- // previously associated with the stream are cleared
- if (!inputRightStream.value()->syncStream()) {
- GXF_LOG_ERROR("sync right stream failed.");
- return GXF_FAILURE;
- }
- }
- }
-
- auto maybeLeftName = left_image_name_.try_get();
- auto leftInputBuffer = inputLeftMessage.value().get(maybeLeftName ? maybeLeftName.value().c_str() : nullptr);
- if (!leftInputBuffer) {
- return GXF_FAILURE;
- }
- auto maybeRightName = right_image_name_.try_get();
- auto rightInputBuffer = inputRightMessage.value().get(maybeRightName ? maybeRightName.value().c_str() : nullptr);
- if (!rightInputBuffer) {
- return GXF_FAILURE;
- }
- if (leftInputBuffer.value()->storage_type() != gxf::MemoryStorageType::kDevice ||
- rightInputBuffer.value()->storage_type() != gxf::MemoryStorageType::kDevice) {
- GXF_LOG_ERROR("input images must be on GPU.");
- return GXF_FAILURE;
- }
-
- const auto& leftBufferInfo = leftInputBuffer.value()->video_frame_info();
- const auto& rightBufferInfo = rightInputBuffer.value()->video_frame_info();
- if (leftBufferInfo.width != rightBufferInfo.width || leftBufferInfo.height != rightBufferInfo.height ||
- leftBufferInfo.color_format != rightBufferInfo.color_format) {
- GXF_LOG_ERROR("left/right images mismatch.");
- return GXF_FAILURE;
- }
- const size_t outputWidth = output_width_.try_get() ? output_width_.try_get().value() : leftBufferInfo.width;
- const size_t outputHeight = output_height_.try_get() ? output_height_.try_get().value() : leftBufferInfo.height;
-
- // Creating GXF Tensor or VideoBuffer to hold the data to be transmitted
- gxf::Expected<gxf::Entity> outputMessage = gxf::Entity::New(context());
- if (!outputMessage) {
- return outputMessage.error();
- }
- auto error = output_adapter_.get()->AddImageToMessage<::cvcore::ImageType::Y_F32>(
- outputMessage.value(), outputWidth, outputHeight, pool_.get(), false, output_name_.get().c_str());
- if (error != GXF_SUCCESS) {
- return GXF_FAILURE;
- }
- auto outputImage = output_adapter_.get()->WrapImageFromMessage<::cvcore::ImageType::Y_F32>(
- outputMessage.value(), output_name_.get().c_str());
- if (!outputImage) {
- return GXF_FAILURE;
- }
-
- // Creating CVCore Tensors to hold the input and output data
- ::cvcore::Tensor<::cvcore::NHWC, ::cvcore::C1, ::cvcore::F32> outputImageDevice(outputWidth, outputHeight, 1,
- outputImage.value().getData(), false);
-
- // Running the inference
- auto inputColorFormat = leftBufferInfo.color_format;
- if (inputColorFormat == gxf::VideoFormat::GXF_VIDEO_FORMAT_RGB ||
- inputColorFormat == gxf::VideoFormat::GXF_VIDEO_FORMAT_BGR) {
- ::cvcore::Tensor<::cvcore::NHWC, ::cvcore::C3, ::cvcore::U8> leftImageDevice(
- leftBufferInfo.width, leftBufferInfo.height, 1, reinterpret_cast<uint8_t *>(leftInputBuffer.value()->pointer()),
- false);
- ::cvcore::Tensor<::cvcore::NHWC, ::cvcore::C3, ::cvcore::U8> rightImageDevice(
- leftBufferInfo.width, leftBufferInfo.height, 1, reinterpret_cast<uint8_t *>(rightInputBuffer.value()->pointer()),
- false);
- objESS->execute(outputImageDevice, leftImageDevice, rightImageDevice, cuda_stream);
- } else if (inputColorFormat == gxf::VideoFormat::GXF_VIDEO_FORMAT_R32_G32_B32 ||
- inputColorFormat == gxf::VideoFormat::GXF_VIDEO_FORMAT_B32_G32_R32) {
- ::cvcore::Tensor<::cvcore::NCHW, ::cvcore::C3, ::cvcore::F32> leftImageDevice(
- leftBufferInfo.width, leftBufferInfo.height, 1, reinterpret_cast<float *>(leftInputBuffer.value()->pointer()),
- false);
- ::cvcore::Tensor<::cvcore::NCHW, ::cvcore::C3, ::cvcore::F32> rightImageDevice(
- leftBufferInfo.width, leftBufferInfo.height, 1, reinterpret_cast<float *>(rightInputBuffer.value()->pointer()),
- false);
- objESS->execute(outputImageDevice, leftImageDevice, rightImageDevice, cuda_stream);
- } else {
- GXF_LOG_ERROR("invalid input image type.");
- return GXF_FAILURE;
- }
-
- // Allocate a cuda event that can be used to record on each tick
- if (!cuda_stream_.is_null()) {
- detail::BindCudaStream(outputMessage.value(), cuda_stream_.cid());
- detail::RecordCudaEvent(outputMessage.value(), cuda_stream_);
- }
-
- // Pass down timestamp if necessary
- auto maybeDaqTimestamp = timestamp_policy_.get() == 0 ? inputLeftMessage.value().get<gxf::Timestamp>()
- : inputRightMessage.value().get<gxf::Timestamp>();
- if (maybeDaqTimestamp) {
- auto outputTimestamp = outputMessage.value().add<gxf::Timestamp>(maybeDaqTimestamp.value().name());
- if (!outputTimestamp) {
- return outputTimestamp.error();
- }
- *outputTimestamp.value() = *maybeDaqTimestamp.value();
- }
-
- // Send the data
- output_transmitter_->publish(outputMessage.value());
- return GXF_SUCCESS;
-}
-
-gxf_result_t ESS::stop() {
- objESS.reset(nullptr);
- return GXF_SUCCESS;
-}
-
-} // namespace cvcore
-} // namespace nvidia
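The `image_type_` / `model_input_type_` handling in `ESS::start()` above maps a string parameter onto an enum value and fails fast on anything unsupported. A minimal standalone sketch of that pattern, with illustrative names (`ImgType`, `ParseImageType`) that are not the cvcore API:

```cpp
#include <cassert>
#include <optional>
#include <string>

// Hypothetical stand-in for ::cvcore::ImageType; not the real cvcore enum.
enum class ImgType { BGR_U8, RGB_U8 };

// Map a string parameter to the enum, returning std::nullopt for
// unsupported values so the caller can fail fast (as ESS::start() does
// by returning GXF_FAILURE).
std::optional<ImgType> ParseImageType(const std::string& s) {
  if (s == "BGR_U8") return ImgType::BGR_U8;
  if (s == "RGB_U8") return ImgType::RGB_U8;
  return std::nullopt;
}
```

Returning an optional instead of throwing keeps the validation usable from codelet lifecycle methods that report errors via result codes.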
diff --git a/isaac_ros_ess/gxf/ess/TensorOps.cpp b/isaac_ros_ess/gxf/ess/TensorOps.cpp
deleted file mode 100644
index 4740f02..0000000
--- a/isaac_ros_ess/gxf/ess/TensorOps.cpp
+++ /dev/null
@@ -1,95 +0,0 @@
-// SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
-// Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-//
-// SPDX-License-Identifier: Apache-2.0
-#include "extensions/ess/ESS.hpp"
-#include "extensions/tensor_ops/CameraModel.hpp"
-#include "extensions/tensor_ops/ConvertColorFormat.hpp"
-#include "extensions/tensor_ops/CropAndResize.hpp"
-#include "extensions/tensor_ops/Frame3D.hpp"
-#include "extensions/tensor_ops/ImageAdapter.hpp"
-#include "extensions/tensor_ops/InterleavedToPlanar.hpp"
-#include "extensions/tensor_ops/Normalize.hpp"
-#include "extensions/tensor_ops/Reshape.hpp"
-#include "extensions/tensor_ops/Resize.hpp"
-#include "extensions/tensor_ops/TensorOperator.hpp"
-#include "extensions/tensor_ops/TensorStream.hpp"
-#include "extensions/tensor_ops/Undistort.hpp"
-
-#include "gxf/std/extension_factory_helper.hpp"
-
-GXF_EXT_FACTORY_BEGIN()
-GXF_EXT_FACTORY_SET_INFO(
- 0x6eae64ff97a94d9b, 0xb324f85e6a98a75a, "NvCvTensorOpsExtension",
- "Generic CVCORE tensor_ops interfaces", "Nvidia_Gxf", "3.1.0", "LICENSE");
-
-GXF_EXT_FACTORY_ADD(
- 0xd073a92344ba4b82, 0xbd0f18f4996048e8, nvidia::cvcore::tensor_ops::CameraModel,
- nvidia::gxf::Component,
- "Construct Camera distortion model / Camera intrinsic compatible with CVCORE");
-
-GXF_EXT_FACTORY_ADD(
- 0x6c9419223e4b4c2c, 0x899a4d65279c6508, nvidia::cvcore::tensor_ops::Frame3D,
- nvidia::gxf::Component,
- "Construct Camera extrinsic compatible with CVCORE");
-
-GXF_EXT_FACTORY_ADD(
- 0xd94385e5b35b4635, 0x9adb0d214a3865f7, nvidia::cvcore::tensor_ops::TensorStream,
- nvidia::gxf::Component, "Wrapper of CVCORE ITensorOperatorStream/ITensorOperatorContext");
-
-GXF_EXT_FACTORY_ADD(
- 0xd0c4ddad486a4a92, 0xb69c8a5304b205ea, nvidia::cvcore::tensor_ops::ImageAdapter,
- nvidia::gxf::Component, "Utility component for conversion between message and cvcore image type");
-
-GXF_EXT_FACTORY_ADD(
- 0xadebc792bd0b4a57, 0x99c1405fd2ea0728, nvidia::cvcore::tensor_ops::StreamUndistort,
- nvidia::gxf::Codelet, "Codelet for stream image undistortion in tensor_ops");
-
-GXF_EXT_FACTORY_ADD(
- 0xa58141ac7eca4ea6, 0x9b545446fe379a12, nvidia::cvcore::tensor_ops::Resize, nvidia::gxf::Codelet,
- "Codelet for image resizing in tensor_ops");
-
-GXF_EXT_FACTORY_ADD(
- 0xeb8b5f5b36d44b49, 0x81f959fd28e6f678, nvidia::cvcore::tensor_ops::StreamResize,
- nvidia::gxf::Codelet, "Codelet for stream image resizing in tensor_ops");
-
-GXF_EXT_FACTORY_ADD(
- 0x4a7ff422de3841bd, 0x9e743ac10d9294b7, nvidia::cvcore::tensor_ops::CropAndResize,
- nvidia::gxf::Codelet, "Codelet for crop and resizing operation in tensor_ops");
-
-GXF_EXT_FACTORY_ADD(
- 0x7018f0b9034c462c, 0xa9fbaf7ee012974a, nvidia::cvcore::tensor_ops::Normalize,
- nvidia::gxf::Codelet,
- "Codelet for image normalization in tensor_ops");
-
-GXF_EXT_FACTORY_ADD(
- 0x269d4237f3c3479e, 0xbcca9ecc44c71a71, nvidia::cvcore::tensor_ops::InterleavedToPlanar,
- nvidia::gxf::Codelet, "Codelet for convert interleaved image to planar image in tensor_ops");
-
-GXF_EXT_FACTORY_ADD(
- 0xfc4d7b4d8fcc4dab, 0xa286056e0fcafa79, nvidia::cvcore::tensor_ops::ConvertColorFormat,
- nvidia::gxf::Codelet, "Codelet for image color conversion in tensor_ops");
-
-GXF_EXT_FACTORY_ADD(
- 0x5ab4a4d8f7a34553, 0xa90be52660b076fe, nvidia::cvcore::tensor_ops::StreamConvertColorFormat,
- nvidia::gxf::Codelet, "Codelet for stream image color conversion in tensor_ops");
-
-GXF_EXT_FACTORY_ADD(
- 0x26789b7d5a8d4e85, 0x86b845ec5f4cd12b, nvidia::cvcore::tensor_ops::Reshape, nvidia::gxf::Codelet,
- "Codelet for image reshape in tensor_ops");
-GXF_EXT_FACTORY_ADD(
- 0x1aa1eea914344aff, 0x97fddaaedb594121, nvidia::cvcore::ESS, nvidia::gxf::Codelet,
- "ESS GXF Extension");
-GXF_EXT_FACTORY_END()
diff --git a/isaac_ros_ess/gxf/ess/cvcore/include/cv/core/Array.h b/isaac_ros_ess/gxf/ess/cvcore/include/cv/core/Array.h
deleted file mode 100644
index c9f23d8..0000000
--- a/isaac_ros_ess/gxf/ess/cvcore/include/cv/core/Array.h
+++ /dev/null
@@ -1,386 +0,0 @@
-// SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
-// Copyright (c) 2019-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-//
-// SPDX-License-Identifier: Apache-2.0
-
-#ifndef CVCORE_ARRAY_H
-#define CVCORE_ARRAY_H
-
-#include <cassert>
-#include <cstddef>
-#include <utility>
-
-namespace cvcore {
-
-/**
- * Base implementation of Array.
- */
-class ArrayBase
-{
-public:
- /**
- * Constructor of a non-owning array.
- * @param capacity capacity of the array.
- * @param elemSize byte size of each element.
- * @param dataPtr data pointer to the raw source array.
- * @param isCPU whether to allocate the array on CPU or GPU.
- */
- ArrayBase(std::size_t capacity, std::size_t elemSize, void *dataPtr, bool isCPU);
-
- /**
- * Constructor of a memory-owning array.
- * @param capacity capacity of the array.
- * @param elemSize byte size of each element.
- * @param isCPU whether to allocate the array on CPU or GPU.
- */
- ArrayBase(std::size_t capacity, std::size_t elemSize, bool isCPU);
-
- /**
- * Destructor of ArrayBase.
- */
- ~ArrayBase();
-
- /**
- * ArrayBase is non-copyable.
- */
- ArrayBase(const ArrayBase &) = delete;
-
- /**
- * ArrayBase is non-copyable.
- */
- ArrayBase &operator=(const ArrayBase &) = delete;
-
- /**
- * Move constructor of ArrayBase.
- */
- ArrayBase(ArrayBase &&);
-
- /**
- * Move assignment operator of ArrayBase.
- */
- ArrayBase &operator=(ArrayBase &&);
-
- /**
- * Get the pointer to specified index.
- * @param idx element index.
- * @return pointer to the specified element.
- */
- void *getElement(int idx) const;
-
- /**
- * Get the size of the array.
- * @return size of the array.
- */
- std::size_t getSize() const;
-
- /**
- * Get the capacity of the array.
- * @return capacity of the array.
- */
- std::size_t getCapacity() const;
-
- /**
- * Get the size of each element.
- * @return size of each element.
- */
- std::size_t getElementSize() const;
-
- /**
- * Set the size of the array.
- * @param size size of the array.
- */
- void setSize(std::size_t size);
-
- /**
- * Get the flag whether the array is CPU or GPU array.
- * @return whether the array is CPU array.
- */
- bool isCPU() const;
-
- /**
- * Get the flag whether the array is owning memory space.
- * @return whether the array owns memory.
- */
- bool isOwning() const;
-
- /**
- * Get the raw pointer to the array data.
- * @return void pointer to the first element of the array.
- */
- void *getData() const;
-
-private:
- ArrayBase();
-
- void *m_data;
- std::size_t m_size;
- std::size_t m_capacity;
- std::size_t m_elemSize;
- bool m_isOwning;
- bool m_isCPU;
-};
-
-/**
- * Implementation of Array class.
- * @tparam T type of element in array.
- */
-template<typename T>
-class Array : public ArrayBase
-{
-public:
- /**
- * Default constructor of an array.
- */
- Array()
- : ArrayBase{0, sizeof(T), nullptr, true}
- {
- }
-
- /**
- * Constructor of a non-owning array.
- * @param size size of the array.
- * @param capacity capacity of the array.
- * @param dataPtr data pointer to the raw source array.
- * @param isCPU whether to allocate array on CPU or GPU.
- */
- Array(std::size_t size, std::size_t capacity, void *dataPtr, bool isCPU = true)
- : ArrayBase{capacity, sizeof(T), dataPtr, isCPU}
- {
- ArrayBase::setSize(size);
- }
-
- /**
- * Constructor of a memory-owning array.
- * @param capacity capacity of the array.
- * @param isCPU whether to allocate array on CPU or GPU.
- */
- Array(std::size_t capacity, bool isCPU = true)
- : ArrayBase{capacity, sizeof(T), isCPU}
- {
- }
-
- /**
- * Destructor of the Array.
- */
- ~Array()
- {
- // call resize here such that CPU-based destructor
- // will call destructors of the objects stored
- // in the array before deallocating the storage
- setSize(0);
- }
-
- /**
- * Array is non-copyable.
- */
- Array(const Array &) = delete;
-
- /**
- * Array is non-copyable.
- */
- Array &operator=(const Array &) = delete;
-
- /**
- * Move constructor of Array.
- */
- Array(Array &&t)
- : Array()
- {
- *this = std::move(t);
- }
-
- /**
- * Move assignment operator of Array.
- */
- Array &operator=(Array &&t)
- {
- static_cast<ArrayBase &>(*this) = std::move(t);
- return *this;
- }
-
- /**
- * Set size of the Array.
- * @param size size of the Array.
- */
- void setSize(std::size_t size)
- {
- const std::size_t oldSize = getSize();
- ArrayBase::setSize(size);
- if (isCPU())
- {
- // shrinking case
- for (std::size_t i = size; i < oldSize; ++i)
- {
- reinterpret_cast<T *>(getElement(i))->~T();
- }
- // expanding case
- for (std::size_t i = oldSize; i < size; ++i)
- {
- new (getElement(i)) T;
- }
- }
- }
-
- /**
- * Const array index operator.
- * @param idx index of element.
- * @return const reference to the specified element.
- */
- const T &operator[](int idx) const
- {
- assert(idx >= 0 && idx < getSize());
- return *reinterpret_cast<const T *>(getElement(idx));
- }
-
- /**
- * Array index operator.
- * @param idx index of element.
- * @return reference to the specified element.
- */
- T &operator[](int idx)
- {
- assert(idx >= 0 && idx < getSize());
- return *reinterpret_cast<T *>(getElement(idx));
- }
-};
-
-/**
- * Implementation of ArrayN class.
- * @tparam T type of element in array.
- * @tparam N capacity of array.
- */
-template<typename T, std::size_t N>
-class ArrayN : public ArrayBase
-{
-public:
- /**
- * Default constructor of ArrayN (create an owning Tensor with capacity N).
- */
- ArrayN()
- : ArrayBase{N, sizeof(T), true}
- {
- setSize(N);
- }
-
- /**
- * Constructor of a non-owning ArrayN.
- * @param size size of the array.
- * @param dataPtr data pointer to the raw source array.
- * @param isCPU whether to allocate array on CPU or GPU.
- */
- ArrayN(std::size_t size, void *dataPtr, bool isCPU = true)
- : ArrayBase{N, sizeof(T), dataPtr, isCPU}
- {
- ArrayBase::setSize(size);
- }
-
- /**
- * Constructor of a memory-owning ArrayN.
- * @param isCPU whether to allocate array on CPU or GPU.
- */
- ArrayN(bool isCPU)
- : ArrayBase{N, sizeof(T), isCPU}
- {
- setSize(N);
- }
-
- /**
- * Destructor of the ArrayN.
- */
- ~ArrayN()
- {
- // call resize here such that CPU-based destructor
- // will call destructors of the objects stored
- // in the array before deallocating the storage
- setSize(0);
- }
-
- /**
- * ArrayN is non-copyable.
- */
- ArrayN(const ArrayN &) = delete;
-
- /**
- * ArrayN is non-copyable.
- */
- ArrayN &operator=(const ArrayN &) = delete;
-
- /**
- * Move constructor of ArrayN.
- */
- ArrayN(ArrayN &&t)
- : ArrayN()
- {
- *this = std::move(t);
- }
-
- /**
- * Move assignment operator of ArrayN.
- */
- ArrayN &operator=(ArrayN &&t)
- {
- static_cast<ArrayBase &>(*this) = std::move(t);
- return *this;
- }
-
- /**
- * Set size of the ArrayN.
- * @param size size of the ArrayN.
- */
- void setSize(std::size_t size)
- {
- const std::size_t oldSize = getSize();
- ArrayBase::setSize(size);
- if (isCPU())
- {
- // shrinking case
- for (std::size_t i = size; i < oldSize; ++i)
- {
- reinterpret_cast<T *>(getElement(i))->~T();
- }
- // expanding case
- for (std::size_t i = oldSize; i < size; ++i)
- {
- new (getElement(i)) T;
- }
- }
- }
-
- /**
- * Const ArrayN index operator.
- * @param idx index of element.
- * @return const reference to the specified element.
- */
- const T &operator[](int idx) const
- {
- assert(idx >= 0 && idx < getSize());
- return *reinterpret_cast<const T *>(getElement(idx));
- }
-
- /**
- * ArrayN index operator.
- * @param idx index of element.
- * @return reference to the specified element.
- */
- T &operator[](int idx)
- {
- assert(idx >= 0 && idx < getSize());
- return *reinterpret_cast<T *>(getElement(idx));
- }
-};
-
-} // namespace cvcore
-
-#endif // CVCORE_ARRAY_H
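`Array<T>::setSize` above destroys elements with explicit destructor calls when shrinking and constructs them with placement new when growing, without touching the underlying allocation. A reduced, CPU-only sketch of that construct/destroy-in-place behavior, with illustrative names (`Tracked`, `TrackedArray`) and a fixed capacity rather than the cvcore allocator logic:

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Counts live instances so the setSize semantics are observable.
struct Tracked {
  static int alive;
  Tracked() { ++alive; }
  ~Tracked() { --alive; }
};
int Tracked::alive = 0;

// Minimal analogue of cvcore::Array<T>::setSize: raw storage plus
// construct-on-grow / destroy-on-shrink, no reallocation.
class TrackedArray {
 public:
  static constexpr std::size_t kCapacity = 8;
  ~TrackedArray() { setSize(0); }  // mirrors ~Array() calling setSize(0)
  void setSize(std::size_t size) {
    for (std::size_t i = size; i < m_size; ++i)  // shrinking case
      reinterpret_cast<Tracked *>(storage(i))->~Tracked();
    for (std::size_t i = m_size; i < size; ++i)  // expanding case
      new (storage(i)) Tracked;
    m_size = size;
  }
  std::size_t getSize() const { return m_size; }
 private:
  void *storage(std::size_t i) { return m_buf + i * sizeof(Tracked); }
  alignas(Tracked) unsigned char m_buf[kCapacity * sizeof(Tracked)];
  std::size_t m_size = 0;
};
```

This is also why the destructors of `Array` and `ArrayN` call `setSize(0)` first: element destructors must run before the storage is released.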
diff --git a/isaac_ros_ess/gxf/ess/cvcore/include/cv/core/BBox.h b/isaac_ros_ess/gxf/ess/cvcore/include/cv/core/BBox.h
deleted file mode 100644
index 93100d3..0000000
--- a/isaac_ros_ess/gxf/ess/cvcore/include/cv/core/BBox.h
+++ /dev/null
@@ -1,142 +0,0 @@
-// SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
-// Copyright (c) 2020-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-//
-// SPDX-License-Identifier: Apache-2.0
-
-#ifndef CVCORE_BBOX_H
-#define CVCORE_BBOX_H
-
-#include <algorithm>
-#include <stdexcept>
-#include <utility>
-
-namespace cvcore {
-
-/**
- * A struct.
- * Structure used to store a bounding box.
- */
-struct BBox
-{
- int xmin{0}; /**< minimum x coordinate. */
- int ymin{0}; /**< minimum y coordinate. */
- int xmax{0}; /**< maximum x coordinate. */
- int ymax{0}; /**< maximum y coordinate. */
-
- /**
- * Clamp a bounding box based on a restricting clamp box
- * @param Clamping bounding box (xmin, ymin, xmax, ymax)
- * @return Clamped bounding box
- */
- BBox clamp(const BBox &clampBox) const
- {
- BBox outbox;
- outbox.xmin = std::max(clampBox.xmin, xmin);
- outbox.xmax = std::min(clampBox.xmax, xmax);
- outbox.ymin = std::max(clampBox.ymin, ymin);
- outbox.ymax = std::min(clampBox.ymax, ymax);
- return outbox;
- }
-
- /**
- * @return Width of the bounding box
- */
- size_t getWidth() const
- {
- return xmax - xmin;
- }
-
- /**
- * @return Height of the bounding box
- */
- size_t getHeight() const
- {
- return ymax - ymin;
- }
-
- /**
- * Checks if the bounding box is valid.
- */
- bool isValid() const
- {
- return (xmin < xmax) && (ymin < ymax) && (getWidth() > 0) && (getHeight() > 0);
- }
-
- /**
- * Returns the center of the bounding box
- * @return X,Y coordinate tuple
- */
- std::pair<int, int> getCenter() const
- {
- int centerX = xmin + getWidth() / 2;
- int centerY = ymin + getHeight() / 2;
- return std::pair<int, int>(centerX, centerY);
- }
-
- /**
- * Scales bounding box based along the width and height retaining the same center.
- * @param Scale in X direction along the width
- * @param Scale in Y direction along the height
- * @return Scaled bounding box
- */
- BBox scale(float scaleW, float scaleH) const
- {
- auto center = getCenter();
- float newW = getWidth() * scaleW;
- float newH = getHeight() * scaleH;
- BBox outbox;
- outbox.xmin = center.first - newW / 2;
- outbox.xmax = center.first + newW / 2;
- outbox.ymin = center.second - newH / 2;
- outbox.ymax = center.second + newH / 2;
-
- return outbox;
- }
-
- /**
- * Resizes bounding box to a square bounding box based on
- * the longest edge and clamps the bounding box based on the limits provided.
- * @param Clamping bounding box (xmin, ymin, xmax, ymax)
- * @return Square bounding box
- */
- BBox squarify(const BBox &clampBox) const
- {
- size_t w = getWidth();
- size_t h = getHeight();
-
- BBox clampedBox1 = clamp(clampBox);
- if (!clampedBox1.isValid())
- {
- throw std::range_error("Invalid bounding box generated\n");
- }
- float scaleW = static_cast<float>(std::max(w, h)) / w;
- float scaleH = static_cast<float>(std::max(w, h)) / h;
- BBox scaledBBox = clampedBox1.scale(scaleW, scaleH);
- BBox clampedBox2 = scaledBBox.clamp(clampBox);
- if (!clampedBox2.isValid())
- {
- throw std::range_error("Invalid bounding box generated\n");
- }
- size_t newW = clampedBox2.getWidth();
- size_t newH = clampedBox2.getHeight();
- size_t minW = std::min(newH, newW);
- clampedBox2.ymax = clampedBox2.ymin + minW;
- clampedBox2.xmax = clampedBox2.xmin + minW;
- return clampedBox2;
- }
-};
-
-} // namespace cvcore
-#endif // CVCORE_BBOX_H
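The operations above compose: `squarify` clamps, scales the shorter side up to the longer one about the same center, then clamps again. A small worked example using a free-standing copy of the clamp and scale logic (a simplified `Box` struct for illustration, not the header itself):

```cpp
#include <algorithm>
#include <cassert>

// Free-standing copy of the BBox clamp/scale logic from the header,
// reduced to the two operations used in this example.
struct Box {
  int xmin{0}, ymin{0}, xmax{0}, ymax{0};

  // Intersect with a restricting box, as BBox::clamp does.
  Box clamp(const Box &c) const {
    return {std::max(c.xmin, xmin), std::max(c.ymin, ymin),
            std::min(c.xmax, xmax), std::min(c.ymax, ymax)};
  }

  // Scale width/height about the center, as BBox::scale does.
  Box scale(float sw, float sh) const {
    int cx = xmin + (xmax - xmin) / 2, cy = ymin + (ymax - ymin) / 2;
    float nw = (xmax - xmin) * sw, nh = (ymax - ymin) * sh;
    return {static_cast<int>(cx - nw / 2), static_cast<int>(cy - nh / 2),
            static_cast<int>(cx + nw / 2), static_cast<int>(cy + nh / 2)};
  }
};
```

Scaling a 10x10 box at the origin by 2x yields {-5, -5, 15, 15}; clamping against a {0, 0, 12, 12} limit then trims it back inside the image bounds.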
diff --git a/isaac_ros_ess/gxf/ess/cvcore/include/cv/core/CVError.h b/isaac_ros_ess/gxf/ess/cvcore/include/cv/core/CVError.h
deleted file mode 100644
index 82c16c1..0000000
--- a/isaac_ros_ess/gxf/ess/cvcore/include/cv/core/CVError.h
+++ /dev/null
@@ -1,116 +0,0 @@
-// SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
-// Copyright (c) 2020-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-//
-// SPDX-License-Identifier: Apache-2.0
-
-#ifndef CVCORE_CVERROR_H
-#define CVCORE_CVERROR_H
-
-#include <cstdint>
-#include <stdexcept>
-#include <string>
-#include <system_error>
-#include <type_traits>
-
-#include <cuda_runtime.h>
-
-namespace cvcore {
-
-// CVCORE ERROR CODES
-// -----------------------------------------------------------------------------
-// Defining the CVCORE Error Codes on std::error_condition
-// std::error_condition creates a set of sub-system independent codes which may
-// be used to describe ANY downstream error in a broad sense. An std::error_code
-// is defined within the sub-system context (i.e. tensor_ops, trtbackend, ...)
-// which is mapped to the cvcore::ErrorCode.
-// As an example, cvcore::ErrorCode -1 may not ABSOLUTELY mean the same as
-// tensor_ops::FaultCode -1, but does mean the same as tensor_ops:FaultCode 4.
-// Thus, tensor_ops::FaultCode 4 needs to be mapped to cvcore::ErrorCode -1.
-enum class ErrorCode : std::int32_t
-{
- SUCCESS = 0,
- NOT_READY,
- NOT_IMPLEMENTED,
- INVALID_ARGUMENT,
- INVALID_IMAGE_FORMAT,
- INVALID_STORAGE_TYPE,
- INVALID_ENGINE_TYPE,
- INVALID_OPERATION,
- DETECTED_NAN_IN_RESULT,
- OUT_OF_MEMORY,
- DEVICE_ERROR,
- SYSTEM_ERROR,
-};
-
-} // namespace cvcore
-
-// WARNING: Extending base C++ namespace to cover cvcore error codes
-namespace std {
-
-template<>
-struct is_error_condition_enum<cvcore::ErrorCode> : true_type
-{
-};
-
-template<>
-struct is_error_code_enum<cvcore::ErrorCode> : true_type
-{
-};
-
-} // namespace std
-
-namespace cvcore {
-
-std::error_condition make_error_condition(ErrorCode) noexcept;
-
-std::error_code make_error_code(ErrorCode) noexcept;
-
-// -----------------------------------------------------------------------------
-
-inline void CheckCudaError(cudaError_t code, const char *file, const int line)
-{
- if (code != cudaSuccess)
- {
- const char *errorMessage = cudaGetErrorString(code);
- const std::string message = "CUDA error returned at " + std::string(file) + ":" + std::to_string(line) +
- ", Error code: " + std::to_string(code) + " (" + std::string(errorMessage) + ")";
- throw std::runtime_error(message);
- }
-}
-
-inline void CheckErrorCode(std::error_code err, const char *file, const int line)
-{
- const std::string message = "Error returned at " + std::string(file) + ":" + std::to_string(line) +
- ", Error code: " + std::string(err.message());
-
- if (err != cvcore::make_error_code(cvcore::ErrorCode::SUCCESS))
- {
- throw std::runtime_error(message);
- }
-}
-
-} // namespace cvcore
-
-#define CHECK_ERROR(val) \
- { \
- cvcore::CheckCudaError((val), __FILE__, __LINE__); \
- }
-
-#define CHECK_ERROR_CODE(val) \
- { \
- cvcore::CheckErrorCode((val), __FILE__, __LINE__); \
- }
-
-#endif // CVCORE_CVERROR_H
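CVError.h hooks `cvcore::ErrorCode` into `<system_error>` by specializing the `std::is_error_*_enum` traits and providing `make_error_code` / `make_error_condition`. A minimal standalone version of that pattern, using a hypothetical `DemoError` enum and `DemoCategory` (not the cvcore implementation, which also supplies the sub-system mapping described in the comment block):

```cpp
#include <cassert>
#include <string>
#include <system_error>

// Hypothetical subsystem error enum, standing in for cvcore::ErrorCode.
enum class DemoError { kSuccess = 0, kNotReady = 1, kOutOfMemory = 2 };

namespace std {
// Opt this enum into implicit std::error_code construction.
template <>
struct is_error_code_enum<DemoError> : true_type {};
}  // namespace std

// One category object names the subsystem and renders messages.
class DemoCategory : public std::error_category {
 public:
  const char *name() const noexcept override { return "demo"; }
  std::string message(int ev) const override {
    switch (static_cast<DemoError>(ev)) {
      case DemoError::kSuccess: return "success";
      case DemoError::kNotReady: return "not ready";
      case DemoError::kOutOfMemory: return "out of memory";
    }
    return "unknown";
  }
};

// Found by the standard machinery when converting DemoError to error_code.
std::error_code make_error_code(DemoError e) {
  static DemoCategory category;
  return {static_cast<int>(e), category};
}
```

The category singleton is what lets two numerically equal codes from different subsystems compare unequal, which is the point of the `error_code` / `error_condition` split the header's comment describes.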
diff --git a/isaac_ros_ess/gxf/ess/cvcore/include/cv/core/CameraModel.h b/isaac_ros_ess/gxf/ess/cvcore/include/cv/core/CameraModel.h
deleted file mode 100644
index 157acf4..0000000
--- a/isaac_ros_ess/gxf/ess/cvcore/include/cv/core/CameraModel.h
+++ /dev/null
@@ -1,292 +0,0 @@
-// SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
-// Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-//
-// SPDX-License-Identifier: Apache-2.0
-
-#ifndef CVCORE_CAMERAMODEL_H
-#define CVCORE_CAMERAMODEL_H
-
-#include <array>
-
-#include "cv/core/Array.h"
-#include "cv/core/MathTypes.h"
-
-namespace cvcore {
-
-/**
- * An enum.
- * Enum type for Camera Distortion type.
- */
-enum class CameraDistortionType : uint8_t
-{
- UNKNOWN, /**< Unknown arbitrary distortion model. */
- NONE, /**< No distortion applied. */
- Polynomial, /**< Polynomial distortion model. */
- FisheyeEquidistant, /**< Equidistant Fisheye distortion model. */
- FisheyeEquisolid, /**< Equisolid Fisheye distortion model. */
- FisheyeOrthoGraphic, /**< Orthographic Fisheye distortion model. */
- FisheyeStereographic /**< Stereographic Fisheye distortion model. */
-};
-
-/**
- * Struct type used to store Camera Distortion model type and coefficients.
- */
-struct CameraDistortionModel
-{
- CameraDistortionType type; /**< Camera distortion model type. */
- union /**< Camera distortion model coefficients. */
- {
- float coefficients[8];
- struct
- {
- float k1, k2, k3, k4, k5, k6;
- float p1, p2;
- };
- };
-
- CameraDistortionModel()
- : type(CameraDistortionType::UNKNOWN),
- k1(0.0), k2(0.0), k3(0.0), k4(0.0), k5(0.0), k6(0.0),
- p1(0.0), p2(0.0) {}
-
- /**
- * Camera Distortion Model creation using array of coefficients.
- * @param distortionType Camera distortion model type
- * @param distortionCoefficients An array of camera distortion model coefficients
- * @return Camera Distortion Model
- */
- CameraDistortionModel(CameraDistortionType distortionType, std::array<float, 8> & distortionCoefficients)
- : type(distortionType)
- {
- std::copy(distortionCoefficients.begin(), distortionCoefficients.end(), std::begin(coefficients));
- }
-
- /**
- * Camera Distortion Model creation using individual coefficients.
- * @param distortionType Camera distortion model type
- * @param k1 Camera distortion model coefficient - k1
- * @param k2 Camera distortion model coefficient - k2
- * @param k3 Camera distortion model coefficient - k3
- * @param k4 Camera distortion model coefficient - k4
- * @param k5 Camera distortion model coefficient - k5
- * @param k6 Camera distortion model coefficient - k6
- * @param p1 Camera distortion model coefficient - p1
- * @param p2 Camera distortion model coefficient - p2
- * @return Camera Distortion Model
- */
- CameraDistortionModel(CameraDistortionType distortionType, float k1, float k2, float k3, \
- float k4, float k5, float k6, float p1, float p2)
- : type(distortionType)
- , k1(k1)
- , k2(k2)
- , k3(k3)
- , k4(k4)
- , k5(k5)
- , k6(k6)
- , p1(p1)
- , p2(p2)
- {
-
- }
-
- /**
- * Get camera distortion model type.
- * @return Camera distortion model type
- */
- CameraDistortionType getDistortionType() const
- {
- return type;
- }
-
- /**
- * Get camera distortion model coefficients.
- * @return Camera distortion model coefficients array
- */
- const float * getCoefficients() const
- {
- return &coefficients[0];
- }
-
- inline bool operator==(const CameraDistortionModel & other) const noexcept
- {
- return this->k1 == other.k1 &&
- this->k2 == other.k2 &&
- this->k3 == other.k3 &&
- this->k4 == other.k4 &&
- this->k5 == other.k5 &&
- this->k6 == other.k6 &&
- this->p1 == other.p1 &&
- this->p2 == other.p2;
- }
-
- inline bool operator!=(const CameraDistortionModel & other) const noexcept
- {
- return !(*this == other);
- }
-};
-
-/**
- * Struct type used to store Camera Intrinsics.
- */
-struct CameraIntrinsics
-{
- CameraIntrinsics() = default;
-
- /**
- * Camera Intrinsics creation with given intrinsics values
- * @param fx Camera axis x focal length in pixels
- * @param fy Camera axis y focal length in pixels
- * @param cx Camera axis x principal point in pixels
- * @param cy Camera axis y principal point in pixels
- * @param s Camera slanted pixel
- * @return Camera Intrinsics
- */
- CameraIntrinsics(float fx, float fy, float cx, float cy, float s = 0.0)
- {
- m_intrinsics[0][0] = fx;
- m_intrinsics[0][1] = s;
- m_intrinsics[0][2] = cx;
- m_intrinsics[1][0] = 0.0;
- m_intrinsics[1][1] = fy;
- m_intrinsics[1][2] = cy;
- }
-
- /**
- * Get camera intrinsics x focal length.
- * @return Camera x focal length
- */
- float fx() const
- {
- return m_intrinsics[0][0];
- }
-
- /**
- * Get camera intrinsics y focal length.
- * @return Camera y focal length
- */
- float fy() const
- {
- return m_intrinsics[1][1];
- }
-
- /**
- * Get camera intrinsics x principal point.
- * @return Camera x principal point
- */
- float cx() const
- {
- return m_intrinsics[0][2];
- }
-
- /**
- * Get camera intrinsics y principal point.
- * @return Camera y principal point
- */
- float cy() const
- {
- return m_intrinsics[1][2];
- }
-
- /**
- * Get camera intrinsics slanted pixels.
- * @return Camera slanted pixels
- */
- float skew() const
- {
- return m_intrinsics[0][1];
- }
-
- /**
- * Get camera intrinsics 2D array.
- * @return Camera intrinsics array
- */
- const float * getMatrix23() const
- {
- return &m_intrinsics[0][0];
- }
-
- inline bool operator==(const CameraIntrinsics & other) const noexcept
- {
- return m_intrinsics[0][0] == other.m_intrinsics[0][0] &&
- m_intrinsics[0][1] == other.m_intrinsics[0][1] &&
- m_intrinsics[0][2] == other.m_intrinsics[0][2] &&
- m_intrinsics[1][0] == other.m_intrinsics[1][0] &&
- m_intrinsics[1][1] == other.m_intrinsics[1][1] &&
- m_intrinsics[1][2] == other.m_intrinsics[1][2];
- }
-
- inline bool operator!=(const CameraIntrinsics & other) const noexcept
- {
- return !(*this == other);
- }
-
- float m_intrinsics[2][3] {{1.0, 0.0, 0.0},{0.0, 1.0, 0.0}}; /**< Camera intrinsics 2D array. */
-};
-
-/**
- * Struct type used to store Camera Extrinsics.
- */
-struct CameraExtrinsics
-{
- using RawMatrixType = float[3][4];
-
- CameraExtrinsics() = default;
-
- /**
- * Camera Extrinsics creation with given extrinsics as raw 2D [3 x 4] array
- * @param extrinsics Camera extrinsics as raw 2D array
- * @return Camera Extrinsics
- */
- explicit CameraExtrinsics(const RawMatrixType & extrinsics)
- {
- std::copy(&extrinsics[0][0], &extrinsics[0][0] + 3 * 4, &m_extrinsics[0][0]);
- }
-
- inline bool operator==(const CameraExtrinsics & other) const noexcept
- {
- return m_extrinsics[0][0] == other.m_extrinsics[0][0] &&
- m_extrinsics[0][1] == other.m_extrinsics[0][1] &&
- m_extrinsics[0][2] == other.m_extrinsics[0][2] &&
- m_extrinsics[0][3] == other.m_extrinsics[0][3] &&
- m_extrinsics[1][0] == other.m_extrinsics[1][0] &&
- m_extrinsics[1][1] == other.m_extrinsics[1][1] &&
- m_extrinsics[1][2] == other.m_extrinsics[1][2] &&
- m_extrinsics[1][3] == other.m_extrinsics[1][3] &&
- m_extrinsics[2][0] == other.m_extrinsics[2][0] &&
- m_extrinsics[2][1] == other.m_extrinsics[2][1] &&
- m_extrinsics[2][2] == other.m_extrinsics[2][2] &&
- m_extrinsics[2][3] == other.m_extrinsics[2][3];
- }
-
- inline bool operator!=(const CameraExtrinsics & other) const noexcept
- {
- return !(*this == other);
- }
-
- RawMatrixType m_extrinsics {{1.0, 0.0, 0.0, 0.0},
- {0.0, 1.0, 0.0, 0.0},
- {0.0, 0.0, 1.0, 0.0}};
-};
-
-struct CameraModel
-{
- CameraIntrinsics intrinsic;
- CameraExtrinsics extrinsic;
- CameraDistortionModel distortion;
-};
-
-} // namespace cvcore
-
-#endif // CVCORE_CAMERAMODEL_H
diff --git a/isaac_ros_ess/gxf/ess/cvcore/include/cv/core/ComputeEngine.h b/isaac_ros_ess/gxf/ess/cvcore/include/cv/core/ComputeEngine.h
deleted file mode 100644
index 65fe7ca..0000000
--- a/isaac_ros_ess/gxf/ess/cvcore/include/cv/core/ComputeEngine.h
+++ /dev/null
@@ -1,43 +0,0 @@
-// SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
-// Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-//
-// SPDX-License-Identifier: Apache-2.0
-
-#ifndef CVCORE_COMPUTEENGINE_H
-#define CVCORE_COMPUTEENGINE_H
-
-#include
-
-namespace cvcore {
-
-enum class ComputeEngine : unsigned int
-{
- UNKNOWN = 0x00, // 0000_0000
-
- CPU = 0x01, // 0000_0001
- PVA = 0x02, // 0000_0010
- VIC = 0x04, // 0000_0100
- NVENC = 0x08, // 0000_1000
- GPU = 0x10, // 0001_0000
- DLA = 0x20, // 0010_0000
- DLA_CORE_0 = 0x40, // 0100_0000
- DLA_CORE_1 = 0x80, // 1000_0000
-
- COMPUTE_FAULT = 0xFF // 1111_1111
-};
-
-} // namespace cvcore
-
-#endif // CVCORE_COMPUTEENGINE_H
diff --git a/isaac_ros_ess/gxf/ess/cvcore/include/cv/core/Core.h b/isaac_ros_ess/gxf/ess/cvcore/include/cv/core/Core.h
deleted file mode 100644
index 42732d9..0000000
--- a/isaac_ros_ess/gxf/ess/cvcore/include/cv/core/Core.h
+++ /dev/null
@@ -1,35 +0,0 @@
-// SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
-// Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-//
-// SPDX-License-Identifier: Apache-2.0
-
-#ifndef CORE_H
-#define CORE_H
-
-namespace cvcore {
-
-// Enable dll imports/exports in case of windows support
-#ifdef _WIN32
-#ifdef CVCORE_EXPORT_SYMBOLS // Needs to be enabled in case of compiling dll
-#define CVCORE_API __declspec(dllexport) // Exports symbols when compiling the library.
-#else
-#define CVCORE_API __declspec(dllimport) // Imports the symbols when linked with library.
-#endif
-#else
-#define CVCORE_API
-#endif
-
-} // namespace cvcore
-#endif // CORE_H
diff --git a/isaac_ros_ess/gxf/ess/cvcore/include/cv/core/Image.h b/isaac_ros_ess/gxf/ess/cvcore/include/cv/core/Image.h
deleted file mode 100644
index 263a699..0000000
--- a/isaac_ros_ess/gxf/ess/cvcore/include/cv/core/Image.h
+++ /dev/null
@@ -1,893 +0,0 @@
-// SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
-// Copyright (c) 2020-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-//
-// SPDX-License-Identifier: Apache-2.0
-
-#ifndef CVCORE_IMAGE_H
-#define CVCORE_IMAGE_H
-
-#include <cassert>
-#include <cstddef>
-#include <cstdint>
-#include <type_traits>
-
-#include "Memory.h"
-#include "Tensor.h"
-
-namespace cvcore {
-
-/**
- * An enum.
- * Enum type for image type.
- */
-enum ImageType
-{
- Y_U8, /**< 8-bit unsigned gray. */
- Y_U16, /**< 16-bit unsigned gray. */
- Y_S8, /**< 8-bit signed gray. */
- Y_S16, /**< 16-bit signed gray. */
- Y_F16, /**< half normalized gray. */
- Y_F32, /**< float normalized gray. */
- RGB_U8, /**< 8-bit RGB. */
- RGB_U16, /**< 16-bit RGB. */
- RGB_F16, /**< half RGB. */
- RGB_F32, /**< float RGB. */
- BGR_U8, /**< 8-bit BGR. */
- BGR_U16, /**< 16-bit BGR. */
- BGR_F16, /**< half BGR. */
- BGR_F32, /**< float BGR. */
- RGBA_U8, /**< 8-bit RGBA. */
- RGBA_U16, /**< 16-bit RGBA. */
- RGBA_F16, /**< half RGBA. */
- RGBA_F32, /**< float RGBA. */
- PLANAR_RGB_U8, /**< 8-bit planar RGB. */
- PLANAR_RGB_U16, /**< 16-bit planar RGB. */
- PLANAR_RGB_F16, /**< half planar RGB. */
- PLANAR_RGB_F32, /**< float planar RGB. */
- PLANAR_BGR_U8, /**< 8-bit planar BGR. */
- PLANAR_BGR_U16, /**< 16-bit planar BGR. */
- PLANAR_BGR_F16, /**< half planar BGR. */
- PLANAR_BGR_F32, /**< float planar BGR. */
- PLANAR_RGBA_U8, /**< 8-bit planar RGBA. */
- PLANAR_RGBA_U16, /**< 16-bit planar RGBA. */
- PLANAR_RGBA_F16, /**< half planar RGBA. */
- PLANAR_RGBA_F32, /**< float planar RGBA. */
- NV12, /**< 8-bit planar Y + interleaved and subsampled (1/4 Y samples) UV. */
- NV24, /**< 8-bit planar Y + interleaved UV. */
-};
-
-/**
- * Struct type for image preprocessing params
- */
-struct ImagePreProcessingParams
-{
- ImageType imgType; /**< Input Image Type. */
- float pixelMean[3]; /**< Image Mean value offset for R,G,B channels. Default is 0.0f */
- float normalization[3]; /**< Scale or normalization values for R,G,B channels. Default is 1.0/255.0f */
- float stdDev[3]; /**< Standard deviation values for R,G,B channels. Default is 1.0f */
-};
-
-template<ImageType T>
-struct IsCompositeImage : std::integral_constant<bool, T == ImageType::NV12 || T == ImageType::NV24>
-{
-};
-
-template<ImageType T>
-struct IsPlanarImage
- : std::integral_constant<bool, T == ImageType::PLANAR_RGB_U8 || T == ImageType::PLANAR_RGB_U16 || T == ImageType::PLANAR_RGB_F16 || T == ImageType::PLANAR_RGB_F32 || T == ImageType::PLANAR_BGR_U8 || T == ImageType::PLANAR_BGR_U16 || T == ImageType::PLANAR_BGR_F16 || T == ImageType::PLANAR_BGR_F32 || T == ImageType::PLANAR_RGBA_U8 || T == ImageType::PLANAR_RGBA_U16 || T == ImageType::PLANAR_RGBA_F16 || T == ImageType::PLANAR_RGBA_F32>
-{
-};
-
-template<ImageType T>
-struct IsInterleavedImage : std::integral_constant<bool, !IsCompositeImage<T>::value && !IsPlanarImage<T>::value>
-{
-};
-
-/**
- * Image traits that map ImageType to TensorLayout, ChannelCount and ChannelType.
- */
-template<ImageType T, size_t N>
-struct ImageTraits;
-
-template<>
-struct ImageTraits<ImageType::Y_U8, 3>
-{
- static constexpr TensorLayout TL = TensorLayout::HWC;
- static constexpr ChannelCount CC = ChannelCount::C1;
- static constexpr ChannelType CT = ChannelType::U8;
-};
-
-template<>
-struct ImageTraits<ImageType::Y_U8, 4>
-{
- static constexpr TensorLayout TL = TensorLayout::NHWC;
- static constexpr ChannelCount CC = ChannelCount::C1;
- static constexpr ChannelType CT = ChannelType::U8;
-};
-
-template<>
-struct ImageTraits<ImageType::Y_U16, 3>
-{
- static constexpr TensorLayout TL = TensorLayout::HWC;
- static constexpr ChannelCount CC = ChannelCount::C1;
- static constexpr ChannelType CT = ChannelType::U16;
-};
-
-template<>
-struct ImageTraits<ImageType::Y_U16, 4>
-{
- static constexpr TensorLayout TL = TensorLayout::NHWC;
- static constexpr ChannelCount CC = ChannelCount::C1;
- static constexpr ChannelType CT = ChannelType::U16;
-};
-
-template<>
-struct ImageTraits<ImageType::Y_S8, 3>
-{
- static constexpr TensorLayout TL = TensorLayout::HWC;
- static constexpr ChannelCount CC = ChannelCount::C1;
- static constexpr ChannelType CT = ChannelType::S8;
-};
-
-template<>
-struct ImageTraits<ImageType::Y_S8, 4>
-{
- static constexpr TensorLayout TL = TensorLayout::NHWC;
- static constexpr ChannelCount CC = ChannelCount::C1;
- static constexpr ChannelType CT = ChannelType::S8;
-};
-
-template<>
-struct ImageTraits<ImageType::Y_S16, 3>
-{
- static constexpr TensorLayout TL = TensorLayout::HWC;
- static constexpr ChannelCount CC = ChannelCount::C1;
- static constexpr ChannelType CT = ChannelType::S16;
-};
-
-template<>
-struct ImageTraits<ImageType::Y_S16, 4>
-{
- static constexpr TensorLayout TL = TensorLayout::NHWC;
- static constexpr ChannelCount CC = ChannelCount::C1;
- static constexpr ChannelType CT = ChannelType::S16;
-};
-
-template<>
-struct ImageTraits<ImageType::Y_F32, 3>
-{
- static constexpr TensorLayout TL = TensorLayout::HWC;
- static constexpr ChannelCount CC = ChannelCount::C1;
- static constexpr ChannelType CT = ChannelType::F32;
-};
-
-template<>
-struct ImageTraits<ImageType::Y_F32, 4>
-{
- static constexpr TensorLayout TL = TensorLayout::NHWC;
- static constexpr ChannelCount CC = ChannelCount::C1;
- static constexpr ChannelType CT = ChannelType::F32;
-};
-
-template<>
-struct ImageTraits<ImageType::RGB_U8, 3>
-{
- static constexpr TensorLayout TL = TensorLayout::HWC;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::U8;
-};
-
-template<>
-struct ImageTraits<ImageType::RGB_U8, 4>
-{
- static constexpr TensorLayout TL = TensorLayout::NHWC;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::U8;
-};
-
-template<>
-struct ImageTraits<ImageType::RGB_U16, 3>
-{
- static constexpr TensorLayout TL = TensorLayout::HWC;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::U16;
-};
-
-template<>
-struct ImageTraits<ImageType::RGB_U16, 4>
-{
- static constexpr TensorLayout TL = TensorLayout::NHWC;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::U16;
-};
-
-template<>
-struct ImageTraits<ImageType::RGB_F32, 3>
-{
- static constexpr TensorLayout TL = TensorLayout::HWC;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::F32;
-};
-
-template<>
-struct ImageTraits<ImageType::RGB_F32, 4>
-{
- static constexpr TensorLayout TL = TensorLayout::NHWC;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::F32;
-};
-
-template<>
-struct ImageTraits<ImageType::BGR_U8, 3>
-{
- static constexpr TensorLayout TL = TensorLayout::HWC;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::U8;
-};
-
-template<>
-struct ImageTraits<ImageType::BGR_U8, 4>
-{
- static constexpr TensorLayout TL = TensorLayout::NHWC;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::U8;
-};
-
-template<>
-struct ImageTraits<ImageType::BGR_U16, 3>
-{
- static constexpr TensorLayout TL = TensorLayout::HWC;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::U16;
-};
-
-template<>
-struct ImageTraits<ImageType::BGR_U16, 4>
-{
- static constexpr TensorLayout TL = TensorLayout::NHWC;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::U16;
-};
-
-template<>
-struct ImageTraits<ImageType::BGR_F32, 3>
-{
- static constexpr TensorLayout TL = TensorLayout::HWC;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::F32;
-};
-
-template<>
-struct ImageTraits<ImageType::BGR_F32, 4>
-{
- static constexpr TensorLayout TL = TensorLayout::NHWC;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::F32;
-};
-
-template<>
-struct ImageTraits<ImageType::PLANAR_RGB_U8, 3>
-{
- static constexpr TensorLayout TL = TensorLayout::CHW;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::U8;
-};
-
-template<>
-struct ImageTraits<ImageType::PLANAR_RGB_U8, 4>
-{
- static constexpr TensorLayout TL = TensorLayout::NCHW;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::U8;
-};
-
-template<>
-struct ImageTraits<ImageType::PLANAR_RGB_U16, 3>
-{
- static constexpr TensorLayout TL = TensorLayout::CHW;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::U16;
-};
-
-template<>
-struct ImageTraits<ImageType::PLANAR_RGB_U16, 4>
-{
- static constexpr TensorLayout TL = TensorLayout::NCHW;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::U16;
-};
-
-template<>
-struct ImageTraits<ImageType::PLANAR_RGB_F32, 3>
-{
- static constexpr TensorLayout TL = TensorLayout::CHW;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::F32;
-};
-
-template<>
-struct ImageTraits<ImageType::PLANAR_RGB_F32, 4>
-{
- static constexpr TensorLayout TL = TensorLayout::NCHW;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::F32;
-};
-
-template<>
-struct ImageTraits<ImageType::PLANAR_BGR_U8, 3>
-{
- static constexpr TensorLayout TL = TensorLayout::CHW;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::U8;
-};
-
-template<>
-struct ImageTraits<ImageType::PLANAR_BGR_U8, 4>
-{
- static constexpr TensorLayout TL = TensorLayout::NCHW;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::U8;
-};
-
-template<>
-struct ImageTraits<ImageType::PLANAR_BGR_U16, 3>
-{
- static constexpr TensorLayout TL = TensorLayout::CHW;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::U16;
-};
-
-template<>
-struct ImageTraits<ImageType::PLANAR_BGR_U16, 4>
-{
- static constexpr TensorLayout TL = TensorLayout::NCHW;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::U16;
-};
-
-template<>
-struct ImageTraits<ImageType::PLANAR_BGR_F32, 3>
-{
- static constexpr TensorLayout TL = TensorLayout::CHW;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::F32;
-};
-
-template<>
-struct ImageTraits<ImageType::PLANAR_BGR_F32, 4>
-{
- static constexpr TensorLayout TL = TensorLayout::NCHW;
- static constexpr ChannelCount CC = ChannelCount::C3;
- static constexpr ChannelType CT = ChannelType::F32;
-};
-
-/**
- * Get the bytes of each element for a specific ImageType.
- */
-inline size_t GetImageElementSize(const ImageType type)
-{
- size_t imageElementSize;
-
- switch (type)
- {
- case ImageType::Y_U8:
- case ImageType::Y_S8:
- case ImageType::RGB_U8:
- case ImageType::BGR_U8:
- case ImageType::RGBA_U8:
- case ImageType::PLANAR_RGB_U8:
- case ImageType::PLANAR_BGR_U8:
- case ImageType::PLANAR_RGBA_U8:
- {
- imageElementSize = 1;
- break;
- }
- case ImageType::Y_U16:
- case ImageType::Y_S16:
- case ImageType::RGB_U16:
- case ImageType::BGR_U16:
- case ImageType::RGBA_U16:
- case ImageType::PLANAR_RGB_U16:
- case ImageType::PLANAR_BGR_U16:
- case ImageType::PLANAR_RGBA_U16:
- case ImageType::Y_F16:
- case ImageType::RGB_F16:
- case ImageType::BGR_F16:
- case ImageType::RGBA_F16:
- case ImageType::PLANAR_RGB_F16:
- case ImageType::PLANAR_BGR_F16:
- case ImageType::PLANAR_RGBA_F16:
- {
- imageElementSize = 2;
- break;
- }
- case ImageType::Y_F32:
- case ImageType::RGB_F32:
- case ImageType::BGR_F32:
- case ImageType::RGBA_F32:
- case ImageType::PLANAR_RGB_F32:
- case ImageType::PLANAR_BGR_F32:
- case ImageType::PLANAR_RGBA_F32:
- {
- imageElementSize = 4;
- break;
- }
- default:
- {
- imageElementSize = 0;
- }
- }
-
- return imageElementSize;
-}
-
-/**
- * Get the number of channels for a specific ImageType.
- */
-inline size_t GetImageChannelCount(const ImageType type)
-{
- size_t imageChannelCount;
-
- switch (type)
- {
- case ImageType::Y_U8:
- case ImageType::Y_U16:
- case ImageType::Y_S8:
- case ImageType::Y_S16:
- case ImageType::Y_F16:
- case ImageType::Y_F32:
- {
- imageChannelCount = 1;
- break;
- }
- case ImageType::RGB_U8:
- case ImageType::RGB_U16:
- case ImageType::RGB_F16:
- case ImageType::RGB_F32:
- case ImageType::BGR_U8:
- case ImageType::BGR_U16:
- case ImageType::BGR_F16:
- case ImageType::BGR_F32:
- case ImageType::PLANAR_RGB_U8:
- case ImageType::PLANAR_RGB_U16:
- case ImageType::PLANAR_RGB_F16:
- case ImageType::PLANAR_RGB_F32:
- case ImageType::PLANAR_BGR_U8:
- case ImageType::PLANAR_BGR_U16:
- case ImageType::PLANAR_BGR_F16:
- case ImageType::PLANAR_BGR_F32:
- {
- imageChannelCount = 3;
- break;
- }
- case ImageType::RGBA_U8:
- case ImageType::RGBA_U16:
- case ImageType::RGBA_F16:
- case ImageType::RGBA_F32:
- case ImageType::PLANAR_RGBA_U8:
- case ImageType::PLANAR_RGBA_U16:
- case ImageType::PLANAR_RGBA_F16:
- case ImageType::PLANAR_RGBA_F32:
- {
- imageChannelCount = 4;
- break;
- }
- default:
- {
- imageChannelCount = 0;
- }
- }
-
- return imageChannelCount;
-};
-
-template<ImageType T>
-class Image
-{
-};
-
-template<>
-class Image<ImageType::Y_U8> : public Tensor<HWC, C1, U8>
-{
- using Tensor<HWC, C1, U8>::Tensor;
-};
-
-template<>
-class Image<ImageType::Y_U16> : public Tensor<HWC, C1, U16>
-{
- using Tensor<HWC, C1, U16>::Tensor;
-};
-
-template<>
-class Image<ImageType::Y_S8> : public Tensor<HWC, C1, S8>
-{
- using Tensor<HWC, C1, S8>::Tensor;
-};
-
-template<>
-class Image<ImageType::Y_S16> : public Tensor<HWC, C1, S16>
-{
- using Tensor<HWC, C1, S16>::Tensor;
-};
-
-template<>
-class Image<ImageType::Y_F16> : public Tensor<HWC, C1, F16>
-{
- using Tensor<HWC, C1, F16>::Tensor;
-};
-
-template<>
-class Image<ImageType::Y_F32> : public Tensor<HWC, C1, F32>
-{
- using Tensor<HWC, C1, F32>::Tensor;
-};
-
-template<>
-class Image<ImageType::RGB_U8> : public Tensor<HWC, C3, U8>
-{
- using Tensor<HWC, C3, U8>::Tensor;
-};
-
-template<>
-class Image<ImageType::RGB_U16> : public Tensor<HWC, C3, U16>
-{
- using Tensor<HWC, C3, U16>::Tensor;
-};
-
-template<>
-class Image<ImageType::RGB_F16> : public Tensor<HWC, C3, F16>
-{
- using Tensor<HWC, C3, F16>::Tensor;
-};
-
-template<>
-class Image<ImageType::RGB_F32> : public Tensor<HWC, C3, F32>
-{
- using Tensor<HWC, C3, F32>::Tensor;
-};
-
-template<>
-class Image<ImageType::BGR_U8> : public Tensor<HWC, C3, U8>
-{
- using Tensor<HWC, C3, U8>::Tensor;
-};
-
-template<>
-class Image<ImageType::BGR_U16> : public Tensor<HWC, C3, U16>
-{
- using Tensor<HWC, C3, U16>::Tensor;
-};
-
-template<>
-class Image<ImageType::BGR_F16> : public Tensor<HWC, C3, F16>
-{
- using Tensor<HWC, C3, F16>::Tensor;
-};
-
-template<>
-class Image<ImageType::BGR_F32> : public Tensor<HWC, C3, F32>
-{
- using Tensor<HWC, C3, F32>::Tensor;
-};
-
-template<>
-class Image<ImageType::RGBA_U8> : public Tensor<HWC, C4, U8>
-{
- using Tensor<HWC, C4, U8>::Tensor;
-};
-
-template<>
-class Image<ImageType::RGBA_U16> : public Tensor<HWC, C4, U16>
-{
- using Tensor<HWC, C4, U16>::Tensor;
-};
-
-template<>
-class Image<ImageType::RGBA_F16> : public Tensor<HWC, C4, F16>
-{
- using Tensor<HWC, C4, F16>::Tensor;
-};
-
-template<>
-class Image<ImageType::RGBA_F32> : public Tensor<HWC, C4, F32>