[Feature] Added agnostic_nms onnx converter for tensorrt 8 #1052
Thanks for your contribution; we appreciate it a lot. Following these instructions will make your pull request healthier and help it get feedback more easily. If you do not understand some items, don't worry: just open the pull request and ask the maintainers for help.
Motivation
Please describe the motivation for this PR and the goal you want to achieve through this PR.
I wanted to export my ONNX model with the class-agnostic NMS algorithm, which TensorRT >= 8.6 supports, and I verified that it works. You can test it by passing the `--agnostic-nms` flag to `export_onnx.py`. The feature itself comes from the official TensorRT plugin; I only changed the export code to use the plugin during ONNX conversion. You can check the TensorRT feature below.
https://github.com/NVIDIA/TensorRT/blob/release/8.6/plugin/efficientNMSPlugin/EfficientNMSPlugin_PluginConfig.yaml
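For reference, the linked plugin config lists the attributes the `EfficientNMS_TRT` ONNX node accepts, including the `class_agnostic` switch added in TensorRT 8.6 that this PR exposes. A hedged sketch of such an attribute set (the names follow the plugin config; the specific values here are illustrative, not the PR's actual defaults):

```python
# Attributes an exporter might attach to an EfficientNMS_TRT ONNX node.
# `class_agnostic` is the switch toggled by the --agnostic-nms flag;
# all values below are illustrative.
efficient_nms_attrs = {
    "plugin_version": "1",
    "background_class": -1,   # -1 = no background class to skip
    "max_output_boxes": 100,  # detections kept per image
    "score_threshold": 0.25,
    "iou_threshold": 0.45,
    "score_activation": 0,    # 0 = scores are already probabilities
    "box_coding": 0,          # 0 = corner coordinates (x1, y1, x2, y2)
    "class_agnostic": 1,      # 1 = suppress across classes (TRT >= 8.6)
}
```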
Modification
Please briefly describe what modification is made in this PR.
Before: agnostic NMS was not supported with TensorRT 8.
After: agnostic NMS is supported with TensorRT >= 8.6.
You can see the difference below. In the first video (before agnostic NMS), the model treats one object as belonging to two classes and returns two bounding boxes for it. In the second video (after agnostic NMS), it returns just one bounding box, the one with the highest confidence score.
BEFORE
before-agnostic-nms.mp4
AFTER
after-agnostic-nms.mp4
Both videos were served with NVIDIA DeepStream.
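The behavior shown in the two videos can be illustrated with a minimal sketch (not the PR's actual code, and plain greedy NMS rather than the EfficientNMS plugin): with per-class NMS, two overlapping boxes with different class labels both survive; with class-agnostic NMS, only the highest-scoring one does.

```python
# Illustrative sketch: per-class vs class-agnostic greedy NMS.
# Boxes are (x1, y1, x2, y2); detections are (box, score, class_id).

def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(dets, iou_thr=0.5, agnostic=False):
    # Greedy NMS: when agnostic=False, a box can only suppress boxes of
    # the same class, so near-duplicates with different labels survive.
    keep = []
    for det in sorted(dets, key=lambda d: d[1], reverse=True):
        if all(iou(det[0], k[0]) <= iou_thr or
               (not agnostic and det[2] != k[2]) for k in keep):
            keep.append(det)
    return keep

# Two near-identical boxes on the same object, labeled as two classes.
dets = [((0, 0, 10, 10), 0.9, 0), ((1, 1, 10, 10), 0.8, 1)]
print(len(nms(dets, agnostic=False)))  # 2 boxes: one per class
print(len(nms(dets, agnostic=True)))   # 1 box: highest score wins
```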
BC-breaking (Optional)
Does the modification introduce changes that break the backward compatibility of the downstream repos?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.
Use cases (Optional)
If this PR introduces a new feature, it is better to list some use cases here and update the documentation.
Checklist