Add TensorRT export support and inference-models inference integration #853
Conversation
Co-authored-by: Borda <6035284+Borda@users.noreply.github.com> Agent-Logs-Url: https://github.com/roboflow/rf-detr/sessions/e11f7493-6da1-4cc3-a0e3-a63236b41b37
… and tests Co-authored-by: Borda <6035284+Borda@users.noreply.github.com> Agent-Logs-Url: https://github.com/roboflow/rf-detr/sessions/e11f7493-6da1-4cc3-a0e3-a63236b41b37
Codecov Report

❌ Your patch check has failed because the patch coverage (65%) is below the target coverage (95%). You can increase the patch coverage or adjust the target coverage.

```
@@ Coverage Diff @@
##           develop    #853    +/-   ##
========================================
- Coverage       76%     71%     -4%
========================================
  Files           94      94
  Lines         7269    7348    +79
========================================
- Hits          5502    5235   -267
- Misses        1767    2113   +346
```
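As a sanity check on the Codecov diff above, the percentages follow from the raw counts (coverage is hits divided by lines); a quick recomputation, purely illustrative:

```python
# Recompute the percentages in the Codecov diff from its raw line counts.
base = 100 * 5502 / 7269  # develop branch: hits / lines, shown as 76%
head = 100 * 5235 / 7348  # this PR:        hits / lines, shown as 71%
delta = head - base       # shown as -4% (rounded from the unrounded values)
```

Note the delta is computed on the unrounded percentages, which is why the table shows -4% rather than the 76% − 71% = 5-point gap of the rounded figures.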
@copilot pivot it to set up integration with `inference-models`
Agent-Logs-Url: https://github.com/roboflow/rf-detr/sessions/e30db6fb-272b-465a-938d-afb83cb0827a Co-authored-by: Borda <6035284+Borda@users.noreply.github.com>
Done in 696747f. Removed the custom … The docs now show:
Warning: firewall rules blocked me from connecting to one or more addresses. I tried to connect to the following addresses, but was blocked by firewall rules:

If you need me to access, download, or install something from one of these locations, you can either:
@copilot resolve merge conflict or rebase on the develop branch with just relevant changes
Agent-Logs-Url: https://github.com/roboflow/rf-detr/sessions/391219cf-2e3d-4c22-9741-5fd9fc359665 Co-authored-by: Borda <6035284+Borda@users.noreply.github.com>
Done in b3099a7: rebased onto `develop`.
…rt.py Co-authored-by: Borda <6035284+Borda@users.noreply.github.com>
RF-DETR had no built-in path to TensorRT export or multi-backend inference. This PR adds a one-step TensorRT export flag and documents `inference-models` as the recommended inference library.

Changes

- `RFDETR.export(tensorrt=True)`: chains ONNX export → `trtexec` conversion in a single call, producing both `output/inference_model.onnx` and `output/inference_model.engine`
- `trtexec()` return value fix: the helper was returning `None`; it now returns the engine path (required for the export chaining above)
- `inference-models` integration: `docs/learn/export.md` now directs users to `inference-models` for multi-backend inference (PyTorch, ONNX, TensorRT) with automatic backend selection

Usage
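The export chaining in the changes above can be sketched roughly as follows. This is a hedged illustration, not the actual rf-detr implementation: the function name `build_engine` and the injectable `run` parameter are assumptions introduced here so the sketch is self-contained; the real code invokes the `trtexec` CLI from `export()`.

```python
import subprocess
from pathlib import Path
from typing import Callable, Optional, Sequence

def build_engine(
    onnx_path: str,
    run: Optional[Callable[[Sequence[str]], None]] = None,
) -> str:
    """Convert an exported ONNX model to a TensorRT engine via `trtexec`.

    Mirrors the fix described in this PR: the helper must return the
    engine path (not None) so that export(tensorrt=True) can chain the
    ONNX export step into the engine build. `run` is injectable purely
    so this sketch can be exercised without TensorRT installed.
    """
    # output/inference_model.onnx -> output/inference_model.engine
    engine_path = str(Path(onnx_path).with_suffix(".engine"))
    cmd = ["trtexec", f"--onnx={onnx_path}", f"--saveEngine={engine_path}"]
    (run or (lambda c: subprocess.run(c, check=True)))(cmd)
    return engine_path  # the PR's fix: previously this returned None
```

With a helper of this shape, the one-step flag reduces to calling it on the freshly written ONNX file and reporting both artifact paths to the user.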