Question:
Hi, thank you for writing this nice PyTorch implementation of EfficientDet! The official repo reports an end-to-end latency (CNN plus pre- and post-processing) of 10 ms for D0 without TensorRT. Can your repo produce similar results?

Reply (1 comment, 1 reply):
@mtli Batch-size-1 latency will likely never be comparable without significant work or new optimization features in PyTorch. At batch sizes of around 16-32 and higher, the latencies are comparable.
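The reply's point, that per-image latency improves with batch size because fixed per-call overhead is amortized, can be checked with a simple timing harness. The sketch below is a minimal, hedged illustration: `toy_model` is a hypothetical stand-in with an assumed cost model (fixed overhead plus per-image work), not the actual EfficientDet forward pass. With a real PyTorch model you would call the network instead, keep the warm-up iterations, and add `torch.cuda.synchronize()` around the timed region when benchmarking on GPU.

```python
import time

def benchmark(model, make_batch, batch_sizes, iters=50, warmup=5):
    """Return per-image latency in milliseconds for each batch size."""
    results = {}
    for bs in batch_sizes:
        batch = make_batch(bs)
        for _ in range(warmup):      # warm-up: exclude one-time setup costs
            model(batch)
        start = time.perf_counter()
        for _ in range(iters):
            model(batch)
        total = time.perf_counter() - start
        results[bs] = total / iters / bs * 1000.0  # ms per image
    return results

# Hypothetical stand-in for a model call: a fixed per-call overhead
# plus a small per-image cost, mimicking why larger batches lower
# the per-image latency. Replace with a real forward pass in practice.
def toy_model(batch):
    time.sleep(0.001 + 0.0002 * len(batch))  # assumed cost model
    return batch

latency = benchmark(toy_model, lambda bs: [0] * bs, [1, 8, 32],
                    iters=10, warmup=2)
for bs, ms in sorted(latency.items()):
    print(f"batch {bs}: {ms:.2f} ms/image")
```

Under this assumed cost model, per-image latency drops as the batch grows, which is consistent with the reply's observation that batch-1 latency is the hardest case to match.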