I observed that nv_full INT8 inference on the VP takes more time than FP16 inference.
NVDLA HW branch: nvdlav1, config: nv_full
NVDLA SW branch: latest, with the INT8 option in nvdla_compiler
Please let me know whether this is expected, and whether the same slowdown would also occur when running on hardware, e.g. on an FPGA.
Hi, I was doing the same nv_full testing on the VP but got an error code, as shown in nvdla/sw#143. Could you share some details on how you managed to run nv_full with INT8? Which loadable and image are you using?