
VP run time #38

Open
DrVijayK opened this issue Jun 14, 2019 · 3 comments

DrVijayK commented Jun 14, 2019

I observed that nv_full INT8 inference on the VP takes more time than FP16 inference.
NVDLA HW branch: nvdlav1, config: nv_full
NVDLA SW branch: latest, with the INT8 option in nvdla_compiler

Please let me know whether this is expected, and whether it will also happen when running on hardware, for example on an FPGA.
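
For reference, a rough sketch of the kind of FP16-vs-INT8 comparison described above. It is not taken from this thread: the model files, image, calibration table, output loadable name, and the exact nvdla_compiler/nvdla_runtime flags are assumptions, so adjust them to your own setup. In practice the compiler usually runs on the host while the runtime runs inside the VP's guest Linux; the single script below only illustrates the timing comparison.

```python
#!/usr/bin/env python3
"""Sketch: compile FP16 and INT8 loadables for nv_full and time one inference
of each on the VP. All paths, model files, and flags are assumptions for
illustration, not taken from this issue."""
import subprocess
import time

PROTOTXT = "model.prototxt"        # hypothetical Caffe model description
CAFFEMODEL = "model.caffemodel"    # hypothetical Caffe weights
CALIB_TABLE = "calib.json"         # hypothetical INT8 calibration table
IMAGE = "input.pgm"                # hypothetical test image


def compile_loadable(precision):
    """Invoke nvdla_compiler for the given precision (flags are assumed)."""
    cmd = [
        "./nvdla_compiler",
        "--prototxt", PROTOTXT,
        "--caffemodel", CAFFEMODEL,
        "--configtarget", "nv_full",
        "--cprecision", precision,
        "-o", ".",
    ]
    if precision == "int8":
        cmd += ["--calibtable", CALIB_TABLE]
    subprocess.run(cmd, check=True)
    # The compiler writes a .nvdla loadable into the output directory; the
    # exact file name depends on the profile, so this is a placeholder.
    return "basic.nvdla"


def time_inference(loadable):
    """Run one inference with nvdla_runtime and return wall-clock seconds."""
    start = time.monotonic()
    subprocess.run(
        ["./nvdla_runtime", "--loadable", loadable, "--image", IMAGE],
        check=True,
    )
    return time.monotonic() - start


if __name__ == "__main__":
    for precision in ("fp16", "int8"):
        loadable = compile_loadable(precision)
        print(f"{precision}: {time_inference(loadable):.1f} s wall-clock on the VP")
```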

@guodonglai

Hi, I was doing the same nv_full testing on the VP but got an error code, as shown in nvdla/sw#143. Could you provide some details on how you managed to run nv_full with INT8? Which loadable and image are you using?

@DrVijayK (Author)

Please follow the thread in nvdla/sw#140.

@Hassan313

@DrVijayK I am having the same problem. Have you found the answer to your question?
