Is dynamic_fixed_point-16 quantization the same as fp16 quantization? After quantizing the model this way, I used ssd_mobilenet.cpp under the Linux directory to measure inference time: the uint8 mobilenetv2 takes 6 ms, while the fp16 mobilenetv2 takes 50 ms+. That is a huge gap. Does an fp16 model need different parameter settings at inference time?
dynamic_fixed_point-16 is int16, I believe, not fp16.
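For reference, the quantization dtype is chosen in the RKNN Toolkit Python config step. This is a minimal sketch assuming the 1.x API and the file names from the toolkit's mobilenet_v2 example (the mean values, paths, and dataset file are placeholders); please check the option names against your toolkit version. The point is that 'dynamic_fixed_point-16' selects 16-bit fixed-point (int16) quantization, while the 6 ms uint8 result corresponds to 'asymmetric_quantized-u8'; neither option is fp16.

```python
from rknn.api import RKNN

rknn = RKNN()

# Select the quantized dtype here. Common values (RKNN Toolkit 1.x, assumed):
#   'asymmetric_quantized-u8'  -> uint8
#   'dynamic_fixed_point-8'    -> int8
#   'dynamic_fixed_point-16'   -> int16 (NOT fp16)
rknn.config(channel_mean_value='103.94 116.78 123.68 58.82',
            reorder_channel='0 1 2',
            quantized_dtype='dynamic_fixed_point-16')

# Model files as in the toolkit's mobilenet_v2 example (placeholder paths)
rknn.load_caffe(model='./mobilenet_v2.prototxt',
                proto='caffe',
                blobs='./mobilenet_v2.caffemodel')

# do_quantization=True applies the dtype selected above during the build step
rknn.build(do_quantization=True, dataset='./dataset.txt')
rknn.export_rknn('./mobilenet_v2_int16.rknn')
rknn.release()
```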
The rknn_mobilenet I tested takes about 20 ms. How did you get it down to 6 ms? Could it be that my model isn't actually running on the NPU?