Are the evalset_ and the evaluate(epoch); step necessary if we don't care about the loss rate for the online environment? #21
Comments
It seems that if you don't care about the eval metrics (since we already compute all of those metrics at test time), the evalset_ and the evaluate(epoch) step could be skipped.
Any suggestions for that? Thanks very much.
Hi, typically the averaged test metrics are slow to compute, so this shouldn't be a bottleneck, especially when parallelized, but you can definitely comment it out if it slows things down for you. @chtran @JelleZijlstra can make the call on whether it's worth adding gflags for this. -Alberto
Thanks very much, Alberto. What I want to do is just skip initializing the evalset (not the test set) in the production environment, since I don't need to watch the metrics for the production results. Reviewing the code, it seems that removing the evalset would have no effect on the final result, but I'm not sure whether that's actually the case. If I just remove the evalset and its metrics, it would save about half of the total time. Thanks very much.
It would be better if you could provide a gflag to choose whether to use the evalset or not. Thanks very much, Alberto.
What's more, it would also save a lot of memory.
Are the evalset_ and the
evaluate(epoch);
step necessary if we don't care about the loss rate for the online environment?
Without the evalset, we can save a lot of time.