Double-check whether it's reasonable to return the GLM predictive when calling `predict` on regression objects (in line with the torch convention, AFAIK), or whether we should instead incorporate observational noise (which is what's typically shown on plots).
I'll have a look at this next week, but cc @Rockdeldiablo: feel free to continue the discussion from Teams below.
Mah. If I had a black box and wanted an idea of where it is uncertain, I would not omit the contribution of either epistemic or aleatoric uncertainty, because they signal two different things: the first tells you where the NN lacks data (and a researcher may want to know that), the second is the intrinsic stochasticity of the measurements. For example, if you had a gap in the data right in the middle, then with only the aleatoric uncertainty the NN would give the user overconfident predictions in that gap, and the "trustworthiness" is lost. If instead I use only the epistemic uncertainty, I will have overconfident measures where there are a lot of data points. The plot is good for humans, but if the neural network has to be integrated into an IoT device or a pipeline, the results have to be reported numerically by the `predict` function.
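To make the point concrete, here is a minimal sketch of what "reporting both contributions numerically" could look like. The function name and signature are hypothetical, not the library's actual API; it just applies the law of total variance, adding the aleatoric noise variance on top of the epistemic (e.g. GLM/Laplace) predictive variance before forming an interval.

```python
import math

def predictive_interval(f_mean, f_var, sigma_noise, z=1.96):
    """Hypothetical helper: combine epistemic and aleatoric uncertainty.

    f_mean      -- predictive mean of the network output
    f_var       -- epistemic variance (e.g. from a GLM predictive)
    sigma_noise -- estimated observation-noise standard deviation (aleatoric)
    z           -- normal quantile for the interval width
    """
    # Law of total variance: total = epistemic + aleatoric
    total_var = f_var + sigma_noise ** 2
    half_width = z * math.sqrt(total_var)
    return f_mean - half_width, f_mean + half_width

# In a data gap, f_var is large, so the interval stays wide even if the
# noise is small; in a dense-but-noisy region, sigma_noise keeps it wide
# even though f_var shrinks. Dropping either term gives the overconfident
# behaviour described above.
lo, hi = predictive_interval(f_mean=0.0, f_var=0.04, sigma_noise=0.3)
```

Returning the total variance (or both components separately) from `predict` would let downstream pipelines consume it numerically instead of reading it off a plot.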