predict call for regression #100

Open
pat-alt opened this issue Jul 14, 2024 · 1 comment
Comments

@pat-alt
Member

pat-alt commented Jul 14, 2024

Double-check whether it's reasonable to return the GLM predictive when calling predict on regression objects (in line with the torch convention, AFAIK), or whether we should instead incorporate observational noise (which is typically what's shown on plots).

I'll have a look at this next week, but cc @Rockdeldiablo: feel free to continue the discussion from Teams below.
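
For reference, a minimal sketch of the two options, assuming a Gaussian likelihood; `predict`, `f_mu`, `f_var`, and `sigma_noise` are hypothetical names here, with `f_var` standing for the GLM/linearized predictive variance:

```python
def predict(f_mu, f_var, sigma_noise, include_noise=False):
    """Mean and variance of the regression predictive.

    f_mu, f_var   : GLM predictive mean/variance of the latent function
                    (epistemic uncertainty only).
    sigma_noise   : estimated observation-noise std (aleatoric part).
    include_noise : if True, return the full posterior predictive,
                    var = f_var + sigma_noise ** 2.
    """
    if include_noise:
        return f_mu, f_var + sigma_noise ** 2  # what plots typically show
    return f_mu, f_var  # glm predictive, the torch convention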

@pasq-cat
Member

pasq-cat commented Jul 14, 2024

Mah. If I had a black box and wanted an idea of where it is uncertain, I would not omit the contribution of either epistemic or aleatoric uncertainty, because they signal two different things: the first shows where the NN lacks data (which a researcher may want to know), the second is the intrinsic stochasticity of the measurements. For example, if there were a gap in the data right in the middle, then with only the aleatoric uncertainty the NN would give overconfident predictions to the user, and in that case the "trustworthiness" is lost. If instead I use only the epistemic uncertainty, I will get overconfident measures where there are a lot of data points. The plot is good for humans, but if the neural network has to be integrated into an IoT device or a pipeline, the results have to be reported numerically by the predict function.
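
One way to serve both use cases would be to report the two components separately, so a pipeline can consume them numerically and still recover the full predictive by summing. A sketch under the same assumptions as above (all names hypothetical):

```python
import numpy as np

def predict_decomposed(f_mu, f_var, sigma_noise):
    """Report both uncertainty components numerically.

    epistemic : f_var, grows where the model lacks data (e.g. in a gap).
    aleatoric : sigma_noise ** 2, intrinsic noise of the measurements.
    Under a Gaussian likelihood the total predictive variance is their sum.
    """
    f_var = np.asarray(f_var, dtype=float)
    aleatoric = np.full_like(f_var, sigma_noise ** 2)
    return {
        "mean": np.asarray(f_mu, dtype=float),
        "epistemic": f_var,
        "aleatoric": aleatoric,
        "total": f_var + aleatoric,
    }
```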
