Hi, I noticed that the current implementation calculates metrics (like EPE) in this way:
```python
# In validation loop (Method 1: mean over this iteration's valid pixels)
epe = F.l1_loss(gt_disp[mask], pred_disp[mask], reduction='mean')
val_epe += epe.item()

# After loop: average of the per-iteration means
mean_epe = val_epe / valid_samples
```
This effectively computes: (EPE1/count1 + EPE2/count2 + ... + EPEn/countn) / n
This approach might be problematic when the number of valid pixels varies across iterations, as it gives equal weight to each iteration's mean regardless of how many valid pixels contributed to that mean.
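As a hypothetical worked example of that weighting issue (the numbers below are made up purely for illustration), two iterations with very different valid-pixel counts already make the two aggregation schemes disagree:

```python
import torch

# Hypothetical per-pixel errors from two validation iterations:
# iteration 1 has 4 valid pixels, iteration 2 has only 1.
errors = [torch.tensor([1.0, 1.0, 1.0, 1.0]),  # per-iteration mean = 1.0
          torch.tensor([5.0])]                 # per-iteration mean = 5.0

# Current scheme: average the per-iteration means (equal weight per iteration).
mean_of_means = sum(e.mean().item() for e in errors) / len(errors)  # (1.0 + 5.0) / 2 = 3.0

# Alternative scheme: total error divided by total valid-pixel count.
per_pixel_mean = sum(e.sum().item() for e in errors) / sum(e.numel() for e in errors)  # 9.0 / 5 = 1.8

print(mean_of_means, per_pixel_mean)  # 3.0 vs 1.8
```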
I think it might be more accurate to use:
```python
# In validation loop (Method 2: accumulate the error sum and the valid-pixel count)
epe = F.l1_loss(gt_disp[mask], pred_disp[mask], reduction='sum')
val_epe += epe.item()
total_valid_pixels += mask.sum().item()

# After loop: total error divided by total number of valid pixels
mean_epe = val_epe / total_valid_pixels
```
Which computes:

(EPE1 + EPE2 + ... + EPEn) / (count1 + count2 + ... + countn)

Could you clarify which approach is mathematically correct for computing the overall metric? Thanks!
If our goal is to evaluate "average performance per image" (giving equal weight to each sample regardless of its number of valid pixels), then Method 1 is the correct approach. It makes sense when we care about the model's average performance on individual samples rather than on individual pixels; Method 2 instead weights every valid pixel equally, so images with many valid pixels contribute more to the final number.
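For completeness, here is a minimal sketch of a validation loop that accumulates both quantities side by side, so either convention can be reported. The loader name `val_loader` and a batch size of 1 (one image per iteration, with the loader assumed to yield predictions alongside ground truth) are assumptions for illustration; `gt_disp`, `pred_disp`, and `mask` follow the snippets above:

```python
import torch
import torch.nn.functional as F

sum_per_image_epe = 0.0   # accumulator for Method 1 (equal weight per image)
sum_abs_error = 0.0       # accumulator for Method 2 (equal weight per valid pixel)
total_valid_pixels = 0
num_images = 0

with torch.no_grad():
    for gt_disp, pred_disp, mask in val_loader:   # assumed: one image per iteration
        n_valid = mask.sum().item()
        if n_valid == 0:
            continue                              # skip images with no valid ground truth
        abs_error = F.l1_loss(gt_disp[mask], pred_disp[mask], reduction='sum').item()

        sum_per_image_epe += abs_error / n_valid  # this image's mean EPE
        sum_abs_error += abs_error
        total_valid_pixels += n_valid
        num_images += 1

per_image_epe = sum_per_image_epe / num_images      # Method 1: per-image average
per_pixel_epe = sum_abs_error / total_valid_pixels  # Method 2: per-pixel average
```

Reporting both makes it explicit whether the metric weights each image or each valid pixel equally, which helps when comparing numbers across implementations that use different conventions.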