The depth computation for the KITTI dataset (RAFT-3D/scripts/kitti_submission.py, lines 74 to 75 at commit 877eb80) seems to be missing the baseline (0.54 m according to http://www.cvlibs.net/datasets/kitti/setup.php).
Depth can be computed from disparity as `depth = b * f / disparity`. Using just `depth = f / disparity` is fine for the synthetic dataset, since the baseline in Blender is set to 1.0. But how is the KITTI baseline incorporated into the KITTI depth computation? Are the disparity images pre-scaled to a baseline of 1 metre, or is the baseline somehow folded into `DEPTH_SCALE` (which is 0.1 here)? Also, why is the disparity taken from GA-Net rather than from the original dataset?
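For reference, the standard pinhole-stereo relation I am asking about can be sketched as follows. This is not the repository's code: the helper name, the zero-disparity guard, and the focal length value in the usage example are my own, and only the 0.54 m baseline comes from the KITTI setup page.

```python
import numpy as np

KITTI_BASELINE = 0.54  # metres, per http://www.cvlibs.net/datasets/kitti/setup.php

def depth_from_disparity(disparity, focal, baseline=KITTI_BASELINE):
    """Standard stereo relation: depth = baseline * focal / disparity.

    `disparity` is in pixels, `focal` in pixels, `baseline` in metres,
    so the returned depth is in metres. A small floor on disparity
    avoids division by zero for invalid pixels.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    return baseline * focal / np.maximum(disparity, 1e-6)

# Usage sketch (focal length here is an arbitrary illustrative value):
# depth_from_disparity(disparity_image, focal=721.5377)
```

With `baseline = 1.0` this degenerates to `depth = f / disparity`, which is why the plain formula works for the Blender-rendered data; my question is where the 0.54 factor (or its equivalent) enters the KITTI path.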