It looks like you are plotting the translation component of the camera poses. The poses describe how 3D world points get mapped into each image, i.e. they are world-to-camera (w2c) transforms. To get the world coordinates of the camera, you need to invert the poses to convert them to camera-to-world (c2w) format. Also, DeepV2D only estimates depth up to a scale factor, so you will need to rescale the trajectory as well.
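If it helps, here is a minimal sketch of that inversion, assuming the poses are available as 4x4 world-to-camera matrices (adapt if they are stored as quaternion + translation instead):

```python
import numpy as np

def camera_centers_from_w2c(w2c_poses):
    """Invert world-to-camera (w2c) poses to camera-to-world (c2w)
    and return the camera centers in world coordinates."""
    centers = []
    for T_w2c in w2c_poses:
        T_c2w = np.linalg.inv(T_w2c)   # c2w is the inverse of w2c
        centers.append(T_c2w[:3, 3])   # c2w translation = camera position in the world
    return np.stack(centers)           # (N, 3) array of (x, y, z) camera positions
```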
You can get a scale correction by doing scale = np.sum(gtruth_xyz * pred_xyz) / np.sum(pred_xyz ** 2), where gtruth_xyz and pred_xyz are the ground-truth and predicted (x, y, z) coordinates of the camera over the full trajectory.
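This is the closed-form least-squares solution for the scale s minimizing ||gtruth_xyz - s * pred_xyz||^2. As a sketch, applied to (N, 3) arrays of camera centers (the names here are just for illustration):

```python
import numpy as np

def align_scale(gtruth_xyz, pred_xyz):
    """Least-squares scale factor mapping the predicted trajectory onto the ground truth.
    Both arguments are (N, 3) arrays of camera centers over the full trajectory."""
    scale = np.sum(gtruth_xyz * pred_xyz) / np.sum(pred_xyz ** 2)
    return scale * pred_xyz  # rescaled predicted trajectory
```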
However, right now slam.poses only stores the poses for the keyframes. To evaluate RPE, you will need poses for all the frames.
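On the evo side, one option is to dump the inverted (and rescaled) c2w trajectory in TUM format so evo_rpe can read it. A sketch, assuming you have per-frame timestamps and 4x4 c2w matrices (scipy is used only for the rotation-to-quaternion conversion):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def write_tum_trajectory(path, timestamps, c2w_poses):
    """Write a trajectory in TUM format: 'timestamp tx ty tz qx qy qz qw' per line."""
    with open(path, "w") as f:
        for t, T in zip(timestamps, c2w_poses):
            tx, ty, tz = T[:3, 3]
            qx, qy, qz, qw = Rotation.from_matrix(T[:3, :3]).as_quat()  # scipy order: (x, y, z, w)
            f.write(f"{t:.6f} {tx} {ty} {tz} {qx} {qy} {qz} {qw}\n")
```

If I remember correctly, evo_rpe and evo_ape also accept alignment options (--align, --correct_scale), so you can alternatively let evo handle the scale correction during evaluation.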
Hi, thank you for your nice work.
I'm wondering how you obtained the results reported in the paper.
I ran the code, extracted the poses from slam.poses, and then used evo_rpe for evaluation.
But the metrics from evo_rpe don't match the paper's numbers, and the aligned trajectory doesn't look right either.
May I ask if there's some conversion I missed?
Thank you.