
Problem with reconstruction result #63

Open
lilly5791 opened this issue Jul 29, 2022 · 10 comments

@lilly5791

Thank you for sharing such a great deep-learning SLAM algorithm.
I also want to thank you for the recently modified code that writes the numpy pose results to reconstruction_path.

It works very well on my own data, and the demo result is almost the same as the ground truth!
However, I found a problem when I visualize the npy result with MATLAB.

As you can see in the picture,
the sfm_bench and rgbd_dataset_freiburg3_cabinet trajectory visualizations match the demo visualization,
but my own data's trajectory visualization is very different from the demo.

On my own data the demo result is very good, so I don't understand why the MATLAB visualization is bad even though the MATLAB code is the same.
Do you know why the npy result is different from the demo?

[Image: droid_slam_visualization]

@buenos-dan

buenos-dan commented Aug 2, 2022

Yes, I have the same problem as you.
[Image: traj]

Could you share a contact method (WeChat or e-mail) so we can discuss the details?

@lilly5791

Here's my email
[email protected]

The motion in your data and mine is different, but the DROID-SLAM result looks similar, especially the round and noisy part.

@buenos-dan

Try saving the data directly from visualization.py; the result is good. Why is the reconstruction bad? I guess the poses are optimized afterwards, so you should save the data as it comes in and keep updating it with dirty_index. PS: don't save the tensor object directly; use tensor.item() (or copy the values out) to save them.
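
In case it helps, here is a minimal sketch of how I read that suggestion (names like `record_dirty_poses` and `latest_poses` are mine, not from the repo): inside the visualization callback in droid_slam/visualization.py, before it clears video.dirty, collect whichever keyframes are marked dirty, convert their current poses to matrices the same way the visualizer does, and keep only the newest value per keyframe index.

```python
# Sketch (my naming, not the official DROID-SLAM code) of saving poses from
# the visualization loop: every time the callback sees dirty keyframes,
# overwrite their entries so the last optimized value wins.
import numpy as np
import torch
from lietorch import SE3

latest_poses = {}  # keyframe index -> 4x4 camera-to-world matrix

def record_dirty_poses(video):
    # call this inside animation_callback, before it resets video.dirty
    with torch.no_grad():
        dirty_index, = torch.where(video.dirty.clone())
        if len(dirty_index) == 0:
            return
        poses = torch.index_select(video.poses, 0, dirty_index)
        # same conversion visualization.py uses before drawing camera frustums
        Ps = SE3(poses).inv().matrix().cpu().numpy()
        for ix, P in zip(dirty_index.cpu().numpy(), Ps):
            latest_poses[int(ix)] = P  # keep only the newest estimate

def dump_poses(path="poses_from_vis.npy"):
    ixs = sorted(latest_poses)
    np.save(path, np.stack([latest_poses[i] for i in ixs]))
```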

@lilly5791

Thank you for your advice.
This worked well!

@pranav-asthana

Hi, can you please explain what you mean by "save the data in time and update data with dirty_index"?

@lilly5791

I think it's because video.poses is a torch.Tensor and the pose information keeps changing. I'm not used to torch, so I just saved every ix and pose to a txt file and only used the last lines for visualization.
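
Roughly what I did, as a sketch (file name and helper names are my own; convert tensor values to plain floats before writing, as suggested above): append every (ix, pose) pair as a text line while the SLAM runs, then keep only the last line per index when loading.

```python
# Sketch of the "append everything, keep the last entry per index" approach.
import numpy as np

def append_pose(ix, pose, path="all_poses.txt"):
    # pose: 7 values (tx ty tz qx qy qz qw); cast tensor entries to floats
    vals = " ".join(f"{float(v):.8f}" for v in pose)
    with open(path, "a") as f:
        f.write(f"{int(ix)} {vals}\n")

def load_final_poses(path="all_poses.txt"):
    final = {}                       # later lines overwrite earlier ones
    with open(path) as f:
        for line in f:
            parts = line.split()
            final[int(parts[0])] = np.array(parts[1:], dtype=np.float64)
    ixs = sorted(final)
    return ixs, np.stack([final[i] for i in ixs])
```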

@senhuangpku

Hello, I don't know how to use the result written to reconstruction_path. Could you tell me your solution? Thanks!

@pranav-asthana

This is what I am using. Just leaving this here in case it helps anyone.

  1. Images, poses, and depths for the keyframes are written to reconstruction_path. These can be used with a reconstruction algorithm (such as TSDF fusion; a good implementation is in Open3D) or any other MVS system; a rough sketch follows below.
  2. In demo.py, traj_est stores the pose for each frame of the input video after global refinement and trajectory filling. If you need this information, you can save it to allow doing things like MVS on each input frame.
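
Here is a hedged sketch of what I mean by TSDF fusion with Open3D. It assumes the .npy files written by save_reconstruction() (images.npy, disps.npy, poses.npy, intrinsics.npy), and that intrinsics are stored at 1/8 resolution as in the DepthVideo buffer; the file layout, scaling, and voxel parameters are assumptions you should check against your own reconstruction_path. Note also that monocular DROID-SLAM depths and poses are in an arbitrary scale, so voxel_length and depth_trunc will need tuning.

```python
# Sketch only: fuse saved keyframe depths/poses into a mesh with Open3D.
# Assumed layouts: images.npy (N,3,H,W uint8), disps.npy (N,H,W inverse depth),
# poses.npy (N,7 world-to-camera), intrinsics.npy (N,4 fx fy cx cy at 1/8 scale).
import numpy as np
import open3d as o3d
import torch
from lietorch import SE3

recon = "reconstructions/my_run"                      # hypothetical path
images = np.load(f"{recon}/images.npy")
disps  = np.load(f"{recon}/disps.npy")
poses  = np.load(f"{recon}/poses.npy")
intr   = np.load(f"{recon}/intrinsics.npy")

H, W = disps.shape[1:]
fx, fy, cx, cy = (intr[0] * 8.0).tolist()             # undo the 1/8 downscaling
intrinsic = o3d.camera.PinholeCameraIntrinsic(W, H, fx, fy, cx, cy)

volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.02, sdf_trunc=0.08,
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

# DROID-SLAM poses are world-to-camera, which is also Open3D's extrinsic
# convention, so no inversion is needed here.
extrinsics = SE3(torch.from_numpy(poses)).matrix().numpy().astype(np.float64)

for i in range(len(disps)):
    color = np.ascontiguousarray(images[i].transpose(1, 2, 0))
    depth = (1.0 / np.clip(disps[i], 1e-3, None)).astype(np.float32)
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        o3d.geometry.Image(color), o3d.geometry.Image(depth),
        depth_scale=1.0, depth_trunc=8.0, convert_rgb_to_intensity=False)
    volume.integrate(rgbd, intrinsic, extrinsics[i])

mesh = volume.extract_triangle_mesh()
mesh.compute_vertex_normals()
o3d.io.write_triangle_mesh("mesh.ply", mesh)
```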

@RitvikMandyam

> This is what I am using. Just leaving this here in case it helps anyone.
>
> 1. Images, poses, and depths for the keyframes are written to reconstruction_path. These can be used with a reconstruction algorithm (such as TSDF fusion; a good implementation is in Open3D) or any other MVS system.
> 2. In demo.py, traj_est stores the pose for each frame of the input video after global refinement and trajectory filling. If you need this information, you can save it to allow doing things like MVS on each input frame.

Could you share the code to do this, please? I'm working on a college project that uses DROID-SLAM, and I'd really like to show surfaces for my demo rather than a point cloud. I'm able to get pretty decent output by increasing filter_threshold in droid_slam/visualization.py, but it still looks terrible when you zoom into any part of the visualization.

@surajiitd

surajiitd commented Jun 11, 2023

@pranav-asthana Do you know why the reconstructions are saved before the global BA? They should be saved after the global BA, since that is where the poses and depths get refined globally.

DROID-SLAM/demo.py

Lines 131 to 134 in 8016d2b

```python
if args.reconstruction_path is not None:
    save_reconstruction(droid, args.reconstruction_path)

traj_est = droid.terminate(image_stream(args.imagedir, args.calib, args.stride))
```
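
One possible workaround (my own reordering, not something from upstream) is to call save_reconstruction after droid.terminate(), assuming the keyframe buffer still holds the refined poses and depths at that point:

```python
# Run global BA / trajectory filling first, then dump the refined keyframes.
traj_est = droid.terminate(image_stream(args.imagedir, args.calib, args.stride))

if args.reconstruction_path is not None:
    save_reconstruction(droid, args.reconstruction_path)
```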
