Problem with reconstruction result #63
Here's my email. The movement is different between your data and mine, but the DROID-SLAM result looks similar, especially the round, noisy part.
Try saving the data directly from visualization.py; the result is good. Why is the reconstruction bad? I guess the poses get optimized afterwards, so you should save the data as it comes in and update it with dirty_index. PS: don't save the tensor directly; use tensor.item() to save the value.
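The idea above can be sketched roughly as follows. Note the assumptions: DROID-SLAM keeps poses in `video.poses` (a torch tensor) and flags re-optimized frames via a dirty index; here plain numpy arrays and a list stand in for those internals, and the helper name is hypothetical.

```python
import numpy as np

def update_saved_poses(saved, poses, dirty_index):
    """Overwrite the saved pose for every frame the optimizer just
    touched, converting each entry to a plain Python float (the
    analogue of calling tensor.item() instead of saving the tensor)."""
    for ix in dirty_index:
        # poses[ix] is a 7-vector (translation + quaternion) in DROID-SLAM
        saved[int(ix)] = [float(v) for v in poses[ix]]
    return saved

saved = {}
poses = np.zeros((3, 7))
poses[1] = [1, 2, 3, 0, 0, 0, 1]          # frame 1 optimized once
saved = update_saved_poses(saved, poses, [1])
poses[1] = [1.5, 2.5, 3.5, 0, 0, 0, 1]    # frame 1 refined again later
saved = update_saved_poses(saved, poses, [1])
print(saved[1])  # the latest optimized values win
```

Saving plain floats (rather than tensor references) matters because a stored tensor view would keep changing as the optimizer updates `video.poses` in place.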
Thank you for your advice.
Hi, can you please explain what you mean by "save the data in time and update the data with dirty_index"?
I think it's because video.poses is a torch.Tensor and the pose information keeps changing. I'm not used to torch, so I just saved every ix and pose to a txt file and used only the last lines for visualization.
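The "save everything, keep only the last lines" idea can be sketched like this. The log format (one "ix tx ty tz qx qy qz qw" line per optimizer update) is an assumption for illustration, not DROID-SLAM's own output:

```python
def latest_poses(lines):
    """Return the last logged pose for each frame index."""
    latest = {}
    for line in lines:
        parts = line.split()
        ix, pose = int(parts[0]), [float(v) for v in parts[1:]]
        latest[ix] = pose  # later lines overwrite earlier ones
    return latest

log = [
    "0 0 0 0 0 0 0 1",
    "1 1 0 0 0 0 0 1",
    "0 0.1 0 0 0 0 0 1",  # frame 0 was re-optimized later
]
print(latest_poses(log)[0])  # only the final entry for frame 0 survives
```

This gives the same effect as updating in place with dirty_index, just with more disk traffic.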
Hello, I have no idea how to use the result of reconstruction_path. Could you tell me your solution? Thanks!
This is what I am using. Just leaving this here in case it helps anyone.
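For anyone looking for a starting point, here is a minimal sketch of loading the saved arrays with numpy. The file name `poses.npy` and the (N, 7) translation-plus-quaternion layout are assumptions; check what your version of DROID-SLAM actually writes to reconstruction_path.

```python
import os
import tempfile
import numpy as np

# Round-trip a fake pose array to show the numpy side of the workflow.
out = os.path.join(tempfile.mkdtemp(), "poses.npy")

fake_poses = np.zeros((5, 7))
fake_poses[:, 6] = 1.0                 # identity quaternions (x, y, z, w)
np.save(out, fake_poses)

poses = np.load(out)                   # assumed shape (N, 7)
translations = poses[:, :3]            # the camera trajectory
print(poses.shape)
```

The same `np.load` pattern applies to the other arrays (depths, intrinsics, timestamps) that the reconstruction step saves.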
Could you share the code to do this, please? I'm working on a college project that uses DROID-SLAM, and I'd really like to show surfaces for my demo rather than a point cloud. I'm able to get pretty decent output by increasing filter_threshold in droid_slam/visualization.py, but it still looks terrible when you zoom into any part of the visualization. |
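On the "looks terrible when you zoom in" point: before meshing a point cloud into surfaces (e.g. with Open3D's Poisson reconstruction), removing statistical outliers usually helps a lot with that close-up noise. Below is a self-contained numpy sketch of that filtering step; the function name and thresholds are illustrative, and a real pipeline would use a KD-tree instead of brute-force distances.

```python
import numpy as np

def remove_outliers(points, k=4, std_ratio=1.0):
    """Drop points whose mean distance to their k nearest neighbours is
    more than std_ratio standard deviations above the global mean."""
    # Brute-force pairwise distances (fine for small clouds).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # column 0 is the self-distance
    keep = mean_knn <= mean_knn.mean() + std_ratio * mean_knn.std()
    return points[keep]

rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 3))           # a dense blob of points
cloud = np.vstack([cloud, [[50, 50, 50]]])  # plus one far-away outlier
clean = remove_outliers(cloud)
print(len(cloud), len(clean))               # the outlier is filtered out
```

Raising filter_threshold in droid_slam/visualization.py controls which points get emitted in the first place; a pass like this cleans up what remains before surface reconstruction.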
@pranav-asthana Do you know why the reconstructions are saved before global BA? They should be saved after global BA, since the poses and depths are refined globally there. (Lines 131 to 134 in 8016d2b)
Thank you for sharing such a great deep-learning SLAM algorithm.
I also thank you for the recently modified code, which outputs the numpy pose results to reconstruction_path.
It works very well on my own data, and the demo result is almost the same as the ground truth!
However, I found a problem when visualizing the npy result with Matlab.
As you can see in the picture,
the sfm_bench and rgbd_dataset_freiburg3_cabinet trajectory visualizations match the demo visualization.
However, for my own data, the trajectory visualization and the demo visualization are very different.
On my own data the demo result is very good, but I don't understand why the Matlab visualization is bad even though the Matlab code is the same.
Do you know why the npy result is different from the demo?