
Output of Demo scripts #25

Open
aakash26 opened this issue Sep 2, 2020 · 3 comments

aakash26 commented Sep 2, 2020

Hi,
Thanks for uploading the code for this research paper.
I was able to run the NYU demo successfully, but the output is a single depth image; the same goes for the demo_uncalibrated script, where the entire video is provided as input.
Shouldn't the output be multiple depth maps, one per video frame, as described in the paper?

zachteed (Collaborator) commented Sep 4, 2020

If you run with --mode=global, depth maps for all frames will be predicted. You can also try the SLAM demos to predict depth for longer sequences.

aakash26 (Author) commented Sep 6, 2020

Hi @zachteed,
Thanks for the reply. I ran with --mode=global, but I couldn't find depth maps for all the frames in the output folder; only a single depth.png is created, regardless of which mode I run.
Also, is the behavior the same for uncalibrated videos, and is it possible to provide a stereo video as input?
Where would all the output depth maps be stored?

Regards
Aakash
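
For what it's worth, if the demo only ever writes a single depth.png, a small post-processing loop can write one file per frame instead. This is only a sketch: the `depths` list, the `save_depth_maps` helper, and the file-naming scheme are assumptions for illustration, not part of the repo's actual scripts.

```python
import os
import numpy as np

def save_depth_maps(depths, out_dir="output"):
    # Save each predicted depth map to its own file instead of
    # overwriting a single depth.png. `depths` is assumed to be an
    # iterable of HxW numpy arrays, one per video frame.
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for i, depth in enumerate(depths):
        path = os.path.join(out_dir, "depth_%04d.npy" % i)
        np.save(path, depth)
        paths.append(path)
    return paths

# Dummy depth maps standing in for the network's per-frame predictions
dummy = [np.random.rand(480, 640).astype(np.float32) for _ in range(4)]
print(save_depth_maps(dummy, out_dir="depth_out"))
```

Saving raw .npy arrays keeps the metric depth values intact; converting to PNG would require rescaling to 8- or 16-bit first.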

@Tord-Zhang

@zachteed Hi, I found that the SLAM demo requires the camera intrinsics. Would it work without intrinsics, like demo_uncalibrated.py does?
