Hi,
Thanks for uploading the code for this research paper.
I am able to run the demo code for NYU successfully; however, the output is a single depth image. The same goes for the demo_uncalibrated script, where the entire video is provided as input.
Shouldn't the output be multiple depth maps, one for each video frame, or something similar, as described in the paper?
If you run with --mode=global, depth maps for all frames will be predicted. You can also try the SLAM demos for predicting depth over longer sequences.
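For reference, here is a minimal sketch of how per-frame depth predictions could be written out as individual images, assuming the predictions are available as a NumPy array of shape (num_frames, H, W) in metres. The `depths` variable, scaling convention, and output layout are illustrative assumptions, not the repo's actual code:

```python
import os
import numpy as np
from PIL import Image

def save_depth_maps(depths, output_dir="output"):
    """Save each predicted depth map as a 16-bit PNG (in millimetres).

    `depths` is assumed to be a NumPy array of shape (num_frames, H, W)
    with depth in metres; this mirrors a common convention and is not
    necessarily the repo's internal format.
    """
    os.makedirs(output_dir, exist_ok=True)
    for i, depth in enumerate(depths):
        # Scale metres -> millimetres and store as uint16, a common
        # lossless way to keep depth precision in a PNG.
        depth_mm = np.clip(depth * 1000.0, 0, 65535).astype(np.uint16)
        Image.fromarray(depth_mm).save(
            os.path.join(output_dir, f"depth_{i:04d}.png"))

if __name__ == "__main__":
    # Dummy data standing in for real predictions.
    dummy = np.random.uniform(0.5, 5.0, size=(8, 480, 640))
    save_depth_maps(dummy)
```

Saving one PNG per frame index makes it easy to check whether a run actually produced per-frame depth or only a single image.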
Hi @zachteed,
Thanks for the reply. I ran with --mode=global, but I couldn't find depth maps for all the frames in the output folder. Only a single depth.png is created, irrespective of which mode I run.
Also, is the same true for uncalibrated videos, and is it possible to provide a stereo video as input?
Where would all the output depth maps be stored?
@zachteed Hi, I found that the SLAM demo requires the camera intrinsics to run. Would it work without intrinsics, like demo_uncalibrated.py does?
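For context, the intrinsics the SLAM demo expects are the usual pinhole parameters (focal lengths and principal point). Below is a rough sketch of assembling them; the focal-length guess is a common heuristic for uncalibrated footage, not this repo's estimation procedure:

```python
import numpy as np

def pinhole_intrinsics(fx, fy, cx, cy):
    """Assemble a 3x3 pinhole camera intrinsics matrix K."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

# For an uncalibrated camera, a common rough guess is a focal length of
# about 1.2x the image width with the principal point at the image
# centre (a heuristic assumption, not the repo's method).
height, width = 480, 640
f_guess = 1.2 * width
K = pinhole_intrinsics(f_guess, f_guess, width / 2.0, height / 2.0)
print(K)
```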