About ScanNet #44
Comments
I have one more question about ScanNet.

DeepV2D/data/scannet/scannet_test.txt (line 1 in eb362f2)

It seems that I cannot find how the authors extracted the image sequences from the ScanNet dataset (other papers state their extraction procedure explicitly). So I wonder how the authors extracted the images and depths from the original ScanNet v2.
I have one more question. Currently, I cannot reproduce the reported results (Table 2 of the main paper).
Hi, I used the split from the BA-Net paper in order to compare to BA-Net. The images/depths/poses were extracted from the .sens files with frame skip = 1, and I evaluated the depth/pose accuracy of DeepV2D on those samples. For DSO, I only reported results on the videos where DSO succeeds. Which results in Table 2 are you having trouble reproducing, and what results are you getting? Are you using the pretrained model or running the training script?
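For reference, a minimal sketch of that extraction step, assuming the SensorData reader from the official ScanNet SensReader tools (method names and the frame_skip argument are taken from that reader, so verify against your checkout; the scene name and output layout below are just placeholders):

```python
# Hypothetical sketch: export RGB / depth / poses from a ScanNet .sens file
# using the SensorData class from the official ScanNet SensReader tools.
# Not the author's actual pipeline; paths and layout are illustrative only.
import os
from SensorData import SensorData  # ScanNet/SensReader/python/SensorData.py

def export_scene(sens_path, output_path, frame_skip=1):
    os.makedirs(output_path, exist_ok=True)
    sd = SensorData(sens_path)
    # frame_skip=1 keeps every frame, matching the extraction described above.
    sd.export_color_images(os.path.join(output_path, 'color'), frame_skip=frame_skip)
    sd.export_depth_images(os.path.join(output_path, 'depth'), frame_skip=frame_skip)
    sd.export_poses(os.path.join(output_path, 'pose'), frame_skip=frame_skip)
    sd.export_intrinsics(os.path.join(output_path, 'intrinsic'))

if __name__ == '__main__':
    export_scene('scene0568_00.sens', 'scannet/scene0568_00', frame_skip=1)
```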
I think I am currently stuck on the subset where DSO succeeds.
I will post a .txt file with the cases where DSO succeeds later today or tomorrow. I have the logs from this experiment archived, but I will need to parse them to give you the exact cases. The evaluation used by BA-Net is performed on pairs of frames, but by default DSO only outputs the poses of keyframes, so I needed to use a modified version of DSO to ensure that poses for all frames were recorded. I ran DSO on the full sequences and recorded camera poses for all frames; missing poses indicated a tracking failure, so I only evaluated pairs of frames for which DSO produced results.
These are the poses I got from running DSO. You can use this script to parse the results; you should find that DSO has poses for 1665 of the 2000 pairs.
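The attached script is the authoritative reference; purely to illustrate the kind of check it performs, here is a hypothetical sketch that counts the test pairs for which DSO produced a pose for both frames (the file names, the TUM-style pose format, and the pair-list layout are assumptions, not the author's actual script):

```python
# Hypothetical sketch, not the author's script: count test pairs where DSO
# recorded a pose for both frames. Input formats are assumed.
import numpy as np

def load_dso_poses(path):
    """Parse a TUM-style trajectory: frame_id tx ty tz qx qy qz qw per line."""
    poses = {}
    with open(path) as f:
        for line in f:
            fields = line.strip().split()
            if len(fields) < 8:
                continue
            frame_id = int(float(fields[0]))
            poses[frame_id] = np.array(fields[1:8], dtype=np.float64)
    return poses

def count_tracked_pairs(pose_file, pair_file):
    poses = load_dso_poses(pose_file)
    tracked = 0
    with open(pair_file) as f:
        for line in f:
            i, j = map(int, line.split()[:2])
            # A missing pose for either frame indicates a DSO tracking failure,
            # so that pair is excluded from the DSO evaluation.
            if i in poses and j in poses:
                tracked += 1
    return tracked

if __name__ == '__main__':
    print(count_tracked_pairs('dso_poses.txt', 'scannet_test_pairs.txt'))
```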
Thanks for sharing the wonderful work.
I have a question about the usage of the scenes in the ScanNet dataset. While ScanNet itself provides train/val/test splits, it seems that this paper uses a specific set of scenes, listed in:

DeepV2D/data/scannet/scannet_test.txt (line 1 in eb362f2)
I want to double-check whether I correctly understand the author's intentions.