
Does not use distributed training #80

Open
wujinhu1999 opened this issue Aug 30, 2023 · 2 comments

Comments

@wujinhu1999

Hello, I would like to ask: if I do not use distributed training and only use one GPU for training, will it affect the results?

@niujinshuchong
Member

Hi, we use a single GPU for training in all the experiments in the paper except the Tanks and Temples dataset with high-resolution cues (C.3 in the paper). In general, though, I think using a larger batch size (more GPUs) will give better results. In simple scenes like DTU or Replica, the difference will not be significant.
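
For context, here is a minimal, generic PyTorch sketch (not this repository's actual training code; the model and data are placeholders) of why training on more GPUs amounts to a larger effective batch size under DDP: each process keeps the same per-GPU batch size, and gradients are averaged across processes at every step.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def main():
    # Detect whether we were launched with torchrun; otherwise fall back
    # to plain single-GPU training.
    distributed = "RANK" in os.environ
    if distributed:
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)
    device = torch.device("cuda")

    model = torch.nn.Linear(16, 1).to(device)  # placeholder model
    if distributed:
        model = DDP(model, device_ids=[local_rank])

    dataset = TensorDataset(torch.randn(1024, 16), torch.randn(1024, 1))
    sampler = DistributedSampler(dataset) if distributed else None
    # Per-GPU batch size is 32 either way; with N GPUs the effective
    # batch size per optimizer step becomes N * 32.
    loader = DataLoader(dataset, batch_size=32, sampler=sampler,
                        shuffle=(sampler is None))

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        loss = torch.nn.functional.mse_loss(model(x), y)
        optimizer.zero_grad()
        loss.backward()  # DDP averages gradients across GPUs here
        optimizer.step()

    if distributed:
        dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Run `python train.py` for single-GPU training, or e.g. `torchrun --nproc_per_node=4 train.py` for 4 GPUs; the training loop is identical in both cases, only the effective batch size changes.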

@wujinhu1999
Author

Thank you for your reply!
