
GPU Memory Occupation #25

Open
rruixxu opened this issue Dec 28, 2020 · 1 comment

Comments


rruixxu commented Dec 28, 2020

I am currently adapting your code to my own project, but I found that the GPU memory usage is exceedingly large. I tested the memory usage of the Lovász-Softmax loss, and it consumed nearly 2000 MB with batch size 1, which does not seem normal. Have you investigated the reasons behind the unusually large GPU memory usage? Could you point out which parts of the network occupy the most memory besides the Lovász-Softmax loss?

edwardzhou130 (Owner) commented

I think the GPU memory consumption mostly comes from two places:

  1. The raw point cloud input. We use the unprojected point cloud as the input; in the SemanticKITTI case, each scan has more than 100,000 points.
  2. The grid size. The output of our model has shape (B×C×H×W×Z), which can be huge, especially during the backward pass, for our default setting --grid_size 480 360 32. Changing it to --grid_size 320 240 32 or even smaller will help a lot if GPU memory is the problem.
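To see why the grid size dominates, here is a minimal back-of-the-envelope sketch of the memory taken by just one dense (B, C, H, W, Z) float32 output tensor. The channel count C=20 is an assumption for illustration (SemanticKITTI has 19 semantic classes plus an ignore label); the actual model also stores many intermediate activations for backpropagation, so real usage is a multiple of this figure.

```python
def output_tensor_mb(batch, channels, grid_size, bytes_per_elem=4):
    """Memory (MiB) of a single dense float32 tensor of shape
    (batch, channels, H, W, Z). Activations kept for the backward
    pass multiply this several times over."""
    h, w, z = grid_size
    return batch * channels * h * w * z * bytes_per_elem / (1024 ** 2)

# Assumed C=20 (19 SemanticKITTI classes + ignore) -- illustrative only.
default_mb = output_tensor_mb(1, 20, (480, 360, 32))  # default --grid_size
smaller_mb = output_tensor_mb(1, 20, (320, 240, 32))  # reduced --grid_size

print(f"480x360x32 output: {default_mb:.1f} MiB")  # ~422 MiB
print(f"320x240x32 output: {smaller_mb:.1f} MiB")  # ~188 MiB
```

So shrinking the grid from 480×360×32 to 320×240×32 cuts this single tensor (and every intermediate activation of the same spatial shape) by more than half, which is why it is the most effective knob when memory is tight.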
