
RuntimeError: CUDA out of memory #258

Closed
hitmusic100 opened this issue Aug 28, 2022 · 3 comments

Comments


hitmusic100 commented Aug 28, 2022

I have Colab Pro, but I keep getting this message when using 5b_lyrics:

RuntimeError: CUDA out of memory. Tried to allocate 1.39 GiB (GPU 0; 14.76 GiB total capacity; 12.25 GiB already allocated; 1.08 GiB free; 12.61 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

but I cannot find how to fix the error anywhere.

Would you know how to fix this error (without switching from 5b_lyrics to 1b_lyrics)? Thanks.

I think this might be an elephant in the room, so I have created this new issue referencing the same problem here (#145 (comment)).
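The error message itself suggests trying max_split_size_mb to reduce fragmentation. A minimal sketch of that workaround for a Colab notebook cell (the 128 MB value is illustrative, not a recommendation from this thread; the variable must be set before torch initialises CUDA):

```python
import os

# Must run before `import jukebox` / before any torch CUDA call,
# i.e. in the very first cell of the notebook.
# 128 is an example split size in MB, chosen arbitrarily here.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

This only mitigates fragmentation; if the model genuinely needs more VRAM than the card has, it will still OOM.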

@hitmusic100 (Author)

hps.n_samples = 1 if model in ('5b', '5b_lyrics') else 8
max_batch_size = 1 if model in ('5b', '5b_lyrics') else 16

I have set the above variables to 1 as an experiment, as per https://github.com/openai/jukebox/blob/master/README.md
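For reference, the two knobs above control how much activation memory sampling uses: fewer simultaneous samples and a smaller batch mean a smaller peak allocation. A sketch of the logic (the helper function is my own wrapper for illustration, not part of Jukebox; the values mirror the sampling notebook):

```python
def memory_settings(model: str) -> tuple:
    """Return (n_samples, max_batch_size) for a given Jukebox model name.

    The large 5B models barely fit on a 16 GB card, so they get a single
    sample and a batch size of one; the 1B models fit more per batch.
    """
    if model in ('5b', '5b_lyrics'):
        return 1, 1
    return 8, 16

n_samples, max_batch_size = memory_settings('5b_lyrics')
```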

@hitmusic100 (Author)

Solved: Colab needs to allocate a Tesla P100.

@impactcolor

@hitmusic100 Hi, how did you get Colab to allocate a Tesla P100? Or was there a change you made to the code?
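You cannot request a specific GPU model directly; Colab assigns whatever is available, and the usual approach is to disconnect and delete the runtime, reconnect, and check what you were given. A hedged sketch for checking the allocated GPU (gpu_name is a hypothetical helper for this thread, not a Colab API):

```python
import subprocess

def gpu_name() -> str:
    """Report the GPU Colab allocated, or a fallback string if none is visible."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, check=False,
        )
        return out.stdout.strip() or "no GPU visible"
    except FileNotFoundError:
        # nvidia-smi is not installed on CPU-only runtimes
        return "no GPU visible"

print(gpu_name())  # e.g. "Tesla P100-PCIE-16GB" on a P100 runtime
```

If it reports a smaller card (e.g. a T4 or K80), repeat the disconnect/reconnect cycle until a 16 GB card appears.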
