Installed with CUDA support but doesn't detect GPU runtime #1421
Comments
Let's start by verifying that your PyTorch installation is working correctly on the GPU. Can you share the output of the following? We should hopefully see that PyTorch is built with CUDA support, but I expect it may be missing here. If that is the case, I would suggest reinstalling PyTorch.
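(The exact command requested isn't preserved in this extract; a typical check along these lines would confirm whether the installed PyTorch was built with CUDA and whether a GPU is actually visible. This is a sketch, not necessarily the command the maintainer asked for.)

```sh
# Hypothetical verification commands; run them on a GPU node, not the login node.
python -c "import torch; print(torch.version.cuda)"         # CUDA version PyTorch was built with (None = CPU-only build)
python -c "import torch; print(torch.cuda.is_available())"  # True only when a GPU and a working driver are visible
python -c "import torch; print(torch.__config__.show())"    # full build configuration, including USE_CUDA
```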
Hello, I installed bitsandbytes==0.42.0 as you suggested, but at runtime it reports that bitsandbytes does not support CUDA. When I run python -c 'import torch; print(torch.__config__.show())', the CUDA status is ON and torch.cuda.is_available() returns true. Why is that?
@matthewdouglas I found the problem. First, as you mentioned, I was not on a GPU node. On the cluster system that I use, I was running those commands on the login node. Although the CUDA modules were loaded, the correct output has to be obtained on a GPU node where the driver is actually running. So here is the output:
I have installed bitsandbytes via conda, so I only see the CUDA 12.0 library file:
Based on my searches, there is no conda package for CUDA 12.1, which is the module I have loaded. So I guess I have to compile from source.
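(For reference, a from-source build against a specific CUDA toolkit usually looks roughly like the sketch below. The repository URL and CMake flags are assumptions based on the project's documented workflow; the exact steps depend on your bitsandbytes version and on which CUDA module is loaded.)

```sh
# Sketch only: build bitsandbytes against the CUDA toolkit loaded on the GPU node
# (e.g. the 12.1 module); check the installation docs for your exact version.
git clone https://github.com/bitsandbytes-foundation/bitsandbytes.git
cd bitsandbytes
cmake -DCOMPUTE_BACKEND=cuda -S .
make
pip install -e .
```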
OK, I installed from source for the CUDA version on the GPU node and it works now. Thank you.
I have installed bitsandbytes with CUDA support, but I receive an error message that bitsandbytes was not compiled with GPU support. I am quite confused by the following output: while it detects the CUDA runtime, it complains about missing GPU support.
How can I fix that?
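(When debugging this kind of mismatch, the package's own diagnostic entry point is usually the quickest check, assuming a reasonably recent bitsandbytes release. Run it in the same environment and on the GPU node.)

```sh
# Prints which CUDA runtime and native binary bitsandbytes picked up,
# which helps distinguish a CPU-only build from a missing/mismatched CUDA library.
python -m bitsandbytes
```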