GPU support #31
as I've had CUDA 10 installed, per HaskTorch's readme this is just a matter of |
I issued the command. this only involves multiple complex technologies stacked together; what could possibly go wrong?! |
it built. I feel so confused. |
adding a simple test: |
(or even just running | ) yields an error: |
edit: same error on GCE.
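for reference, a minimal GPU smoke test along these lines might look like the sketch below (a hypothetical reconstruction, not the original snippet; it assumes the untyped `Torch` API's `Device`, `toDevice`, and `asTensor`):

```haskell
import Torch

-- sketch of a GPU smoke test: make a tensor on CPU, move it to the
-- GPU, run one arithmetic op there, and copy the result back.
main :: IO ()
main = do
  let t    = asTensor ([1.0, 2.0, 3.0] :: [Float]) -- lives on CPU
      tGpu = toDevice (Device CUDA 0) t            -- copy to GPU 0
  -- the addition launches a CUDA kernel; printing forces the copy back
  print $ toDevice (Device CPU 0) (tGpu + tGpu)
```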
|
if the |
the HaskTorch source mentions version |. I now asked in Slack:
Junji Hashimoto:
That compute capability could be it.
And unlike for Windows, there doesn't seem to be an easy CLI way to confirm that. ugh. I could maybe try LibTorch without HaskTorch as Junji suggested, but compute capability actually sounds like a plausible explanation, seeing as my drivers should be recent enough (?). Alternatively, I could retry this later on Google Cloud or something. |
GCE's Tesla T4 has compute capability 7.5, so definitely >= libtorch-1.4's required 3.5. in PyTorch, CUDA works fine there too:
|
on Slack, Junji mentioned exposing GPU stuff through Nix using |
prepending the |
will this same strategy work for me locally?! edit: yes! |
run-time crap: all LSTM calls using CUDA suddenly emit the following error: |
that is, after some R3NN call (|
this didn't happen when everything was just on CPU. basically, GPU still seems buggy? |
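one generic thing worth ruling out with errors like this is a CPU/GPU device mismatch between the inputs and the weights. a small sketch (assuming `Torch.Tensor`'s `device` accessor and derived `Eq`/`Show` on `Device`) that fails fast if tensors are spread across devices:

```haskell
import Torch

-- sketch: catch CPU/GPU tensor mixups before a CUDA kernel launch,
-- e.g. call `assertSameDevice "r3nn lstm" [input, h0, c0]`
-- (hypothetical names) right before the LSTM invocation.
assertSameDevice :: String -> [Tensor] -> IO ()
assertSameDevice tag ts = case map device ts of
  [] -> pure ()
  (d : ds)
    | all (== d) ds -> pure ()
    | otherwise ->
        error $ tag <> ": tensors on mixed devices: " <> show (d : ds)
```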
just retried now. seems to be working! only ~10% faster though, against 4x the cost. |
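for context on that ~10% figure, a crude wall-clock comparison of one op on CPU vs. GPU might look like this sketch (assuming hasktorch's `randnIO'` and `matmul`, plus derived `Show` on `Device`; copying the result back to CPU is what forces the GPU work to finish before the second timestamp, so this is rough, with no warm-up or averaging):

```haskell
import Data.Time.Clock (diffUTCTime, getCurrentTime)
import Torch

-- sketch: crude wall-clock timing of one 1024x1024 matmul on a device.
timeMatmul :: Device -> IO ()
timeMatmul dev = do
  a <- toDevice dev <$> randnIO' [1024, 1024]
  b <- toDevice dev <$> randnIO' [1024, 1024]
  a `seq` b `seq` pure () -- force host-to-device copies before timing
  start <- getCurrentTime
  let c = toDevice (Device CPU 0) (matmul a b) -- copy-back forces sync
  c `seq` do
    end <- getCurrentTime
    putStrLn $ show dev <> ": " <> show (diffUTCTime end start)

main :: IO ()
main = do
  timeMatmul (Device CPU 0)  -- CPU baseline
  timeMatmul (Device CUDA 0) -- GPU run
```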
with deep learning libraries working, the next step would be to ensure I can use GPUs with them as well for performance.