
feat(wrapper): add gpu support in containers #254

Open
Fastiraz wants to merge 1 commit into ThePorgs:dev from Fastiraz:dev

Conversation

@Fastiraz

Description

Add GPU support using the torch Python module and Docker's device_requests argument (--gpus on the CLI).
To do this, I implemented the isGPUAvailable function, which checks for GPU availability and returns the appropriate value for Docker's --gpus argument. Finally, I added docker_args["device_requests"] to enable GPU support when creating a container.
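The approach described above can be sketched roughly as follows. This is a minimal sketch, not the PR's actual code: the function names and structure here are illustrative, and the device_requests entry is written as a plain dict mirroring docker-py's `docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])` so the sketch has no hard docker-py dependency.

```python
from typing import Optional


def is_gpu_available() -> Optional[str]:
    """Return a value for Docker's --gpus argument, or None if no usable GPU.

    Illustrative stand-in for the PR's isGPUAvailable method.
    """
    try:
        # torch is a heavy dependency; guard the import so the
        # check degrades gracefully when it is not installed
        import torch
    except ImportError:
        return None
    if torch.cuda.is_available():
        # expose a single GPU explicitly, or all of them
        return '"device=0"' if torch.cuda.device_count() == 1 else '"all"'
    return None


def build_docker_args(gpu_arg: Optional[str]) -> dict:
    """Add a device_requests entry (the SDK equivalent of `docker run --gpus`)."""
    docker_args: dict = {}
    if gpu_arg is not None:
        docker_args["device_requests"] = [
            {"Driver": "nvidia", "Count": -1, "Capabilities": [["gpu"]]}
        ]
    return docker_args
```

With this shape, the container-creation code only needs to merge `build_docker_args(...)` into the keyword arguments it already passes to the Docker SDK.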

Point of attention

I've also imported the numpy Python module, but it's not required.
I only imported it because of a warning message that appears when creating a new container with Exegol:

No module named numpy...

It works fine even without numpy; the warning is just displayed when the module is missing.
For now, I haven't figured out how to solve this properly.

I found a way to suppress the warning using warning filters, so it doesn't print the message. Let me know if you want me to implement it this way.

Here's an example:

import warnings
warnings.filterwarnings("ignore", message="No module named numpy", category=ImportWarning)


Comment on lines +8 to +9
torch~=2.6.0
numpy~=2.2.3

Can you check for the GPU without installing a 1 GB torch dependency?

Author

Yes but only with some hacks.

Instead of checking properly with a Python module, I can just execute nvidia-smi in a subprocess and check whether it produces any standard output or standard error.
If there is standard output, that means NVIDIA drivers are installed.
Otherwise, if there is standard error, that means NVIDIA drivers are not installed.

For macOS, I just have to check the platform and the CPU architecture.
If the platform is macOS and the CPU architecture is ARM, we determine that the "GPU" is MPS.

I already have a version like that, and it works well on Linux.
Keep in mind that I cannot test it with AMD GPUs on Linux or Windows operating systems.

If you think a version like this is better, I can make a new commit with the new function.

Here's the new function:

def isGPUAvailable(self) -> Optional[str]:
    # requires `from typing import Optional` at module level: an import
    # inside the function body cannot satisfy the return annotation
    import platform
    import subprocess

    try:
        # `nvidia-smi -L` prints exactly one line per GPU,
        # unlike the multi-line default report
        gpu_names = subprocess.run(
            ['nvidia-smi', '-L'],
            stdout=subprocess.PIPE,
            stderr=subprocess.DEVNULL,
            check=True
        ).stdout.decode().strip().split('\n')
        return '"device=0"' if len(gpu_names) == 1 else '"all"'
    except (subprocess.CalledProcessError, FileNotFoundError):
        pass

    # on macOS/ARM the "GPU" is MPS, which cannot be passed through
    # to a container, so there is no --gpus value to return
    if platform.system().lower() == "darwin" and platform.machine().startswith("arm"):
        return None

    return None
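For context, the value returned above maps onto the CLI flag like this. This glue function is hypothetical (it is not part of the PR), and the image name is a placeholder; the embedded quotes in the return value are shell quoting, which subprocess-style argument lists do not need.

```python
from typing import List, Optional


def build_run_command(gpu_arg: Optional[str]) -> List[str]:
    """Hypothetical glue: turn the helper's return value into a docker run argv."""
    cmd = ["docker", "run", "--rm"]
    if gpu_arg is not None:
        # strip the embedded quotes: arguments in an argv list are
        # passed verbatim, so shell quoting like '"all"' is unnecessary
        cmd += ["--gpus", gpu_arg.strip('"')]
    cmd.append("some/image")  # placeholder image name
    return cmd
```

So a single-GPU host would end up running `docker run --rm --gpus device=0 some/image`, and a multi-GPU host `--gpus all`.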

@mvthul

mvthul commented Jun 13, 2025

Oehhh I would like it!!!

@Macbucheron1

I am also interested in the merge of this PR

Macbucheron1 added a commit to Macbucheron1/mac-nixos that referenced this pull request Sep 6, 2025
@romainpanno

Could be useful for some GPU computations, hashcat for example.
I'm also interested in the merge of this PR !

