
Options for GPU Sharing between Containers Running on a Workstation #1769


Description

@frenchwr

Describe the support request
Hello, I'm trying to understand the options that would allow multiple containers to share a single GPU.

I see that K8s device plugins in general are not meant to allow a device to be shared between containers.

I also see from the GPU plugin docs in this repo that there is a sharedDevNum option that can be used for sharing a GPU, but I infer that this partitions the GPU's resources so that each container is only allocated a fraction of them. Is that correct?
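
For reference, this is the kind of configuration I have in mind - a minimal sketch assuming the operator's GpuDevicePlugin custom resource (field names are my reading of the docs; the metadata name and nodeSelector label are just placeholders from the examples):

```yaml
apiVersion: deviceplugin.intel.com/v1
kind: GpuDevicePlugin
metadata:
  name: gpudeviceplugin-sample          # placeholder name
spec:
  image: intel/intel-gpu-plugin:0.30.0  # matches the plugin version I'm testing
  sharedDevNum: 4                       # the sharing option from the plugin README; 4 is just an example
  nodeSelector:
    intel.feature.node.kubernetes.io/gpu: "true"
```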

My use case is a tool called data-science-stack that is being built to automate the deployment and management of GPU-enabled containers for quick AI/ML experimentation on a user's laptop or workstation. In this scenario we'd prefer that each container has access to the full GPU resources, much like you'd expect for applications running directly on the host. Is this possible?
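
Concretely, the workloads data-science-stack creates would look roughly like the sketch below (illustrative only; the pod/container names and images are placeholders, and I'm assuming the gpu.intel.com/i915 resource name advertised by the plugin):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dss-notebooks               # placeholder name
spec:
  containers:
  - name: notebook-a
    image: example.com/notebook-a:latest   # placeholder image
    resources:
      limits:
        gpu.intel.com/i915: 1       # each container should see the full GPU, not a fraction
  - name: notebook-b
    image: example.com/notebook-b:latest   # placeholder image
    resources:
      limits:
        gpu.intel.com/i915: 1
```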

System (please complete the following information if applicable):

  • OS version: Ubuntu 22.04
  • Kernel version: Linux 5.15 (HWE kernel for some newer devices)
  • Device plugins version: v0.29.0 and v0.30.0 are the versions I've worked with
  • Hardware info: iGPU and dGPU
