Adding support for G4 nvidia-rtx-6000 GPUs for vLLM inference-ref-arch #345
base: main
Conversation
syeda-anjum commented on Dec 17, 2025:
- Updated Terraform for the G4 GPU node pool
- Added new custom compute classes for the G4 GPU node pool (a hypothetical sketch follows this list)
- Updated the G4 GPU vLLM deployment and Kustomize scripts
- Updated the README
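For context, a minimal sketch of what one of the new G4 custom compute classes could look like, inferred only from the manifest file names in this PR. The class name, machine type, and field values are assumptions, not the actual committed manifests; the API shape is GKE's custom compute class (cloud.google.com/v1):

```shell
# Hypothetical G4 ComputeClass; names and machine shapes are illustrative only.
kubectl apply -f - <<'EOF'
apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: gpu-g4-180gb-s48-x1   # assumed to mirror the manifest file name
spec:
  priorities:
    # Assumed G4 shape: 48 vCPUs, ~180 GB RAM, 1 GPU (per the -s48-x1 suffix).
    - machineType: g4-standard-48
      spot: false
  nodePoolAutoCreation:
    enabled: true
  whenUnsatisfiable: DoNotScaleUp
EOF
```

The other -sNN-xM variants in the PR presumably scale the machine shape and GPU count accordingly.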
Resolved review threads were opened on the following files (paths truncated as displayed; most threads are marked outdated):

- docs/platforms/gke/base/use-cases/inference-ref-arch/online-inference-gpu/vllm-with-hf-model.md (5 threads)
- ...s/inference-ref-arch/kubernetes-manifests/online-inference-gpu/vllm/g4-qwen3-32b/runtime.env
- ...forms/gke/base/core/container_node_pool/gpu/region/us-central1/container_node_pool_gpu_g4.tf (3 threads)
- ...ustom_compute_class/templates/manifests/gpu/g4-180gb/custom-compute-gpu-g4-180gb-s48-x1.yaml
- ...ustom_compute_class/templates/manifests/gpu/g4-360gb/custom-compute-gpu-g4-360gb-s96-x1.yaml (2 threads)
- ...stom_compute_class/templates/manifests/gpu/g4-720gb/custom-compute-gpu-g4-720gb-s192-x1.yaml (2 threads)
- .../templates/manifests/gpu/rtx-pro-6000-96gb/custom-compute-gpu-rtx-pro-6000-96gb-s192-x4.yaml
- .../templates/manifests/gpu/rtx-pro-6000-96gb/custom-compute-gpu-rtx-pro-6000-96gb-s384-x8.yaml
- ...s/templates/manifests/gpu/rtx-pro-6000-96gb/custom-compute-gpu-rtx-pro-6000-96gb-s96-x2.yaml
- ...class/templates/manifests/gpu/rtx-pro-6000-96gb/custom-compute-gpu-rtx-pro-6000-96gb-x1.yaml
- ...class/templates/manifests/gpu/rtx-pro-6000-96gb/custom-compute-gpu-rtx-pro-6000-96gb-x2.yaml
- ...class/templates/manifests/gpu/rtx-pro-6000-96gb/custom-compute-gpu-rtx-pro-6000-96gb-x4.yaml
- ...class/templates/manifests/gpu/rtx-pro-6000-96gb/custom-compute-gpu-rtx-pro-6000-96gb-x8.yaml
Commits:

- …ts/gpu/rtx-pro-6000-96gb/custom-compute-gpu-rtx-pro-6000-96gb-s384-x8.yaml (Co-authored-by: Aaron Rueth <[email protected]>)
- …ts/gpu/rtx-pro-6000-96gb/custom-compute-gpu-rtx-pro-6000-96gb-s96-x2.yaml (Co-authored-by: Aaron Rueth <[email protected]>)
- merge main into sanjum-g4-gpus
ferrarimarco left a comment:
Just minor things to check.
Reviewed hunk (accelerator type selection in the docs):

````text
export ACCELERATOR_TYPE="h200"
```
- **NVIDIA RTX 6000 96GB**:
````

ferrarimarco: RTX Pro (the heading should name the product as NVIDIA RTX Pro 6000).
Reviewed hunk:

````text
- **NVIDIA RTX 6000 96GB**:
  ```shell
  export ACCELERATOR_TYPE="rtx-pro-6000"
````

ferrarimarco: This string is missing the 96gb suffix that you put in the CCC definitions. For simplicity, I suggest removing the -96gb string from the CCC names and their directory names.
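A hypothetical illustration of the mismatch, assuming the ComputeClass resource names mirror the manifest file names (which may not hold in the actual repo):

```shell
# The exported value has no -96gb suffix...
export ACCELERATOR_TYPE="rtx-pro-6000"

# ...so any lookup that derives the class name from it would miss the class
# (names below are assumed; substitute the real ones from the manifests):
kubectl get computeclass "gpu-${ACCELERATOR_TYPE}-x1"    # not found
kubectl get computeclass "gpu-rtx-pro-6000-96gb-x1"      # assumed actual name
```

Dropping -96gb from the CCC and directory names, as suggested, keeps the exported value and the class names aligned.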
New runtime.env for gpt-oss-20b:

```diff
@@ -0,0 +1,6 @@
+APP_LABEL=vllm-rtx-pro-6000-gpt-oss-20b
+GPU_MEMORY_UTILIZATION=0.95
+MAX_MODEL_LEN=131072
```

ferrarimarco: These are exactly the same values as for Gemma. Did you check them for the gpt-oss-20b model?
New runtime.env for Llama 3.3 70B Instruct:

```diff
@@ -0,0 +1,7 @@
+APP_LABEL=vllm-rtx-pro-6000-llama-3-3-70b-instruct
+GPU_MEMORY_UTILIZATION=0.95
+MAX_MODEL_LEN=131072
```

ferrarimarco: These are exactly the same values as for Gemma. Did you check them for the Llama 3.3 70B model?
New runtime.env for Llama 4 Scout:

```diff
@@ -0,0 +1,6 @@
+APP_LABEL=vllm-rtx-pro-6000-llama-4-scout-17b-16e-instruct
+GPU_MEMORY_UTILIZATION=0.95
+MAX_MODEL_LEN=131072
```

ferrarimarco: These are exactly the same values as for Gemma. Did you check them for the Llama 4 Scout model?
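One lightweight way to answer these three questions is to compare MAX_MODEL_LEN against the context window advertised in each model's Hugging Face config. A sketch, assuming the config exposes max_position_embeddings (the field name varies by architecture) and that gated repos such as the Llama models are fetched with an HF_TOKEN:

```shell
# Print each model's advertised maximum context length; the auth header is
# only needed for gated repos (e.g. meta-llama).
for MODEL in openai/gpt-oss-20b meta-llama/Llama-3.3-70B-Instruct; do
  LEN=$(curl -s -H "Authorization: Bearer ${HF_TOKEN}" \
    "https://huggingface.co/${MODEL}/raw/main/config.json" \
    | jq '.max_position_embeddings')
  echo "${MODEL}: max_position_embeddings=${LEN}"
done
```

vLLM also fails at startup if MAX_MODEL_LEN exceeds what the model supports, or what the KV cache can hold at the configured GPU_MEMORY_UTILIZATION, so a test deployment per model is another quick check.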