
Account for memory usage of other processes #18858


Open · wants to merge 1 commit into main

Conversation

@maxdebayser (Contributor) commented May 28, 2025

FIX #18854

In `determine_available_memory`, the initial memory usage should be taken into account.
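
Schematically, the fix amounts to something like the sketch below (an illustration, not the actual diff; variable names follow the metrics printed further down, and `init_gpu_memory` is assumed to be the free device memory recorded when the worker initializes):

import torch

def determine_available_memory_sketch(init_gpu_memory: int,
                                      gpu_memory_utilization: float) -> int:
    # Free and total device memory as currently reported by the CUDA driver.
    free_gpu_memory, total_gpu_memory = torch.cuda.mem_get_info()
    stats = torch.cuda.memory_stats()
    # Peak and current memory held by this process's torch caching allocator.
    peak_memory = stats["allocated_bytes.all.peak"]
    torch_allocated_bytes = stats["allocated_bytes.all.current"]
    # Count only memory that disappeared since *this* worker initialized;
    # memory that other processes already held at init time is excluded.
    non_torch_allocations = (init_gpu_memory -
                             free_gpu_memory) - torch_allocated_bytes
    return int(total_gpu_memory * gpu_memory_utilization -
               peak_memory - non_torch_allocations)

Measuring non-torch allocations against `init_gpu_memory` instead of `total_gpu_memory` is what keeps memory held by other processes from being charged to this worker.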

Consider this script running on an 80GB A100:

from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Verify that gpu_memory_utilization is a per-instance limit,
# not a global limit.
llms = [
    LLM(model="facebook/opt-125m",
        gpu_memory_utilization=0.3,
        enforce_eager=True) for _ in range(3)
]
]
for llm in llms:
    outputs = llm.generate(prompts, sampling_params)
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

If I print the memory metrics without my changes, the second instance already crashes because its estimated available memory is negative:

self.init_gpu_memory=84530692096
total_allocated_bytes=844103680
torch_allocated_bytes=278004224
total_gpu_memory=84974239744
peak_memory=770516480
non_torch_allocations=566099456
self.cache_config.gpu_memory_utilization=0.3
available_memory=24155655987

self.init_gpu_memory=59029782528
total_allocated_bytes=26345013248
torch_allocated_bytes=278004224
total_gpu_memory=84974239744
peak_memory=770516480
non_torch_allocations=26067009024
self.cache_config.gpu_memory_utilization=0.3
available_memory=-1345253580
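
These numbers are consistent with the estimate being computed as available_memory = gpu_memory_utilization * total_gpu_memory - peak_memory - non_torch_allocations: for the second instance, 0.3 * 84974239744 - 770516480 - 26067009024 = -1345253580.8. The ~26 GB held by the first instance is misattributed to the second instance's non_torch_allocations, which drives the estimate negative.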

With the initial memory taken into account, all three instances can run, and the estimated available memory matches exactly across all three:

self.init_gpu_memory=84530692096
total_allocated_bytes=844103680
torch_allocated_bytes=278004224
total_gpu_memory=84974239744
peak_memory=770516480
non_torch_allocations=122551808
self.cache_config.gpu_memory_utilization=0.3
available_memory=24599203635

self.init_gpu_memory=58576797696
total_allocated_bytes=26797998080
torch_allocated_bytes=278004224
total_gpu_memory=84974239744
peak_memory=770516480
non_torch_allocations=122551808
self.cache_config.gpu_memory_utilization=0.3
available_memory=24599203635

self.init_gpu_memory=32622903296
total_allocated_bytes=52751892480
torch_allocated_bytes=278004224
total_gpu_memory=84974239744
peak_memory=770516480
non_torch_allocations=122551808
self.cache_config.gpu_memory_utilization=0.3
available_memory=24599203635
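
Plugging the fixed numbers into the same formula: 0.3 * 84974239744 - 770516480 - 122551808 = 24599203635.2, which matches the logged available_memory for all three instances. non_torch_allocations now reflects only this worker's own ~122 MB of non-torch memory, regardless of what other processes hold.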


Signed-off-by: Max de Bayser <[email protected]>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, a small but essential subset of tests meant to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

The mergify bot added the v1 label on May 28, 2025.
@DarkLight1337 requested a review from @youkaichao on May 29, 2025.
Development

Successfully merging this pull request may close these issues:

[Bug]: Non-torch memory tracking fails to account for gpu usage of other processes (#18854)