Conversation

@joerunde
Collaborator

Description

This reverts the warmup for static batching to its original behavior: warming up with only a single pass.

This fixes a bug where the compiled model graphs were incorrect: we would invoke the model with one batch size, and the model would output a different batch size.
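To illustrate the failure mode, here is a minimal Python sketch, not the actual vLLM Spyre code: the names (`compile_graph`, `warmup`) are hypothetical. It assumes an AOT-style compiler that specializes a graph on the batch size seen during warmup, so a single warmup pass at the static batch size keeps the compiled graph consistent with the shape the model is later invoked with.

```python
# Hypothetical sketch; assumes a compiler that specializes on batch size.
# `compile_graph` and `warmup` are illustrative names, not the Spyre plugin API.

def compile_graph(model_fn, batch_size):
    """Stand-in for an AOT compiler: the graph is only valid for one batch size."""
    def compiled(inputs):
        # A graph compiled for one batch size must be invoked with that batch size,
        # otherwise the output shape will not match the input shape.
        assert len(inputs) == batch_size, (
            f"graph compiled for batch {batch_size}, got batch {len(inputs)}")
        return model_fn(inputs)
    return compiled

def warmup(model_fn, batch_size):
    """Single warmup pass: compile exactly one graph for the static batch size."""
    graph = compile_graph(model_fn, batch_size)
    dummy_batch = [0] * batch_size
    graph(dummy_batch)  # trigger compilation / warm caches once
    return graph
```

Under this model, warming up with multiple passes at differing batch sizes would leave a graph specialized for a batch size other than the one used at serving time, which is the mismatch the description reports.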

@github-actions

👋 Hi! Thank you for contributing to vLLM support on Spyre.
Just a reminder: make sure your code passes all of the linting checks, otherwise your PR won't be mergeable. To do so, first install the linting requirements, then run `format.sh` and commit the changes. This can be done with uv directly:

```shell
uv sync --frozen --group lint --active --inexact
```

Or with pip:

```shell
uv pip compile --group lint > requirements-lint.txt
pip install -r requirements-lint.txt
bash format.sh
```

Now you are good to go 🚀

@prashantgupta24 prashantgupta24 enabled auto-merge (squash) June 19, 2025 17:19
@github-actions github-actions bot added the ready label Jun 19, 2025
@prashantgupta24 prashantgupta24 merged commit 0ceea20 into main Jun 19, 2025
18 of 20 checks passed
@prashantgupta24 prashantgupta24 deleted the fix-warmup branch June 19, 2025 17:24