bug: flaky test case Functionality test for helm chart / Multiple-Models #152

Open
gaocegege opened this issue Feb 19, 2025 · 2 comments
Labels
bug Something isn't working

Comments

@gaocegege
Collaborator

Describe the bug

In PR #119, the Functionality test for helm chart / Multiple-Models (pull_request) check is flaky: sometimes it fails during helm uninstall, and sometimes during helm install.
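
For reference, a hedged sketch of how the flaky steps could be wrapped with an explicit wait and a retry; the release name, chart path, and timeouts below are placeholders rather than the workflow's actual values:

```yaml
# Hedged sketch only: the release name (vllm), chart path (./helm), and timeouts
# are placeholders, not the values the actual workflow uses.
- name: Install helm chart with explicit wait and retry
  run: |
    for attempt in 1 2 3; do
      if helm install vllm ./helm --wait --timeout 10m; then
        exit 0
      fi
      echo "helm install failed (attempt $attempt); cleaning up before retrying"
      helm uninstall vllm --wait || true
      sleep 30
    done
    exit 1
```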

To Reproduce

N/A

Expected behavior

No response

Additional context

No response

@gaocegege gaocegege added the bug Something isn't working label Feb 19, 2025
@gaocegege gaocegege changed the title bug: flaky test case The test [Functionality test for helm chart / Multiple-Models](https://github.com/vllm-project/production-stack/actions/runs/13390271739/job/37440021952?pr=119) bug: flaky test case Functionality test for helm chart / Multiple-Models Feb 19, 2025
@Shaoting-Feng
Collaborator

I reviewed the GitHub Actions records and believe the failure was caused by the self-hosted runner being occupied at the time. We will keep monitoring the workflow to determine whether there is an actual bug and, if so, capture the exact error message.

@ApostaC
Collaborator

ApostaC commented Feb 20, 2025

Currently, the functionality test runs on a self-hosted GPU runner.

I think we should move to a CPU-only vLLM image for functionality testing so that we can use GitHub's hosted runners.
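
For illustration, a rough sketch (assuming a CPU-only vLLM image is available) of what such a job could look like on a GitHub-hosted runner; the image name, chart values keys, and test script path are placeholders, not existing project artifacts:

```yaml
# Hedged sketch: assumes a CPU-only vLLM image exists. The image name, chart
# values keys, and test script path are placeholders, not actual project settings.
functionality-test-cpu:
  runs-on: ubuntu-latest                 # GitHub-hosted runner, no GPU required
  steps:
    - uses: actions/checkout@v4
    - name: Create a throwaway Kubernetes cluster
      uses: helm/kind-action@v1
    - name: Install the chart with a CPU-only serving image (placeholder values)
      run: |
        helm install vllm ./helm \
          --set image.repository=example/vllm-cpu \
          --set image.tag=latest \
          --wait --timeout 15m
    - name: Run the functionality tests
      run: bash ./tests/functionality-test.sh   # placeholder script path
```

A kind cluster on the hosted runner would stand in for the self-hosted Kubernetes environment, trading GPU coverage for runner availability and reproducibility.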
