Update finetune and oneshot tests #114
Conversation
Nice
- name: "🔬 Running transformers tests"
  if: always() && steps.install.outcome == 'success'
  run: |
    pytest tests/llmcompressor/transformers/compression -v
What about the tests under obcq/gptq/oneshot/sparsification?
They're all running (lines 117 onwards); I'm just separating them out into individual steps.
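For context, a hedged sketch of what those separated steps might look like, mirroring the "transformers tests" step shown above. The step names and the exact test paths (other than `compression`) are assumptions inferred from the directories mentioned in this thread, not taken from the actual workflow file:

```yaml
# Hypothetical sketch: one workflow step per test directory, so a
# failure in one suite doesn't mask results from the others.
# Paths below `transformers/` are inferred from the discussion.
- name: "🔬 Running obcq tests"
  if: always() && steps.install.outcome == 'success'
  run: |
    pytest tests/llmcompressor/transformers/obcq -v
- name: "🔬 Running oneshot tests"
  if: always() && steps.install.outcome == 'success'
  run: |
    pytest tests/llmcompressor/transformers/oneshot -v
```

With `if: always()`, each step still runs even if an earlier test step failed, as long as the install step succeeded.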
- device: "auto"
+ device: "cuda:0"
What was the error you were getting with "auto"? I think we want at least one test confirming that "auto" is functional.
"Unspecified CUDA launch errors" — this was on an A100.
…ue with the W4A8 representation to have dynamic token for activations (vllm-project#114)
SUMMARY:
Run integration tests as unit tests; set device to cuda:0.