
Commit 19cfe11

horheynm and rahul-tuli authored and committed

[Bug Fix] Fix test that requires GPU (#1096)
SUMMARY: Nightly tests use GPUs; CI tests don't. In the setup of `tests/llmcompressor/transformers/obcq/test_consecutive_runs.py`, a bug was found where `quantization_config` was always passed to `from_pretrained`. This results in an error when the model named in the config file is not quantized; since a dense model is passed in, loading fails. TEST PLAN: Run the failing test and make sure it passes. Signed-off-by: Rahul Tuli <[email protected]>
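The failure mode described in the summary can be illustrated with a small, self-contained sketch. The helper name and signature below are hypothetical (not from llm-compressor or transformers); the point is the guarded-kwargs pattern: only forward `quantization_config` when the checkpoint is actually quantized, instead of passing it unconditionally.

```python
# Sketch of the guarded-kwargs pattern behind this fix.
# `build_from_pretrained_kwargs` is an illustrative helper, not a real API.
def build_from_pretrained_kwargs(is_quantized, device, quantization_config=None):
    """Only forward quantization_config when the checkpoint is quantized."""
    kwargs = {"device_map": device}
    if is_quantized and quantization_config is not None:
        # Per the summary above, passing a quantization config while
        # loading a dense (non-quantized) checkpoint raises an error,
        # so the kwarg is added only for quantized models.
        kwargs["quantization_config"] = quantization_config
    return kwargs


# Dense model: the quantization config is dropped, mirroring this commit.
dense_kwargs = build_from_pretrained_kwargs(False, "cuda:0", object())
print(sorted(dense_kwargs))  # → ['device_map']
```

The committed fix takes the simpler route for this GPU test: assert up front that the model is dense (`is_model_ct_quantized_from_path`) and never pass `quantization_config` at all.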
1 parent 6f02c91 commit 19cfe11

File tree

1 file changed: +6 −1 lines changed


tests/llmcompressor/transformers/obcq/test_consecutive_runs.py

+6 −1

@@ -8,6 +8,7 @@
 from transformers import AutoModelForCausalLM
 from transformers.utils.quantization_config import CompressedTensorsConfig
 
+from llmcompressor.transformers.utils import is_model_ct_quantized_from_path
 from llmcompressor.transformers.utils.helpers import infer_recipe_from_model_path
 from tests.testing_utils import parse_params, requires_gpu
 
@@ -137,10 +138,14 @@ class TestConsecutiveRunsGPU(TestConsecutiveRuns):
     def setUp(self):
         from transformers import AutoModelForCausalLM
 
+        self.assertFalse(
+            is_model_ct_quantized_from_path(self.model),
+            "The provided model is quantized. Please use a dense model.",
+        )
+
         self.model = AutoModelForCausalLM.from_pretrained(
             self.model,
             device_map=self.device,
-            quantization_config=self.quantization_config,
         )
 
         self.output = "./oneshot_output"

0 commit comments
