Commit 143ec4b

unpermute for weight option as well

1 parent: e8053dd

1 file changed (+6, -1)

src/llmcompressor/modifiers/quantization/gptq/utils/gptq_wrapper.py (+6, -1)

@@ -259,7 +259,12 @@ def compress(
         self._log_metrics(tick, Losses)

         if strategy == QuantizationStrategy.GROUP:
-            if actorder == ActivationOrderingStrategy.GROUP:
+            if actorder == ActivationOrderingStrategy.WEIGHT:
+                # restore original permutation
+                invperm = torch.argsort(perm)
+                W = W[:, invperm]
+
+            elif actorder == ActivationOrderingStrategy.GROUP:
                 # restore original permutation
                 invperm = torch.argsort(perm)
                 W = W[:, invperm]
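
For context, here is a minimal standalone sketch (not the library's code; the tensor names and shapes are made up) of the permute/unpermute step this commit extends to the WEIGHT activation-ordering option: the weight columns are reordered by a permutation perm before quantization, and torch.argsort(perm) gives the inverse permutation that restores the original column order afterwards.

    import torch

    # Hypothetical weight matrix [out_features, in_features] and a stand-in
    # for an activation-ordering permutation over the input columns.
    W = torch.randn(4, 8)
    perm = torch.randperm(W.shape[1])

    W_permuted = W[:, perm]            # columns reordered for quantization
    invperm = torch.argsort(perm)      # inverse permutation
    W_restored = W_permuted[:, invperm]

    assert torch.equal(W_restored, W)  # original column order recovered

The commit's point is that this unpermute step was previously applied only for the GROUP ordering strategy; with this change the WEIGHT branch restores the original permutation in the same way.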
