default W4A16 alias to use group_size=128 (vllm-project#94)
bfineran authored Jun 20, 2024
1 parent 75436f6 commit 6319bc1
Showing 1 changed file with 1 addition and 3 deletions.
src/compressed_tensors/quantization/quant_scheme.py
@@ -113,9 +113,7 @@ def is_preset_scheme(name: str) -> bool:

 W8A8 = dict(weights=QuantizationArgs(), input_activations=QuantizationArgs())

-W4A16 = dict(
-    weights=QuantizationArgs(num_bits=4, strategy=QuantizationStrategy.CHANNEL)
-)
+W4A16 = dict(weights=QuantizationArgs(num_bits=4, group_size=128))

 FP8 = dict(
     weights=QuantizationArgs(type=QuantizationType.FLOAT),
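For context, this change swaps the W4A16 preset from channel-wise weight quantization (one scale per output channel) to group-wise quantization, where each contiguous run of 128 weights within a row gets its own scale. A minimal NumPy sketch of the difference in scale granularity, assuming symmetric int4 quantization; the helper names here are illustrative, not part of compressed_tensors:

import numpy as np

def channel_scales(w: np.ndarray, num_bits: int = 4) -> np.ndarray:
    # Channel-wise (the old preset): one scale per output channel,
    # taken from the max |w| over each full row of the weight matrix.
    qmax = 2 ** (num_bits - 1) - 1  # 7 for symmetric int4
    return np.abs(w).max(axis=1, keepdims=True) / qmax

def group_scales(w: np.ndarray, group_size: int = 128, num_bits: int = 4) -> np.ndarray:
    # Group-wise (the new preset): one scale per contiguous block of
    # `group_size` weights within a row, so an outlier only inflates
    # the scale of its own group rather than the whole channel.
    qmax = 2 ** (num_bits - 1) - 1
    rows, cols = w.shape  # assumes cols is divisible by group_size
    groups = w.reshape(rows, cols // group_size, group_size)
    return np.abs(groups).max(axis=2) / qmax

w = np.random.randn(8, 256).astype(np.float32)
print(channel_scales(w).shape)     # (8, 1) -> 8 scales total
print(group_scales(w, 128).shape)  # (8, 2) -> 16 scales, finer granularity

The finer granularity is the usual motivation for group_size=128 in W4A16 (GPTQ-style) schemes: at 4 bits, a per-group scale limits how much a single large weight degrades the quantization of its neighbors.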
