[Feature] A calibration-free RTN-based quantization for accurate and accelerated INT4/INT8 inference #18768
This PR adds basic support for RTN quantization, as a first step toward a calibration-free RTN-based quantization scheme for accurate and accelerated INT4/INT8 inference (see this paper for details).
RTN is a simple quantization method that does not require any calibration data or a corresponding calibration process.
As such, it can be applied on-the-fly (i.e., while loading an original model) quickly and cheaply, even on a system that does not have enough memory to host the original (unquantized) model. Yet, RTN is often believed to lag behind more advanced quantization techniques in two crucial areas: generation throughput and accuracy.
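For readers unfamiliar with RTN, the following is a minimal illustrative sketch (not the PR's actual code) of symmetric round-to-nearest quantization of a weight matrix, per output channel. Function names and the exact clamping/scale choices here are assumptions made for illustration only.

```python
# Illustrative sketch of round-to-nearest (RTN) quantization -- not the PR's code.
import torch


def rtn_quantize(weight: torch.Tensor, num_bits: int = 4):
    """Quantize a 2-D weight tensor row-wise with round-to-nearest.

    Returns integer weights and per-row scales such that weight ~= q * scale.
    No calibration data is needed: the scale is derived from the weights alone.
    """
    qmax = 2 ** (num_bits - 1) - 1                 # e.g. 7 for INT4, 127 for INT8
    scale = weight.abs().amax(dim=1, keepdim=True) / qmax
    scale = scale.clamp(min=1e-8)                  # guard against all-zero rows
    q = torch.round(weight / scale).clamp(-qmax - 1, qmax).to(torch.int8)
    return q, scale


def rtn_dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximate FP tensor from the quantized representation."""
    return q.float() * scale
```

Because the scale depends only on the weights themselves, this transformation can be applied to each tensor as it is loaded, which is what makes on-the-fly quantization cheap.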
As this paper shows, both issues can be alleviated through the use of efficient CUDA kernels based on Marlin (for throughput) and selective quantization (for accuracy). The latter is a simple mechanism that allows a user to select layers and/or specific linear modules that should be quantized to a higher precision. For instance, leaving just a part of one layer of the Llama-3.1 70B model in 8-bit precision, while quantizing the rest of that layer and all other 79 layers to 4 bits, leads to a substantially improved recovery rate, on par with or better than other techniques:

Note that this adds less than 0.05 bits per weight on average, resulting in only a negligible memory increase.
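To make the selective-quantization idea concrete, below is a hypothetical sketch of how per-module precision overrides might be expressed. The configuration format and names (`precision_overrides`, `bits_for`) are assumptions for illustration and do not reflect this PR's actual API.

```python
# Hypothetical illustration of selective quantization: keep a few sensitive
# modules in 8 bits while the rest of the model is quantized to 4 bits.
precision_overrides = {
    # module-name prefix -> bit width (names are hypothetical examples)
    "model.layers.0.mlp.down_proj": 8,
}


def bits_for(module_name: str, default_bits: int = 4) -> int:
    """Return the bit width to use for a given module."""
    for prefix, bits in precision_overrides.items():
        if module_name.startswith(prefix):
            return bits
    return default_bits
```

Since only a handful of modules are promoted to 8 bits, the average cost per weight stays small, which is consistent with the sub-0.05 bits-per-weight overhead quoted above.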
As noted above, this PR provides a basic Python-based implementation of RTN that supports quantizing models on-the-fly.
Once approved, we intend to enhance it with: