Bdellabe/Rtuli awq modifier v3 #1177

Open · wants to merge 22 commits into main

Conversation

@brian-dellabetta (Collaborator) commented on Feb 19, 2025

SUMMARY:
Addition of AWQModifier, based on the AutoAWQ implementation.

Should be reviewed/merged in conjunction with neuralmagic/compressed-tensors#269

Replaces #181 and #824 (hence v3)

TEST PLAN:
Some unit tests are included, but since this is mostly a port from AutoAWQ, we validated the code by ensuring we could reproduce the evaluation metrics in Table 4 of the paper. We achieve the following WikiText PPL scores:

Llama-2 7B Group 128:

  1. Paper: 5.60
  2. AutoAWQ: 5.615
  3. This implementation: 5.612
  4. We match what the paper reports for RTN alone -- 5.73
  5. We get reasonable results for channel-wise quantization -- 6.788. AutoAWQ errors out in this case (setting "q_group_size": -1 in the quant_config), and results are not reported in the paper; see the config sketch after this list.
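
For reference, the two settings above correspond roughly to the following AutoAWQ-style quant_config dicts. This is only an illustrative sketch; the exact key names ("zero_point", "version", etc.) are the ones AutoAWQ uses at the time of writing and may differ across versions.

```python
# AutoAWQ-style quant_config sketch (key names may vary between AutoAWQ versions).
# Group-size 128, as used for the Table 4 comparison above.
group_128_config = {"w_bit": 4, "q_group_size": 128, "zero_point": True, "version": "GEMM"}

# Channel-wise: AutoAWQ errors out when q_group_size is -1, whereas this
# implementation produces a PPL of 6.788 for Llama-2 7B.
channel_wise_config = {"w_bit": 4, "q_group_size": -1, "zero_point": True, "version": "GEMM"}
```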

Llama-2 13B Group 128:

  1. We match the results of AutoAWQ and the results shown in the paper: 4.97
  2. We match what the paper reports for RTN alone -- 4.984

NOTE: We are excluding the clipping logic in this implementation. If we want to add it, it should be a separate modifier; the two are mutually exclusive, and the data model for AWQ doesn't align well with clipping. That may explain the slight deviation between the results reported in the paper and those of our implementation.
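
For anyone who wants to try the modifier, below is a minimal sketch of how it could be wired into llm-compressor's one-shot flow. The import paths and constructor arguments (scheme, targets, ignore, the calibration dataset name) are assumptions modeled on other llm-compressor modifiers, not taken from this PR; the diff is the source of truth for the actual AWQModifier signature.

```python
# Hypothetical usage sketch -- argument names are assumptions, not the PR's exact API.
from llmcompressor.transformers import oneshot
from llmcompressor.modifiers.awq import AWQModifier

recipe = [
    AWQModifier(
        scheme="W4A16_ASYM",   # assumed: 4-bit asymmetric weights, group size 128
        targets=["Linear"],
        ignore=["lm_head"],
    ),
]

oneshot(
    model="meta-llama/Llama-2-7b-hf",
    dataset="open_platypus",          # assumed calibration dataset
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=256,
)
```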

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite, please only add the label once the PR is code complete and local testing has been performed.

@brian-dellabetta changed the title from Bdellabe/awq modifier v3 to Bdellabe/Rtuli awq modifier v3 on Mar 10, 2025
@brian-dellabetta marked this pull request as ready for review on Mar 10, 2025, 21:45
Comment on lines +48 to +50
# TODO this should only be added if v_proj/o_proj shapes match up
# should we check during validation and skip if this is not the case?
AWQMapping("re:.*v_proj", ["re:.*o_proj"]),
@brian-dellabetta (Collaborator, Author) commented:
This is the one remaining TODO. The logic in AutoAWQ is to only add this mapping if the shapes line up correctly (logic here). This is the case for the Llama 2 models I've been testing on, but not for all of the TinyLlama models. Any suggestions on how best to handle both cases?
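
One option, mirroring the AutoAWQ guard, would be a shape-compatibility check at mapping-resolution time, skipping the mapping (with a warning) when it fails. The sketch below is illustrative only; function and argument names are not from this PR. The mismatch arises for GQA-style models (e.g. the TinyLlama test models), where v_proj has fewer output features than o_proj has input features.

```python
# Illustrative sketch only -- not the PR's implementation.
def v_to_o_mapping_is_valid(v_proj, o_proj) -> bool:
    # The AWQ smoothing scale computed over v_proj's output channels is applied
    # to o_proj's input channels, so those two dimensions must be the same size.
    #   v_proj.weight: [num_kv_heads * head_dim, hidden_size]
    #   o_proj.weight: [hidden_size, num_heads * head_dim]
    return v_proj.weight.shape[0] == o_proj.weight.shape[1]
```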

@brian-dellabetta (Collaborator, Author) commented:
PPL is 5.607 for Llama-2 7B when this is included, 5.614 when it isn't.

@dsikka (Collaborator) left a comment:

Should we add evals comparing to GPTQ?
