
Iteratively update quantization parameters in GPTQ #178

Closed
wants to merge 1 commit

Conversation

kylesayrs
Collaborator

No description provided.

@kylesayrs kylesayrs marked this pull request as draft September 16, 2024 21:51

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

@kylesayrs kylesayrs changed the title Update parameters after block Iteratively update quantization parameters in GPTQ Sep 18, 2024
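The PR itself carries no description, so the following is only a rough sketch of what the title suggests: recomputing quantization parameters block by block during GPTQ rather than once up front. All names and the per-block scheme below are assumptions for illustration, not the actual llm-compressor implementation.

```python
# Hypothetical sketch: refresh quantization parameters after each weight
# block instead of computing them once for the whole tensor.
import torch


def compute_qparams(block: torch.Tensor, num_bits: int = 4):
    """Symmetric scale (and zero point) for a single weight block."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = block.abs().max().clamp(min=1e-8) / qmax
    zero_point = torch.zeros_like(scale, dtype=torch.int32)
    return scale, zero_point


def quantize_blockwise(weight: torch.Tensor, block_size: int = 128) -> torch.Tensor:
    """Fake-quantize columns block by block, updating qparams per block."""
    qweight = torch.empty_like(weight)
    for start in range(0, weight.shape[1], block_size):
        block = weight[:, start:start + block_size]
        scale, zero = compute_qparams(block)  # parameters updated each block
        q = torch.clamp(torch.round(block / scale) + zero, -8, 7)
        qweight[:, start:start + block_size] = (q - zero) * scale
    return qweight
```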
@kylesayrs
Collaborator Author

A better implementation is in kylesayrs/replication.

@kylesayrs kylesayrs closed this Sep 18, 2024
markmc pushed a commit to markmc/llm-compressor that referenced this pull request Nov 13, 2024
* set num_groups to 1 if < 1

* Update src/compressed_tensors/quantization/lifecycle/initialize.py

Co-authored-by: Kyle Sayers <[email protected]>

---------

Co-authored-by: Kyle Sayers <[email protected]>
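The referenced commit touches quantization parameter initialization in compressed-tensors (src/compressed_tensors/quantization/lifecycle/initialize.py). A minimal sketch of the guard its message describes is below; the function name and signature are assumptions for illustration, not the library's actual API.

```python
# Hypothetical sketch of the guard described by the commit message;
# names and signature are assumed, not the compressed-tensors code.
def compute_num_groups(num_columns: int, group_size: int) -> int:
    """Number of quantization groups along the column dimension."""
    num_groups = num_columns // group_size
    # Guard against degenerate cases (e.g. group_size > num_columns):
    # fall back to a single group rather than zero or negative groups.
    if num_groups < 1:
        num_groups = 1
    return num_groups
```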