# QLoRA: Efficient Finetuning of Quantized LLMs

This week's paper is QLoRA: Efficient Finetuning of Quantized LLMs. QLoRA introduces a way to save memory during LoRA training by quantizing the frozen base model to 4 bits while backpropagating gradients into the low-rank adapters. The authors also introduce the Guanaco family of models and analyze the effects of fine-tuning data.
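To make the quantization idea concrete, here is a minimal pure-Python sketch of blockwise 4-bit quantization in the spirit of QLoRA's NF4 data type: each block of weights is scaled by its absolute maximum, then each value is mapped to the nearest of 16 fixed levels. The level values below approximate the NF4 codebook (quantiles of a standard normal) and are an assumption borrowed from the bitsandbytes implementation, not code from the paper.

```python
# Approximate NF4 codebook: 16 levels in [-1, 1], denser near zero
# because pretrained weights are roughly normally distributed.
# (Assumed values; illustrative only.)
NF4_LEVELS = [
    -1.0, -0.6962, -0.5251, -0.3949,
    -0.2844, -0.1848, -0.0911, 0.0,
    0.0796, 0.1609, 0.2461, 0.3379,
    0.4407, 0.5626, 0.7230, 1.0,
]

def quantize_block(weights):
    """Quantize one block of weights to 4-bit codes plus a single scale.

    Returns (codes, scale): each code is an index 0..15 into NF4_LEVELS,
    and scale is the block's absolute maximum (absmax scaling).
    """
    scale = max(abs(w) for w in weights) or 1.0
    codes = [
        min(range(16), key=lambda i: abs(w / scale - NF4_LEVELS[i]))
        for w in weights
    ]
    return codes, scale

def dequantize_block(codes, scale):
    """Recover approximate weights: look up each code and rescale."""
    return [NF4_LEVELS[c] * scale for c in codes]

# Toy example: a small weight block round-trips with bounded error.
block = [0.10, -0.05, 0.02, 0.0, -0.09, 0.07]
codes, scale = quantize_block(block)
recovered = dequantize_block(codes, scale)
```

In QLoRA the frozen base weights are stored in this 4-bit form (with the scales themselves quantized again, the paper's "double quantization"), while the small LoRA adapter matrices stay in higher precision and receive the gradient updates.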

Further Reading: