
Conversation


@red-hat-konflux red-hat-konflux bot commented May 17, 2025

This PR contains the following updates:

Package: peft
Change: ==0.3.0 -> ==0.18.0

Warning

Some dependencies could not be looked up. Check the warning logs for more information.


Release Notes

huggingface/peft (peft)

v0.18.0: RoAd, ALoRA, Arrow, WaveFT, DeLoRA, OSF, and more

Compare Source

Highlights


New Methods
RoAd

@ppetrushkov added RoAd: 2D Rotary Adaptation to PEFT in #2678. RoAd learns 2D rotation matrices that are applied using only element-wise multiplication, thus promising very fast inference with adapters in the unmerged state.

Remarkably, besides LoRA, RoAd is the only PEFT method that supports mixed adapter batches. This means that when you have loaded a model with multiple RoAd adapters, you can use all of them for different samples in the same batch, which is much more efficient than switching adapters between batches:

model = PeftModel.from_pretrained(base_model, <path-to-road-adapter-A>, adapter_name="adapter-A")
model.load_adapter(<path-to-road-adapter-B>, adapter_name="adapter-B")

inputs = ...  # input with 3 samples

# apply adapter A to sample 0, adapter B to sample 1, and use the base model for sample 2:
adapter_names = ["adapter-A", "adapter-B", "__base__"]
output_mixed = model(**inputs, adapter_names=adapter_names)
gen_mixed = model.generate(**inputs, adapter_names=adapter_names)

ALoRA

Activated LoRA is a technique added by @kgreenewald in #2609 for causal language models. It allows LoRA adapters to be selectively activated depending on a specific token invocation sequence in the input. The major benefit is that most of the KV cache can be re-used during inference when the adapter is only used to generate part of the response, after which the base model takes over again.
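
As a rough sketch of how this could look in practice (the alora_invocation_tokens field name is taken from PR #2609, and the model id and invocation string are placeholders; please verify against the PEFT docs for your version):

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
base_model = AutoModelForCausalLM.from_pretrained(model_id)

# Token ids of the invocation sequence; once this sequence appears in the input,
# the adapter is activated (field name per PR #2609, treat as an assumption).
invocation_ids = tokenizer.encode("<|im_start|>assistant", add_special_tokens=False)

config = LoraConfig(
    r=16,
    target_modules=["q_proj", "v_proj"],
    alora_invocation_tokens=invocation_ids,
)
model = get_peft_model(base_model, config)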

Arrow & GenKnowSub

@​TheTahaaa contributed not only support for Arrow, a dynamic routing algorithm between multiple loaded LoRAs in #​2644, but also GenKnowSub, a technique built upon Arrow where the 'library' of LoRAs available to Arrow is first modified by subtracting general knowledge adapters (e.g., trained on subsets of Wikipedia) to enhance task-specific performance.

WaveFT

Thanks to @​Bilican, Wavelet Fine-Tuning (WaveFT) was added to PEFT in #​2560. This method trains sparse updates in the wavelet domain of residual matrices, which is especially parameter efficient. It is very interesting for image generation, as it promises to generate diverse outputs while preserving subject fidelity.

DeLoRA

Decoupled Low-rank Adaptation (DeLoRA) was added by @mwbini in #2780. This new PEFT method is similar to DoRA insofar as it decouples the angle and magnitude of the learned adapter weights. However, DeLoRA implements this in a way that promises to better prevent divergence. Moreover, it constrains the deviation of the learned weight by imposing an upper limit on its norm, which can be adjusted via the delora_lambda parameter.
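
A minimal configuration sketch, assuming the config class follows PEFT's usual naming and is exported as DeloraConfig (the class name is an assumption; delora_lambda is the parameter mentioned above):

from transformers import AutoModelForCausalLM
from peft import DeloraConfig, get_peft_model  # class name assumed per PEFT naming conventions

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")  # placeholder base model
config = DeloraConfig(
    r=16,
    target_modules=["q_proj", "v_proj"],
    delora_lambda=15,  # upper limit on the norm of the weight deviation
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()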

OSF

Orthogonal Subspace Fine-tuning (OSF) was added by @NikhilNayak-debug in #2685. By freezing the high-rank subspace of the targeted weight matrices and projecting gradient updates to a low-rank subspace, OSF achieves good performance on continual learning tasks. While it is a bit memory intensive for standard fine-tuning, it is definitely worth checking out on tasks where performance degradation of previously learned tasks is a concern.

Enhancements
Text generation benchmark

In #2525, @ved1beta added the text generation benchmark to PEFT. This is a framework for measuring and comparing text generation metrics of different PEFT methods, e.g. runtime and memory usage. Right now, this benchmark still lacks experimental settings and a visualization, analogous to what we have in the MetaMathQA benchmark. If this is something that interests you, we encourage you to let us know or, even better, contribute to this benchmark.

Reliable interface for integrations

PEFT has integrations with other libraries like Transformers and Diffusers. To facilitate this integration, PEFT now provides a stable interface of functions that should be used where applicable. For example, the set_adapter function can be used to switch between PEFT adapters on the model, even if the model is not a PeftModel instance. We commit to keeping these functions backwards compatible, so it is safe for other libraries to build on top of them.

Handling of weight tying

Some Transformers models can have tied weights. This is especially prevalent when it comes to the embedding and the LM head. Currently, the way that this is handled in PEFT is not obvious. We thus drafted an issue to illustrate the intended behavior in #​2864. This shows what our goal is, although not everything is implemented yet.

In #2803, @romitjain added the ensure_weight_tying argument to LoraConfig. If set to True, this argument enforces weight tying of the modules targeted with modules_to_save. Thus, if embedding and LM head are tied, they will share weights, which is important to allow, for instance, weight merging. Therefore, for most users, we recommend enabling this setting when fully fine-tuning the embedding and LM head. For backward compatibility, however, the setting is off by default.

Note that in accordance with #​2864, the functionality of ensure_weight_tying=True will be expanded to also include trainable tokens (#​2870) and LoRA (tbd.) in the future.
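
As an illustration of the setting described above, a LoRA setup that fully fine-tunes a tied embedding and LM head could look like this (the module names embed_tokens and lm_head are model-specific assumptions; adapt them to your architecture):

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")  # placeholder model with tied embeddings
config = LoraConfig(
    r=16,
    target_modules=["q_proj", "v_proj"],
    modules_to_save=["embed_tokens", "lm_head"],  # fully fine-tune embedding and LM head
    ensure_weight_tying=True,  # keep the saved copies tied so that merging works (off by default)
)
model = get_peft_model(base_model, config)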

Support Conv1d and 1x1 Conv2d layers in LoHa and LoKr

@​grewalsk extended LoHa and LoKr to support nn.Conv1d layers, as well as nn.Conv2d with 1x1 kernels, in #​2515.
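
For example, a LoHa adapter can now target an nn.Conv1d layer in a custom module (a minimal sketch; the toy model and hyperparameters are illustrative only):

import torch.nn as nn
from peft import LoHaConfig, get_peft_model

class ToyAudioNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1d = nn.Conv1d(in_channels=8, out_channels=16, kernel_size=3)

    def forward(self, x):  # x shape: (batch, 8, time)
        return self.conv1d(x)

config = LoHaConfig(r=4, alpha=8, target_modules=["conv1d"])
model = get_peft_model(ToyAudioNet(), config)
model.print_trainable_parameters()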

New prompt tuning initialization

Thanks to @​macmacmacmac, we now have a new initialization option for prompt tuning, random discrete initialization (#​2815). This option should generally work better than random initialization, as corroborated on our PEFT method comparison suite. Give it a try if you use prompt tuning.

Combining LoRA adapters with negative weights

If you use multiple LoRA adapters, you can merge them into a single adapter using model.add_weighted_adapter. However, so far, this only worked with positive weights per adapter. Thanks to @​sambhavnoobcoder and @​valteu, it is now possible to pass negative weights too.
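
For example, to subtract one adapter from another when merging (a sketch; adapter names and paths are placeholders, and base_model is assumed to be loaded already):

from peft import PeftModel

model = PeftModel.from_pretrained(base_model, <path-to-lora-A>, adapter_name="style")
model.load_adapter(<path-to-lora-B>, adapter_name="artifacts")

# Merge the two adapters, subtracting the second one via a negative weight.
model.add_weighted_adapter(
    adapters=["style", "artifacts"],
    weights=[1.0, -0.5],
    adapter_name="style_minus_artifacts",
    combination_type="linear",
)
model.set_adapter("style_minus_artifacts")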

Changes
Transformers compatibility

At the time of writing, the Transformers v5 release is imminent. This Transformers version will be incompatible with PEFT < 0.18.0. If you plan to use Transformers v5 with PEFT, please upgrade PEFT to 0.18.0+.

Python version

This PEFT version no longer supports Python 3.9, which has reached its end of life. Please use Python 3.10+.

Updates to OFT

The OFT method has been updated to make it slightly faster and to stabilize the numerics in #2805. This means, however, that existing checkpoints may give slightly different results after upgrading to PEFT 0.18.0. Therefore, if you use OFT, we recommend retraining the adapter.

All Changes
New Contributors

Full Changelog: huggingface/peft@v0.17.1...v0.18.0

v0.17.1

Compare Source

This patch release contains a few fixes (via #​2710) for the newly introduced target_parameters feature, which allows LoRA to target nn.Parameters directly (useful for mixture of expert layers). Most notably:

  • PEFT no longer removes possibly existing parametrizations from the parameter.
  • Adding multiple adapters (via model.add_adapter or model.load_adapter) did not work correctly. Since a solution is not trivial, PEFT now raises an error to prevent this situation.

v0.17.0: SHiRA, MiSS, LoRA for MoE, and more

Compare Source

Highlights
New Methods
SHiRA

@kkb-code contributed Sparse High Rank Adapters (SHiRA, paper), which promise potential performance gains over LoRA, in particular by reducing concept loss when using multiple adapters. Since the adapters only train 1-2% of the weights and are inherently sparse, switching between adapters may be cheaper than with LoRA. (#2584)

MiSS

@​JL-er added a new PEFT method, MiSS (Matrix Shard Sharing) in #​2604. This method is an evolution of Bone, which, according to our PEFT method comparison benchmark, gives excellent results when it comes to performance and memory efficiency. If you haven't tried it, you should do so now.

At the same time, Bone will be deprecated in favor of MiSS and will be removed in PEFT v0.19.0. If you already have a Bone checkpoint, you can use scripts/convert-bone-to-miss.py to convert it into a MiSS checkpoint and proceed with training using MiSS.
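
For new training runs, here is a minimal sketch of setting up MiSS, assuming the config class is exported as MissConfig (class name and arguments are assumptions; check the PEFT docs):

from transformers import AutoModelForCausalLM
from peft import MissConfig, get_peft_model  # class name assumed per PEFT naming conventions

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")  # placeholder base model
config = MissConfig(r=64, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base_model, config)
model.print_trainable_parameters()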

Enhancements
LoRA for nn.Parameter

LoRA is now able to target nn.Parameter directly (#​2638, #​2665)! Ever had this complicated nn.Module with promising parameters inside but it was too custom to be supported by your favorite fine-tuning library? No worries, now you can target nn.Parameters directly using the target_parameters config attribute which works similarly to target_modules.

This option can be especially useful for models with Mixture of Expert (MoE) layers, as those often use nn.Parameters directly and cannot be targeted with target_modules. For example, for the Llama4 family of models, use the following config to target the MoE weights:

config = LoraConfig(
    ...,
    target_modules=[],  # <= prevent targeting any modules
    target_parameters=["feed_forward.experts.down_proj", "feed_forward.experts.gate_up_proj"],
)

Note that this feature is still experimental as it comes with a few caveats and might therefore change in the future. Also, MoE weights with many experts can be quite huge, so expect higher memory usage than when targeting normal nn.Linear layers.

Injecting adapters based on a state_dict

Sometimes a PEFT adapter checkpoint exists but the corresponding PEFT config is not known for whatever reason. To inject the PEFT layers for this checkpoint, you would usually have to reverse-engineer the corresponding PEFT config, most notably the target_modules argument, based on the state_dict from the checkpoint. This can be cumbersome and error-prone. To avoid this, it is also possible to call inject_adapter_in_model and pass the loaded state_dict as an argument:

from safetensors.torch import load_file
from peft import LoraConfig, inject_adapter_in_model

model = ...
state_dict = load_file(<path-to-safetensors-file>)
lora_config = LoraConfig()  # <= no need to specify further
model = inject_adapter_in_model(lora_config, model, state_dict=state_dict)

Find more on state_dict based injection in the docs.

Changes
Compatibility

A bug in prompt learning methods caused modules_to_save to be ignored. Classification tasks are especially affected, since they usually add the classification/score layer to modules_to_save. As a consequence, these layers were neither trained nor stored after training. This has now been corrected. (#2646)

All Changes
New Contributors

Full Changelog: huggingface/peft@v0.16.0...v0.17.0

v0.16.0: LoRA-FA, RandLoRA, C³A, and much more

Compare Source

Highlights


New Methods
LoRA-FA

In #2468, @AaronZLT added the LoRA-FA optimizer to PEFT. This optimizer is based on AdamW and increases the memory efficiency of LoRA training. This means that you can train LoRA with less memory or, within the same memory budget, use higher LoRA ranks, potentially getting better results.
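
A rough sketch of how this could be used with the Trainer; the helper name create_lorafa_optimizer and its arguments are assumptions based on PR #2468, so please check the PEFT docs for the exact import path and signature:

from transformers import AutoModelForCausalLM, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model
from peft.optimizers import create_lorafa_optimizer  # assumed location per PR #2468

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")  # placeholder base model
config = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base_model, config)

# LoRA-FA freezes the LoRA A matrices, so only the B matrices carry optimizer state,
# which is where the memory savings come from.
optimizer = create_lorafa_optimizer(model=model, r=16, lora_alpha=32, lr=7e-5)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lorafa-out"),
    train_dataset=...,  # plug in your dataset here
    optimizers=(optimizer, None),  # custom optimizer, default learning rate scheduler
)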

RandLoRA

Thanks to @PaulAlbert31, a new PEFT method called RandLoRA was added to PEFT (#2464). Similarly to VeRA, it uses non-learnable random low-rank matrices that are combined through learnable matrices. This way, RandLoRA can approximate full-rank updates of the weights. Training models quantized with bitsandbytes is supported.

C³A

@Phoveran added Circular Convolution Adaptation, C3A, in #2577. This new PEFT method can overcome the low-rank limitation of methods such as LoRA while still promising to be fast and memory-efficient.

Enhancements

Thanks to @gslama12 and @SP1029, LoRA now supports Conv2d layers with groups != 1. This requires the rank r to be divisible by groups. See #2403 and #2567 for context.

@​dsocek added support for Intel Neural Compressor (INC) quantization to LoRA in #​2499.

DoRA now supports Conv1d layers thanks to @​EskildAndersen (#​2531).

Passing init_lora_weights="orthogonal" now enables orthogonal weight initialization for LoRA (#​2498).
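
For instance (placeholder model and target modules):

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")  # placeholder base model
config = LoraConfig(
    r=16,
    target_modules=["q_proj", "v_proj"],
    init_lora_weights="orthogonal",  # orthogonal initialization instead of the default
)
model = get_peft_model(base_model, config)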

@gapsong brought us Quantization-Aware LoRA training in #2571. This can make QLoRA training more efficient; please check the included example. Right now, only GPTQ is supported.

There has been a big refactor of Orthogonal Finetuning, OFT, thanks to @zqiu24 (#2575). This makes the PEFT method run more quickly and require less memory. It is, however, incompatible with old OFT checkpoints. If you have old OFT checkpoints, either pin the PEFT version to <0.16.0 or retrain them with the new PEFT version.

Thanks to @​keepdying, LoRA hotswapping with compiled models no longer leads to CUDA graph re-records (#​2611).

Changes
Compatibility
  • #2481: The value of requires_grad of modules_to_save is now set to True when used directly with inject_adapter. This is relevant for PEFT integrations, e.g. Transformers or Diffusers.
  • Due to a big refactor of vision language models (VLMs) in Transformers, the model architecture has been slightly adjusted. One consequence of this is that a PEFT prompt learning method applied to vlm.language_model will no longer work; please apply it to vlm directly (see #2554 for context). Moreover, the refactor results in different checkpoints. We managed to ensure backward compatibility in PEFT, i.e. old checkpoints can be loaded successfully. There is, however, no forward compatibility, i.e. loading checkpoints trained after the refactor is not possible with package versions from before the refactor. In this case, you need to upgrade PEFT and Transformers. More context in #2574.
  • #2579: There have been bigger refactors in Transformers concerning attention masks. This required some changes on the PEFT side which can affect prompt learning methods. For prefix tuning specifically, this can result in numerical differences, but overall performance should be the same. For other prompt learning methods, numerical values should be the same, except if the base model uses 4d attention masks, like Gemma. If you load old prompt learning checkpoints, please double-check that they still perform as expected, especially if they were trained on Gemma or similar models. If not, please re-train them or pin PEFT and Transformers to previous versions (<0.16.0 and <4.52.0, respectively).
All Changes

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about these updates again.


  • If you want to rebase/retry this PR, check this box

To execute skipped test pipelines, write the comment /ok-to-test.


Documentation

Find out how to configure dependency updates in MintMaker documentation or see all available configuration options in Renovate documentation.

@coveralls

Pull Request Test Coverage Report for Build 15089424447

Details

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall first build on konflux/mintmaker/konflux-poc/peft-0.x at 93.407%

Totals Coverage Status
  • Change from base Build 15020007478: 93.4%
  • Covered Lines: 85
  • Relevant Lines: 91

💛 - Coveralls

@red-hat-konflux red-hat-konflux bot force-pushed the konflux/mintmaker/konflux-poc/peft-0.x branch from a3dffa8 to 10afd8b on July 5, 2025 05:02

coderabbitai bot commented Jul 5, 2025

Important

Review skipped

Bot user detected.

To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.


Comment @coderabbitai help to get the list of available commands and usage tips.

@red-hat-konflux red-hat-konflux bot changed the title from "Update dependency peft to v0.15.2" to "Update dependency peft to v0.16.0" on Jul 5, 2025
@red-hat-konflux red-hat-konflux bot force-pushed the konflux/mintmaker/konflux-poc/peft-0.x branch from 10afd8b to b908a64 on August 9, 2025 08:24
@red-hat-konflux red-hat-konflux bot changed the title from "Update dependency peft to v0.16.0" to "Update dependency peft to v0.17.0" on Aug 9, 2025
@red-hat-konflux red-hat-konflux bot force-pushed the konflux/mintmaker/konflux-poc/peft-0.x branch from b908a64 to 2f9947a on August 23, 2025 08:42
@red-hat-konflux red-hat-konflux bot changed the title from "Update dependency peft to v0.17.0" to "Update dependency peft to v0.17.1" on Aug 23, 2025
Signed-off-by: red-hat-konflux <126015336+red-hat-konflux[bot]@users.noreply.github.com>
@red-hat-konflux red-hat-konflux bot force-pushed the konflux/mintmaker/konflux-poc/peft-0.x branch from 2f9947a to ea2b697 on November 13, 2025 13:00
@red-hat-konflux red-hat-konflux bot changed the title from "Update dependency peft to v0.17.1" to "Update dependency peft to v0.18.0" on Nov 13, 2025