11/08/24: I don't think this node is needed for LoRAs anymore, since SimpleTuner fixed their LoRA finetuning to not nuke the CFG distillation. It's still a neat node if you want CFG without relying on dynamic thresholding, though.
Just a quick and dirty fork of https://github.com/asagi4/ComfyUI-Adaptive-Guidance to make heavily trained Flux-dev LoRAs work properly. I think if you train them too hard, they essentially un-distill the model, so you need to reintroduce CFG at inference time. But using CFG quickly fries the image. So, to avoid resorting to dynamic thresholding, which reduces output quality, you can now skip applying CFG to the first few steps.
Of course this also speeds up inference a little, which is great if you want to finish generating your images before getting put into an assisted living facility by your grandkids.
I recommend skipping the first 2-6 steps. Experiment and have fun.
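To make the step-skipping concrete, here's a minimal sketch of the logic in plain PyTorch. The function and its signature are illustrative, not the node's actual code or ComfyUI's guider API:

```python
import torch

def guided_denoise(model, x, sigma, cond, uncond, cfg_scale, step, skip_steps):
    # `model` is any callable mapping (x, sigma, conditioning) -> prediction;
    # this mirrors the idea, not ComfyUI's real guider interface.
    cond_out = model(x, sigma, cond)
    if step < skip_steps:
        # Early steps: run like the distilled model (CFG = 1.0). The
        # negative prompt is never evaluated, which also saves a model call.
        return cond_out
    uncond_out = model(x, sigma, uncond)
    # Standard classifier-free guidance for the remaining steps.
    return uncond_out + cfg_scale * (cond_out - uncond_out)
```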
There's an `AdaptiveGuidance` node (under `sampling/custom_sampling/guiders`) that can be used with `SamplerCustomAdvanced`. Normally, you should keep the threshold quite high, between `0.99` and `1.0`.
The node calculates the cosine similarity between the U-Net's conditional and unconditional outputs ("positive" and "negative" prompts), and once the similarity crosses the specified threshold, it sets CFG to 1.0, effectively skipping negative prompt calculations and speeding up inference.
I'm not sure if the cosine similarity calculation matches the original paper since I had to translate from maths to Python, but it appears to work.
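For reference, the check amounts to something like this (a sketch of the idea, not the node's exact code):

```python
import torch
import torch.nn.functional as F

def should_disable_cfg(cond_out: torch.Tensor, uncond_out: torch.Tensor,
                       threshold: float) -> bool:
    # Flatten everything but the batch dimension and compare the two
    # predictions per sample.
    sim = F.cosine_similarity(cond_out.flatten(1), uncond_out.flatten(1), dim=1)
    # Once the batch-averaged similarity crosses the threshold, CFG is
    # pinned to 1.0 for the rest of the sampling run.
    return sim.mean().item() >= threshold
```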
Set `uncond_zero_scale` to a value greater than 0 to enable "uncond zero" CFG after normal CFG gets disabled. Stolen from https://github.com/Extraltodeus/Uncond-Zero-for-ComfyUI.
It seems to work slightly better than just running without CFG, but YMMV.
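I haven't verified the exact math in that repo, but the simplest reading is the usual CFG formula with a zero tensor standing in for the unconditional prediction, roughly:

```python
import torch

def uncond_zero_guidance(cond_out: torch.Tensor,
                         uncond_zero_scale: float) -> torch.Tensor:
    # Assumed reading of "uncond zero": swap the uncond prediction for
    # zeros in the CFG formula. Extraltodeus's implementation may
    # normalize or scale differently; this is an illustration only.
    zeros = torch.zeros_like(cond_out)
    return zeros + uncond_zero_scale * (cond_out - zeros)
```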
Note: this functionality is unstable and will probably change, so using it means your workflows likely won't be perfectly reproducible.