
Comfy generates noise after cancelling Qwen Image Edit Nunchaku generation #10550

@MoreColors123

Description


Custom Node Testing

Expected Behavior

Follow-up generations with Nunchaku Qwen Image Edit work normally after interrupting a generation.

Actual Behavior

Since the update I did on 24 October for ComfyUI Portable (edit: it also happens with ComfyUI Desktop), I have an issue whenever I do the following:

  1. Generate a pic with Nunchaku Qwen Image Edit
  2. Interrupt the process
    -> Any following Nunchaku Qwen Image Edit generation comes out as noise (see below); the final picture is noise, not just the preview.

The only workaround I have found is to click the "Clear models and node cache" button in the upper right between generations, which of course increases generation time a lot (a scripted equivalent is sketched below).
Reverting to 0.3.65 also helps, though I don't understand why.
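
For reference, here is a minimal sketch of scripting that workaround between queued generations. It assumes ComfyUI's /free HTTP endpoint (which appears to be what the "Clear models and node cache" button calls) and a default local server at 127.0.0.1:8188; adjust the address for your setup.

```python
# Hedged sketch: clear loaded models and the node cache between generations,
# roughly equivalent to the "Clear models and node cache" button.
# Assumes a local ComfyUI server at the default address.
import requests

COMFY_URL = "http://127.0.0.1:8188"

def clear_models_and_cache():
    # /free unloads models and frees cached memory when both flags are set.
    requests.post(
        f"{COMFY_URL}/free",
        json={"unload_models": True, "free_memory": True},
        timeout=10,
    )

if __name__ == "__main__":
    clear_models_and_cache()
```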

This does NOT happen with any regular Qwen models, nor with the Qwen Edit FP8 version or any other models at all. It does not even happen with Nunchaku Flux Kontext. It happens only with the Nunchaku Qwen Image Edit models I have, all of which worked perfectly before. The workflow is basically the template workflow for Qwen Image Edit; I only replaced the model loader with the Nunchaku one.

(Three screenshots attached showing the noisy output.)

Steps to Reproduce

  1. Generate a pic with Nunchaku Qwen Image Edit
  2. Interrupt the process
    Problem: any following Nunchaku Qwen Image Edit generation comes out as noise (a minimal API-based reproduction is sketched after this list).
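
The same sequence can be driven over ComfyUI's HTTP API. This is a rough sketch, assuming the standard /prompt and /interrupt endpoints, a default local server, and a workflow exported in API format (workflow_api.json is a placeholder name for the Qwen Image Edit template with the Nunchaku loader):

```python
# Hedged reproduction sketch: queue, interrupt mid-sampling, queue again.
import json
import time
import requests

COMFY_URL = "http://127.0.0.1:8188"

# Placeholder: export the workflow via "Save (API Format)" and point at it here.
with open("workflow_api.json") as f:
    workflow = json.load(f)

# 1. Generate a pic with Nunchaku Qwen Image Edit.
requests.post(f"{COMFY_URL}/prompt", json={"prompt": workflow}, timeout=10)

# 2. Interrupt the process a few steps into sampling.
time.sleep(3)
requests.post(f"{COMFY_URL}/interrupt", timeout=10)

# Any following generation of the same workflow now comes out as noise.
requests.post(f"{COMFY_URL}/prompt", json={"prompt": workflow}, timeout=10)
```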

Debug Logs

got prompt
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
Using scaled fp8: fp8 matrix mult: False, scale input: False
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
Requested to load WanVAE
loaded completely 2307.8046875 242.02829551696777 True
Requested to load QwenImageTEModel_
loaded completely 13243.646704483031 7909.737449645996 True
loaded completely 13243.646704483031 7909.737449645996 True
model weight dtype torch.bfloat16, manual cast: None
model_type FLUX
Enabling CPU offload
Requested to load NunchakuQwenImage
 25%|█████████████████████████████████████▊                                                                                                                 | 2/8 [00:03<00:09,  1.60s/it]Interrupting prompt c4c29a20-716d-46c4-8614-29cd1f927f9e
 25%|█████████████████████████████████████▊                                                                                                                 | 2/8 [00:03<00:10,  1.72s/it]
Processing interrupted
Prompt executed in 23.99 seconds
got prompt
 12%|██████████████████▉                                                                                                                                    | 1/8 [00:01<00:11,  1.60s/it]Interrupting prompt 1b931387-9d97-4f32-bc3f-b87168645a08
 12%|██████████████████▉                                                                                                                                    | 1/8 [00:02<00:20,  2.87s/it]
Processing interrupted
Prompt executed in 2.89 seconds

Other

No response

Labels: Potential Bug (user is reporting a bug; this should be tested)
