Description
Custom Node Testing
- I have tried disabling custom nodes and the issue persists (see how to disable custom nodes if you need help)
Expected Behavior
The output mask should preserve the correct axis order [B, H, W] (batch, height, width).
For an input of shape [1, 256, 512] scaled by 2.0, the expected output is:
[1, 512, 1024] # [B, H, W]
Actual Behavior
The output mask has its height and width axes transposed:
[1, 1024, 512] # [B, W, H]
Steps to Reproduce
- Create a mask tensor of shape `[B, H, W]`
- Connect it to the Resize Image/Mask node
- Set `scale_method` to `lanczos`
- Set any resize type (e.g. Scale By with multiplier `2.0`)
- Observe the shape of the output mask
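The transposition can be reproduced outside ComfyUI with a minimal stand-in for the resize helper. This is a hypothetical sketch (nearest-neighbor instead of lanczos, plain NumPy instead of the node's code) showing how swapping the (height, width) argument order produces the [B, W, H] shape reported above:

```python
import numpy as np

def resize_mask(mask: np.ndarray, new_h: int, new_w: int) -> np.ndarray:
    """Nearest-neighbor resize of a [B, H, W] mask (stand-in for lanczos)."""
    b, h, w = mask.shape
    rows = np.arange(new_h) * h // new_h  # source row for each output row
    cols = np.arange(new_w) * w // new_w  # source column for each output column
    return mask[:, rows[:, None], cols[None, :]]

mask = np.zeros((1, 256, 512), dtype=np.float32)  # [B, H, W]
ok = resize_mask(mask, 512, 1024)    # correct call: (new_h, new_w)
bad = resize_mask(mask, 1024, 512)   # swapped arguments reproduce the bug
print(ok.shape)   # (1, 512, 1024) -> [B, H, W]
print(bad.shape)  # (1, 1024, 512) -> [B, W, H]
```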
Debug Logs
Found comfy_kitchen backend triton: {'available': True, 'disabled': True, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8']}
Found comfy_kitchen backend cuda: {'available': True, 'disabled': False, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8', 'scaled_mm_nvfp4']}
Found comfy_kitchen backend eager: {'available': True, 'disabled': False, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8', 'scaled_mm_nvfp4']}
Checkpoint files will always be loaded safely.
Total VRAM 32607 MB, total RAM 130944 MB
pytorch version: 2.10.0+cu130
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 5090 : cudaMallocAsync
Using async weight offloading with 2 streams
Enabled pinned memory 58924.0
working around nvidia conv3d memory bug.
Using pytorch attention
Python version: 3.12.12 | packaged by Anaconda, Inc. | (main, Oct 14 2025, 16:10:16) [MSC v.1929 64 bit (AMD64)]
ComfyUI version: 0.15.1
ComfyUI frontend version: 1.39.19
[Prompt Server] web root: ~\miniconda3\envs\comfyenv\Lib\site-packages\comfyui_frontend_package\static
Skipping loading of custom nodes
Context impl SQLiteImpl.
Will assume non-transactional DDL.
Assets scan(roots=['models']) completed in 0.030s (created=0, skipped_existing=11, orphans_pruned=0, total_seen=11)
Starting serverOther
Note: While `lanczos` is generally not the recommended interpolation method for masks (`nearest-exact` or `bilinear` are more appropriate for binary/grayscale masks), the node should still handle the tensor dimensions correctly regardless of the chosen scale method.
The bug is in `comfy/utils.py`, inside the `lanczos()` function.
Not affected: all other scale methods (area, bilinear, bicubic, nearest-exact), which use `torch.nn.functional.interpolate` and handle the tensor dimensions correctly.
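For comparison, a minimal sketch of why the interpolate-based path keeps the axes straight (the mode name here is a standard torch one, not the node's internals): `F.interpolate` takes `size=(H, W)`, matching the tensor layout, so no axis-order conversion is needed:

```python
import torch
import torch.nn.functional as F

mask = torch.rand(1, 256, 512)  # [B, H, W]
out = F.interpolate(
    mask.unsqueeze(1),          # [B, 1, H, W] - interpolate wants a channel dim
    size=(512, 1024),           # (H, W) order, same convention as the tensor
    mode="bilinear",
    align_corners=False,
).squeeze(1)                    # back to [B, H, W]
print(out.shape)  # torch.Size([1, 512, 1024])
```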
I'll submit a PR with a fix shortly.