
Resize Image/Mask: mask output has width and height dimensions swapped when using lanczos scale method #12678

@edoardocarmignani

Description


Custom Node Testing

Expected Behavior


The output mask should preserve the correct spatial axis order [B, H, W].
For an input of shape [1, 256, 512] scaled by 2.0, the expected output is:

[1, 512, 1024] # [B, H, W]
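For reference, this is the behavior the non-lanczos scale methods already exhibit. A minimal sketch (my own illustration using torch.nn.functional.interpolate, not ComfyUI's actual node code):

```python
import torch
import torch.nn.functional as F

mask = torch.rand(1, 256, 512)  # [B, H, W]

# interpolate expects [B, C, H, W], so add a channel axis and drop it afterwards
resized = F.interpolate(mask.unsqueeze(1), scale_factor=2.0, mode="bilinear").squeeze(1)

print(resized.shape)  # torch.Size([1, 512, 1024]) -- [B, H, W] preserved
```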

Actual Behavior


The output mask has its height and width axes transposed. For the same [1, 256, 512] input scaled by 2.0, the node instead returns:

[1, 1024, 512] # [B, W, H]

Steps to Reproduce

  1. Create a mask tensor of shape [B, H, W]
  2. Connect it to the Resize Image/Mask node
  3. Set scale_method to lanczos
  4. Set any resize type (e.g. Scale By with multiplier 2.0)
  5. Observe the shape of the output mask
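To make step 5 concrete, here is a small hypothetical helper (check_mask_resize is my own name, not part of ComfyUI) that classifies the output shape of a resized mask:

```python
import torch

def check_mask_resize(out_mask: torch.Tensor, in_mask: torch.Tensor, scale: float) -> str:
    """Classify a resized [B, H, W] mask's shape. Hypothetical helper, not ComfyUI API."""
    b, h, w = in_mask.shape
    expected = (b, round(h * scale), round(w * scale))    # [B, H, W]
    transposed = (b, round(w * scale), round(h * scale))  # [B, W, H] -- the bug
    if tuple(out_mask.shape) == expected:
        return "ok"
    if tuple(out_mask.shape) == transposed and h != w:
        return "H/W swapped"
    return "unexpected shape"

# With the lanczos bug, a [1, 256, 512] mask scaled by 2.0 comes back [1, 1024, 512]:
buggy = torch.zeros(1, 1024, 512)
print(check_mask_resize(buggy, torch.zeros(1, 256, 512), 2.0))  # H/W swapped
```

Note that the check only works for non-square masks; a square mask hides the transposition, which is likely why the bug is easy to miss.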

lanczos_bug.json

Debug Logs

Found comfy_kitchen backend triton: {'available': True, 'disabled': True, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8']}
Found comfy_kitchen backend cuda: {'available': True, 'disabled': False, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8', 'scaled_mm_nvfp4']}
Found comfy_kitchen backend eager: {'available': True, 'disabled': False, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8', 'scaled_mm_nvfp4']}
Checkpoint files will always be loaded safely.
Total VRAM 32607 MB, total RAM 130944 MB
pytorch version: 2.10.0+cu130
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 5090 : cudaMallocAsync
Using async weight offloading with 2 streams
Enabled pinned memory 58924.0
working around nvidia conv3d memory bug.
Using pytorch attention
Python version: 3.12.12 | packaged by Anaconda, Inc. | (main, Oct 14 2025, 16:10:16) [MSC v.1929 64 bit (AMD64)]
ComfyUI version: 0.15.1
ComfyUI frontend version: 1.39.19
[Prompt Server] web root: ~\miniconda3\envs\comfyenv\Lib\site-packages\comfyui_frontend_package\static
Skipping loading of custom nodes
Context impl SQLiteImpl.
Will assume non-transactional DDL.
Assets scan(roots=['models']) completed in 0.030s (created=0, skipped_existing=11, orphans_pruned=0, total_seen=11)
Starting server

Other

Note: While lanczos is generally not the recommended interpolation method for masks (nearest-exact or bilinear are more appropriate for binary/grayscale masks), the node should still handle the tensor dimensions correctly regardless of the chosen scale method.

The bug is in comfy/utils.py, inside the lanczos() function.
All other scale methods (area, bilinear, bicubic, nearest-exact) are unaffected; they use torch.nn.functional.interpolate, which handles the tensor dimensions correctly.

I'll submit a PR with a fix shortly.

Metadata

Labels: Potential Bug (User is reporting a bug. This should be tested.)
