
Segmentation running slow all of a sudden #15

Open
dapa5900 opened this issue Jan 8, 2025 · 8 comments
@dapa5900 commented Jan 8, 2025

Recently, the segmentation runs very slow. Before Christmas the same image used to take under 1 sec on my 4090; now it takes at least 2-3 sec, and sometimes even over 15 sec (it seems it goes into CPU mode even though the VRAM is not full).

@petercham (Contributor)

> Recently, the segmentation runs very slow. Before Christmas the same image used to take under 1 sec on my 4090; now it takes at least 2-3 sec, and sometimes even over 15 sec (it seems it goes into CPU mode even though the VRAM is not full).

I didn't notice the problem you mentioned. If it took that long, the model should have been cleared from the cache; please check whether you are using other very memory-intensive models.

@dapa5900 (Author) commented Jan 9, 2025 via email

@lldacing (Owner)

> It still persists here, but only on the Rmbg and Rmbg Advanced nodes. On the GetMask node with the Portrait model it is still the fastest segmentation of all the ones I tested, at 0.5 sec max.

[image]

These three processes are equivalent. The difference from the previous version is that it now uses fast-foreground-estimation to composite the image. You can use other nodes to blend the image and the mask instead.
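For context, "fast-foreground-estimation" refers to blur-fusion foreground estimation (Germer et al., "Fast Multi-Level Foreground Estimation"), which matches the BlurFusionForegroundEstimation node listed later in this thread. A minimal single-level NumPy sketch of the idea follows; the function names and default blur radius are illustrative, not the repo's actual API:

```python
import numpy as np

def box_blur(x, r):
    """Separable box blur with edge padding (kernel width 2*r + 1)."""
    k = 2 * r + 1
    for axis in (0, 1):
        pad = [(0, 0)] * x.ndim
        pad[axis] = (r, r)
        xp = np.pad(x, pad, mode="edge")
        c = np.cumsum(xp, axis=axis)
        zero = np.zeros_like(np.take(c, [0], axis=axis))
        c = np.concatenate([zero, c], axis=axis)
        n = c.shape[axis]
        hi = np.take(c, np.arange(k, n), axis=axis)
        lo = np.take(c, np.arange(0, n - k), axis=axis)
        x = (hi - lo) / k
    return x

def blur_fusion_foreground(image, alpha, r=45):
    """One blur-fusion pass: smear confident foreground colours into the
    soft alpha edge so compositing does not pick up background colour.
    image: (h, w, 3) floats in [0, 1]; alpha: (h, w) floats in [0, 1]."""
    a = alpha[..., None]
    eps = 1e-5
    # Alpha-weighted blurred estimates of foreground and background colour
    f = box_blur(image * a, r) / (box_blur(a, r) + eps)
    b = box_blur(image * (1 - a), r) / (box_blur(1 - a, r) + eps)
    # Fuse: correct the blurred foreground by the compositing residual
    fg = f + a * (image - a * f - (1 - a) * b)
    return np.clip(fg, 0.0, 1.0)
```

In fully opaque regions the correction term cancels the blur and the image passes through unchanged; only the soft edge pixels get their colours rewritten, which is what suppresses the halo discussed below.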

@dapa5900 (Author)

Thanks. Maybe that's the reason it's running so slow then? Can you recommend a solution that yields results similar to yours? I use "ImageRemoveAlpha" from the "LayerStyle" custom nodes, but it gives a halo around the edge (right side of the image) compared to what I get from using your "RmbgByBiRefNet" node directly (left side). Thank you.

[image: halo]
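The halo described above is a classic compositing artifact: with a straight-alpha "over" composite, out = F·a + B·(1−a), any background colour still stored in the foreground RGB near the soft edge bleeds into the result wherever 0 < a < 1. A tiny NumPy illustration with made-up values:

```python
import numpy as np

def composite_over(fg_rgb, alpha, bg_rgb):
    # Straight-alpha "over" compositing: out = F * a + B * (1 - a)
    a = alpha[..., None]
    return fg_rgb * a + bg_rgb * (1 - a)

# One soft-edge pixel whose RGB is still contaminated by the original
# (bright) background, composited over black:
fg = np.array([[[0.6, 0.1, 0.1]]])   # contaminated red foreground pixel
alpha = np.array([[0.5]])            # half-transparent edge pixel
black = np.zeros(3)
out = composite_over(fg, alpha, black)
# The contamination survives at half strength -> visible pale halo.
```

Foreground estimation (as in the BlurFusionForegroundEstimation node) rewrites the edge RGB to pure foreground colour before compositing, which is why the node's direct output does not show the halo.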

@lldacing (Owner)

> Thanks. Maybe that's the reason it's running so slow then? Can you recommend a solution that yields results similar to yours? I use "ImageRemoveAlpha" from the "LayerStyle" custom nodes, but it gives a halo around the edge (right side of the image) compared to what I get from using your "RmbgByBiRefNet" node directly (left side).

@dapa5900 I wrote a new node to reproduce the effect of the original version. You can add the node to the file birefnetNode.py and test it. I am not sure it will work for you; if it does, I will merge it.

class GetForegroundImageSimple:

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "image": ("IMAGE",),
                "mask": ("MASK", ),
            }
        }

    RETURN_TYPES = ("IMAGE",)
    RETURN_NAMES = ("image",)
    FUNCTION = "get_image"
    CATEGORY = "rembg/BiRefNet"

    def get_image(self, image, mask):
        # image.shape => (b, h, w, c)
        # mask.shape => (b, h, w)

        # Uncomment to check whether mask normalization affects the result
        # mask = normalize_mask(mask)

        image = add_mask_as_alpha(image, mask)

        return (image,)


NODE_CLASS_MAPPINGS = {
    "AutoDownloadBiRefNetModel": AutoDownloadBiRefNetModel,
    "LoadRembgByBiRefNetModel": LoadRembgByBiRefNetModel,
    "RembgByBiRefNet": RembgByBiRefNet,
    "RembgByBiRefNetAdvanced": RembgByBiRefNetAdvanced,
    "GetMaskByBiRefNet": GetMaskByBiRefNet,
    "BlurFusionForegroundEstimation": BlurFusionForegroundEstimation,
    "GetForegroundImageSimple": GetForegroundImageSimple,
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "AutoDownloadBiRefNetModel": "AutoDownloadBiRefNetModel",
    "LoadRembgByBiRefNetModel": "LoadRembgByBiRefNetModel",
    "RembgByBiRefNet": "RembgByBiRefNet",
    "RembgByBiRefNetAdvanced": "RembgByBiRefNetAdvanced",
    "GetMaskByBiRefNet": "GetMaskByBiRefNet",
    "BlurFusionForegroundEstimation": "BlurFusionForegroundEstimation",
    "GetForegroundImageSimple": "GetForegroundImageSimple",
}
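The node above delegates to the repo's add_mask_as_alpha helper. Its assumed semantics (appending the mask as a fourth channel to produce RGBA) can be sketched in NumPy as follows; the function name and behaviour here are an illustration of that assumption, not the repo's actual implementation, which operates on torch tensors:

```python
import numpy as np

def add_mask_as_alpha_sketch(image, mask):
    # image: (b, h, w, 3) float array, mask: (b, h, w) float array.
    # Append the mask as an alpha channel, yielding RGBA (b, h, w, 4).
    return np.concatenate([image, mask[..., None]], axis=-1)
```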

[image]

@dapa5900 (Author) commented Jan 14, 2025

Thank you. It works, but on a black background it gives the same halo result as the above-mentioned approach (although it is faster on my side, which is good :-)). Please find attached a workflow that compares these three approaches for an example input image:

[image: ModelBase_Girl_Wide_00027_]
[attachment: compare.json]

@lldacing (Owner)

> Thank you. It works, but on a black background it gives the same halo result as the above-mentioned approach (although it is faster on my side, which is good :-)). Please find attached a workflow that compares these three approaches for an example input image.

It looks like there is no difference from LayerStyle; I guess this is what fast-foreground-estimation solves.

@dapa5900 (Author)

Alright, thanks for your efforts!
