implemented generic type validation for all nodes. #4149
base: master
First hunk of the diff, adding a `force_bhw3` helper that coerces image tensors into BHW3 layout:

```python
@@ -13,6 +13,30 @@
import comfy.model_management

def force_bhw3(image):
    #convert [CHW, BCHW, CWH] to BHW3
    was_list = False

    while isinstance(image, list):
        was_list = True
        image = image[0]

    if len(image.shape) == 3:
        #add batch dimension
        image = image.unsqueeze(0)

    if image.shape[1] == 3:
        #BCHW color
        image = image.permute(0, 2, 3, 1)
        return image if not was_list else [image]

    if image.shape[1] == 1:
        #BCWH black and white
        image = image.permute(0, 3, 2, 1).expand(-1, -1, -1, 3)
        return image if not was_list else [image]

    return image if not was_list else [image]

def get_input_data(inputs, class_def, unique_id, outputs={}, prompt={}, extra_data={}):
    valid_inputs = class_def.INPUT_TYPES()
    input_data_all = {}
```
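To get a feel for what the helper does to concrete shapes, here is a small illustrative check (not part of the PR; it assumes PyTorch tensors, which is what ComfyUI passes around for IMAGE data, with hypothetical dimensions):

```python
import torch

# hypothetical inputs, just to illustrate the normalization
chw  = torch.rand(3, 64, 48)      # single color image, CHW
bchw = torch.rand(2, 3, 64, 48)   # batch of color images, BCHW
cwh  = torch.rand(1, 48, 64)      # single grayscale image, CWH

print(force_bhw3(chw).shape)      # torch.Size([1, 64, 48, 3])
print(force_bhw3(bchw).shape)     # torch.Size([2, 64, 48, 3])
print(force_bhw3(cwh).shape)      # torch.Size([1, 64, 48, 3])
```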
Second hunk, running the new helper over any declared IMAGE inputs at the top of `map_node_over_list`:

```python
@@ -42,6 +66,14 @@ def get_input_data(inputs, class_def, unique_id, outputs={}, prompt={}, extra_data={}):
    return input_data_all

def map_node_over_list(obj, input_data_all, func, allow_interrupt=False):
    #Ensure image inputs are in BHW3 format
    input_types = obj.INPUT_TYPES()
    for _, v in input_types.items():
        if isinstance(v, dict):
            for k2, v2 in v.items():
                if v2[0] == "IMAGE":
                    input_data_all[k2] = [force_bhw3(x) for x in input_data_all[k2]]

    # check if node wants the lists
    input_is_list = False
    if hasattr(obj, "INPUT_IS_LIST"):
```
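The nested loop walks the node's declared input schema. For context, `INPUT_TYPES()` on a ComfyUI node conventionally returns a dict of categories ("required", "optional", ...), each mapping an input name to a tuple whose first element is the type string; that first element is what the `v2[0] == "IMAGE"` check looks at. A sketch with a hypothetical node, to show what the loop iterates over:

```python
class ScaleImageExample:
    # hypothetical node, for illustration only
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "image": ("IMAGE",),                      # v2[0] == "IMAGE" -> input goes through force_bhw3
                "strength": ("FLOAT", {"default": 1.0}),  # non-IMAGE inputs are left untouched
            }
        }

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "scale"

    def scale(self, image, strength):
        return (image * strength,)
```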
Third hunk, normalizing IMAGE entries in the results tuple after the node has executed:

```python
@@ -73,6 +105,26 @@ def slice_dict(d, i):
            if allow_interrupt:
                nodes.before_node_execution()
            results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))

    #Ensure IMAGE outputs conform to BHWC
    return_indexs = {}
    formated_results = []

    if hasattr(obj, "RETURN_NAMES") and hasattr(obj, "RETURN_TYPES"):
        for i, t in enumerate(obj.RETURN_TYPES):
            return_indexs[i] = t

    for i, r in enumerate(results[0]):
        if return_indexs[i] == "IMAGE":
            print(f"Result: {force_bhw3(r).shape}")
            formated_results.append(force_bhw3(r))
        else:
            formated_results.append(r)

    results = [tuple(formated_results)]

    del formated_results

    return results

def get_output_data(obj, input_data_all):
```
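The output pass builds a position-to-type map from `RETURN_TYPES` and only routes the `IMAGE` entries through `force_bhw3`. A quick illustration of that mapping, with hypothetical return types:

```python
# hypothetical node attribute, for illustration
RETURN_TYPES = ("IMAGE", "MASK")

return_indexs = {i: t for i, t in enumerate(RETURN_TYPES)}
print(return_indexs)  # {0: 'IMAGE', 1: 'MASK'}
# in the diff above, only result entries mapped to "IMAGE" are normalized;
# everything else is appended to formated_results unchanged
```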
Reviewer comment:

Isn't this only going to fix the first image in the list? If we get a list of images, we should be fixing all the images in the list (which may all have different dimensions). (I would probably make this change outside of the call to `force_bhw3` so that it applies to any other type validation we add in the future.)
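A hedged sketch of that suggestion, applying the normalization to every entry of a list rather than unwrapping to the first element (the helper name `force_bhw3_all` is hypothetical, not part of the PR):

```python
def force_bhw3_all(image_or_list):
    # hypothetical variant: keep the list structure and normalize each entry,
    # instead of discarding everything after the first element
    if isinstance(image_or_list, list):
        return [force_bhw3_all(x) for x in image_or_list]
    return force_bhw3(image_or_list)
```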
Author reply:

Sometimes the image tensor was wrapped in an extraneous list. It did not seem to be related to batch size in any way; it's just a difference between how the data is formatted on input and how it's structured for output. I have no idea why it's like that, but the data does not change between the two.

Granted, it's a very ugly hack; I could have gone for an `if` instead of a `while`. I just really, really wanted to get rid of the list wrapper before, you know, adding it back at the end for formatting reasons if it had been removed.

I could be wrong. That whole function is likely to evolve considerably as I start to expose it to a wide variety of edge cases, such as 3x3x3x3, and also take into consideration the suggestions and the further Discord discussion we have had.
Reviewer reply:

It's not frequently used, but ComfyUI does have functionality for "list" outputs and inputs that are different from batches. Specifically, if a node returns a batch of 5 images, those 5 images will all be passed to the following node for one execution. If a node returns a list of 5 images (each with a batch size of 1), the following node will actually have its execution function called 5 times. In order to support this, outputs are usually passed around wrapped in a list. I believe that's what you were seeing.

To continue to support that functionality, it's important that we process each entry in the list the same way. Someone in Discord/Matrix might have suggestions for real nodes that make use of that functionality so you can test it. I think some of the nodes used for making X/Y plots use it.
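To make the batch-versus-list distinction concrete, a hedged sketch of two hypothetical nodes (the `OUTPUT_IS_LIST` attribute is, as I understand ComfyUI's convention, the output-side counterpart of the `INPUT_IS_LIST` check seen in the diff above):

```python
import torch

class BatchOfFiveExample:
    # hypothetical: one tensor with batch dimension 5;
    # the downstream node executes once and sees all 5 images
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {}}

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "run"

    def run(self):
        return (torch.rand(5, 64, 64, 3),)

class ListOfFiveExample:
    # hypothetical: a list of five batch-1 tensors;
    # the downstream node's function is called 5 times, once per entry
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {}}

    RETURN_TYPES = ("IMAGE",)
    OUTPUT_IS_LIST = (True,)
    FUNCTION = "run"

    def run(self):
        return ([torch.rand(1, 64, 64, 3) for _ in range(5)],)
```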