Hi,
I'm using KANLinear in my own project and I'm running into a CUDA out-of-memory error.
Specifically:
Model A uses a single MLP layer (one and only one nn.Linear layer in the whole network) to map a (160, 8, 197, 197) tensor to a (160, 8, 197, 197) tensor.
Model B uses a single KANLinear layer (one and only one KANLinear layer in the whole network) to map a (160, 8, 197, 197) tensor to a (160, 8, 197, 197) tensor.
The "whole network" is an MLP-based transformer for both model A and model B.
Model A uses about 60% of the VRAM of a GPU with 24 GB of VRAM, while model B runs out of CUDA memory. The difference in parameter count between the two models is negligible:
it is exactly the difference between a single nn.Linear(197, 197) and a single KANLinear(197, 197). How is it possible to get a CUDA out-of-memory error? A measurement sketch follows below.
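
For anyone trying to reproduce the comparison, here is a minimal sketch of how peak memory for the two layers could be measured in isolation (the KANLinear import path and its default hyperparameters are assumptions; adjust them to your setup):

```python
import torch
import torch.nn as nn

# Hypothetical import: adjust to wherever KANLinear lives in your project.
# from efficient_kan import KANLinear

def peak_memory_mib(layer, x):
    """Run one forward+backward pass and report peak CUDA memory in MiB."""
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    out = layer(x)
    out.sum().backward()  # include backward-pass activations in the peak
    return torch.cuda.max_memory_allocated() / 2**20

# Same input shape as described above.
x = torch.randn(160, 8, 197, 197, device="cuda", requires_grad=True)

linear = nn.Linear(197, 197).cuda()
print(f"nn.Linear(197, 197) peak:  {peak_memory_mib(linear, x):.0f} MiB")

# Uncomment once KANLinear is importable:
# kan = KANLinear(197, 197).cuda()
# print(f"KANLinear(197, 197) peak: {peak_memory_mib(kan, x):.0f} MiB")
```

Note that in a typical KANLinear implementation the input is flattened to (-1, in_features) and expanded into a B-spline basis of shape (batch, in_features, grid_size + spline_order), so with this input a single intermediate is on the order of 160·8·197 × 197 × (grid_size + spline_order) floats. If that is the case here, activation memory rather than parameter count is the likely cause of the OOM.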