How to load the VideoCrafter2 model? #11

Open
laulampaul2 opened this issue Nov 15, 2024 · 4 comments

Comments

@laulampaul2

In train.py:
if config.model.unet == 'videoCrafter2':
unet = UNet3DConditionModel.from_pretrained("/hpc2hdd/home/lwang592/ziyang/cache/videocrafterv2",subfolder='unet')

I wonder what the contents of the videocrafterv2 folder should be. Does this setting load a safetensors or a ckpt model?

@ziyang1106
Collaborator

Sorry for the late response.

You may download the model from VideoCrafter2 (diffusers) and modify the corresponding path in the code.

Feel free to contact me if any new problems arise.
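
A minimal sketch of what that setup might look like. This is an assumption based on the standard diffusers folder layout, not the repo's exact code: the local path below is a placeholder for wherever you downloaded the diffusers-format VideoCrafter2 checkpoint, and `expected_unet_files` only illustrates which files diffusers looks for inside the `unet` subfolder.

```python
# Sketch (assumption): layout of a diffusers-format checkpoint folder and
# how train.py loads the UNet from it. Replace the placeholder path with
# your own download location.
import os

def expected_unet_files(use_safetensors: bool = True) -> list:
    # A diffusers model subfolder holds a config plus one weights file;
    # safetensors is preferred when present, .bin is the fallback.
    weights = ("diffusion_pytorch_model.safetensors" if use_safetensors
               else "diffusion_pytorch_model.bin")
    return ["config.json", weights]

model_dir = "./checkpoints/videocrafterv2"  # placeholder download location
if os.path.isdir(model_dir):
    # Either weights format inside `unet/` works with from_pretrained.
    from diffusers import UNet3DConditionModel
    unet = UNet3DConditionModel.from_pretrained(model_dir, subfolder="unet")
```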

@Fancy93

Fancy93 commented Dec 18, 2024

Why does the code still use the VideoCrafter2 (diffusers) format? Can it also support training with the VideoCrafter2 (ckpt) format?

@ziyang1106
Collaborator

Why does the code still use the VideoCrafter2 (diffusers) format? Can it also support training with the VideoCrafter2 (ckpt) format?

The diffusers format is used to quickly validate the effect of motion embedding on different pre-trained models within the existing framework. The VideoCrafter2 diffusers checkpoint can be swapped in with a single line of code. Currently, we have no plans to implement the same algorithm for the ckpt format.

Thank you for your support.

@Fancy93

Fancy93 commented Jan 23, 2025

Thank you very much. Can you provide the code for the other comparison methods? How do you conduct testing under a unified framework?
