Implementing that would make it possible to efficiently train a LiT-style CLIP model, which is pretty useful when a decent visual encoder is already available and the goal is to map it to captions (e.g. multilingual ones).
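As a rough illustration, LiT-style training amounts to the usual symmetric contrastive loss, but with the image tower locked (or replaced entirely by precomputed embeddings) so only the text tower learns. A minimal sketch below; the names `text_encoder`, `text_proj`, and the batch layout are assumptions, not anything from this repo:

```python
import torch
import torch.nn.functional as F

def clip_loss(image_features, text_features, logit_scale):
    # Symmetric InfoNCE loss over the in-batch similarity matrix.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)
    logits = logit_scale * image_features @ text_features.t()
    labels = torch.arange(logits.shape[0], device=logits.device)
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2

def train_step(text_encoder, text_proj, logit_scale, batch, optimizer):
    # `batch` holds precomputed image embeddings and tokenized captions,
    # so the locked visual encoder never runs during training.
    image_embeds, tokens = batch
    text_features = text_proj(text_encoder(tokens))
    loss = clip_loss(image_embeds, text_features, logit_scale.exp())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Since gradients only flow through the text side, the image embeddings can be computed once offline, which is what makes this cheap.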
https://github.com/lucidrains/DALLE2-pytorch/blob/main/train_diffusion_prior.py has a good data loader for embedding+caption pairs that could be reused.
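For reference, the shape of such a loader is roughly the following; the file layout (a `.npy` array of embeddings plus a parallel list of captions) and all names here are illustrative assumptions rather than the actual format used in that script:

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class EmbeddingCaptionDataset(Dataset):
    def __init__(self, embedding_path, caption_path, tokenizer):
        # One row per sample: precomputed image embedding + raw caption.
        self.embeddings = np.load(embedding_path)               # (N, embed_dim)
        self.captions = open(caption_path).read().splitlines()  # N captions
        assert len(self.embeddings) == len(self.captions)
        self.tokenizer = tokenizer

    def __len__(self):
        return len(self.captions)

    def __getitem__(self, idx):
        embed = torch.from_numpy(self.embeddings[idx]).float()
        tokens = self.tokenizer(self.captions[idx])
        return embed, tokens

# Example usage (hypothetical paths and tokenizer):
# loader = DataLoader(EmbeddingCaptionDataset("img_emb.npy", "captions.txt", tokenize),
#                     batch_size=1024, shuffle=True)
```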
We may work on this ourselves; not asking for someone else to do it :)