Is it possible to fine-tune the existing model with my own artist? #43
The easiest approach would be to train your own top-level prior on a new dataset. In theory, if you have enough VRAM/GPUs, you could finetune from our pretrained top-level priors, but it is going to be a lot of work, possibly involving a fair bit of code change/model surgery. 1B top-level training fits on a single GPU with gradient checkpointing (enabled with …)
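The memory trick mentioned above — gradient checkpointing — trades compute for memory by storing only some activations during the forward pass and recomputing the rest when needed. The following is a toy, framework-free sketch of that recomputation idea, not Jukebox's actual implementation (which uses PyTorch); the doubling "layer" and function names are purely illustrative:

```python
# Toy illustration of activation checkpointing: keep only every `stride`-th
# activation from the forward pass, and recompute intermediate ones on demand
# from the nearest stored checkpoint. Each "layer" here is just x -> x * 2.

def forward_with_checkpoints(x, n_layers, stride):
    """Run n_layers, saving only every `stride`-th activation."""
    saved = {0: x}              # checkpointed activations, keyed by layer index
    for i in range(n_layers):
        x = x * 2               # stand-in for a real layer
        if (i + 1) % stride == 0:
            saved[i + 1] = x
    return x, saved

def recompute(saved, layer, stride):
    """Recover the activation after `layer` layers from the nearest checkpoint."""
    base = (layer // stride) * stride
    x = saved[base]
    for _ in range(layer - base):
        x = x * 2               # redo the forward work between checkpoints
    return x

out, saved = forward_with_checkpoints(1, 8, 4)
# Only activations at layers 0, 4, 8 are stored instead of all 9.
mid = recompute(saved, 5, 4)
```

With `stride = 4` over 8 layers, memory drops from 9 stored activations to 3, at the cost of redoing up to `stride - 1` layers of forward computation during the backward pass.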
@heewooj Thank you so much! I'll try training a top-level prior. How much data is recommended? And would a free Colab GPU be sufficient for this, or is this something that would require spending some money to train?
A way to fine-tune from our models would be to add new embedding(s) for your new artist(s), and initialise them from the artist_id = 0, i.e. "unknown", artist embedding.
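The model surgery described above amounts to appending rows to the pretrained artist-embedding table, each initialised as a copy of the artist_id = 0 ("unknown") row. Here is a minimal NumPy sketch of that idea; the function name, table size, and embedding dimension are illustrative, not Jukebox's actual parameter names or shapes:

```python
import numpy as np

def add_artist_embeddings(table: np.ndarray, n_new: int) -> np.ndarray:
    """Append n_new rows initialised from row 0, the 'unknown' artist.

    table: (n_artists, dim) pretrained embedding matrix (hypothetical layout).
    Returns a new (n_artists + n_new, dim) matrix; the appended rows are
    copies of the artist_id = 0 embedding, as suggested in the thread.
    """
    unknown = table[0:1]                          # (1, dim) row for artist_id 0
    new_rows = np.repeat(unknown, n_new, axis=0)  # one copy per new artist
    return np.concatenate([table, new_rows], axis=0)

# Illustrative sizes: 1000 known artists, 64-dim embeddings.
pretrained = np.random.randn(1000, 64)
extended = add_artist_embeddings(pretrained, 2)   # ids 1000 and 1001 are new
```

In a real checkpoint you would perform the equivalent copy on the model's embedding weight tensor before resuming training, so the new artists start from the "unknown" representation and specialise as fine-tuning proceeds.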
^ 👍 also, this function has to be implemented if you'd like to enable …
@prafullasd @heewooj I'll read more on this and give it a shot. Thanks guys!
We've updated the instructions on how to finetune from 1b_lyrics or train from scratch. Hope it helps!
@heewooj Wonderful! You guys are amazing.
related - #40
Thanks a lot for all this support!
For Prior Level 2 training? |
I'd like to use the model but fine-tune it for my own custom artist (who I don't know is in the dataset), e.g. Pavarotti. There are others I'd like to try this with too.
How could I go about that? Is there a way to do it through the provided Colab link?