-
I think having an example like @RKhobrag mentions would really add insight into, and generate interest in, EasyLM and the OpenLLaMA initiative you are undertaking. 👍
-
I have two questions regarding fine-tuning.
First: to fine-tune a model like OpenLLaMA 7B, what is a decent size for the training corpus? And does the data have to be in an instruction-style format, or can it be a plain-text corpus?
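To clarify what I mean, here is the difference I have in mind (the instruction-style record follows an Alpaca-like schema; the field names are my assumption, and I don't know whether EasyLM expects this exact layout):

```python
# Instruction-style record (Alpaca-like schema; these field names are
# my assumption, not necessarily what EasyLM expects):
instruction_record = {
    "instruction": "Summarize the following paragraph in one sentence.",
    "input": "OpenLLaMA is an openly licensed reproduction of LLaMA ...",
    "output": "OpenLLaMA is a permissively licensed LLaMA reproduction.",
}

# Plain-text record: just a raw document, with no prompt/response structure:
plain_text_record = "OpenLLaMA is an openly licensed reproduction of LLaMA ..."
```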
Second: what task exactly does the fine-tuning train the model on?
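My guess is that it is the same next-token prediction (causal language modeling) objective as pretraining, just on the new data. Here is a minimal JAX sketch of what I imagine the loss looks like; the function and argument names are my own for illustration, not EasyLM's actual API:

```python
import jax.numpy as jnp
import optax

def causal_lm_loss(logits, tokens, mask):
    """Next-token prediction: the logits at position t are scored
    against the token at position t+1. (My assumption of the objective,
    not taken from the EasyLM source.)"""
    shifted_logits = logits[:, :-1, :]  # predictions for positions 1..T-1
    targets = tokens[:, 1:]             # the tokens those positions predict
    loss_mask = mask[:, 1:]             # skip padding positions
    per_token = optax.softmax_cross_entropy_with_integer_labels(
        shifted_logits, targets
    )
    # Average only over real (non-padding) tokens.
    return jnp.sum(per_token * loss_mask) / jnp.maximum(jnp.sum(loss_mask), 1.0)
```

If the actual objective differs from this (e.g. if the prompt tokens are masked out of the loss for instruction data), I'd love to know.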
Thank you for all the help, and for taking on this initiative!