Awesome! This is truly meaningful and important work! I have a few questions.
What is the exact backbone of this model: meta-llama/Meta-Llama-3-8B, meta-llama/Meta-Llama-3-8B-Instruct, or a more recent Llama model? Also, what prompting template was used to fine-tune it (i.e., what prompting template should one use when interacting with the model)? Is it the same template as Meta-Llama-3-8B-Instruct, or was it fine-tuned without a prompt template, analogous to the first MeLlama version?
Looking forward to your reply. Thank you very much!