In the examples (and the evals on the honesty-eval branch), when load_model is called in the notebook setup, the "revision" parameter is passed only to the model's from_pretrained, not to the tokenizer's from_pretrained. This probably makes no practical difference (the tokenizer is likely identical between the revision and the main branch for the models I've used), but it should still be added for correctness.
The suggested fix is the `revision=revision` argument in the tokenizer's from_pretrained call (shown in bold in the original issue):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def load_model(model_name_or_path, revision, device):
    model = AutoModelForCausalLM.from_pretrained(
        model_name_or_path, device_map=device, revision=revision,
        trust_remote_code=False)
    tokenizer = AutoTokenizer.from_pretrained(
        model_name_or_path, use_fast=True, padding_side="left",
        revision=revision)
    tokenizer.pad_token_id = 0
    return model, tokenizer
```
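For what it's worth, a quick way to confirm the fix forwards `revision` to both calls is to swap in stand-in classes (this is a sketch, not the real transformers API; the `_FakeAuto` class and the "step1000" revision name are made up for illustration):

```python
class _FakeAuto:
    """Stand-in for AutoModelForCausalLM / AutoTokenizer.
    Records the kwargs that from_pretrained receives."""
    def __init__(self):
        self.seen_kwargs = None

    def from_pretrained(self, name, **kwargs):
        self.seen_kwargs = kwargs
        return self

# Replace the real classes with recorders.
AutoModelForCausalLM = _FakeAuto()
AutoTokenizer = _FakeAuto()

def load_model(model_name_or_path, revision, device):
    model = AutoModelForCausalLM.from_pretrained(
        model_name_or_path, device_map=device, revision=revision,
        trust_remote_code=False)
    tokenizer = AutoTokenizer.from_pretrained(
        model_name_or_path, use_fast=True, padding_side="left",
        revision=revision)  # the proposed fix: pass revision here too
    tokenizer.pad_token_id = 0
    return model, tokenizer

model, tokenizer = load_model("some/model", revision="step1000", device="cpu")
# Both calls received the same revision.
print(model.seen_kwargs["revision"], tokenizer.seen_kwargs["revision"])
```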