Roberta hidden_states[-1] == Bert pooler_output? #2321
-
I want to feed in the last-layer hidden state generated by RoBERTa: `out = pretrained_roberta(dummy_input["input_ids"], dummy_input["attention_mask"], output_hidden_states=True)`. Is that equivalent to `pooler_output` in BERT? The docs describe `pooler_output` (`torch.FloatTensor` of shape `(batch_size, hidden_size)`) as: "Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task."
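For context, the two are not the same: `hidden_states[-1]` is the raw last-layer output for every token, while `pooler_output` additionally passes the first token's state through a dense layer with a tanh activation. A minimal NumPy sketch of that difference (the weights, toy hidden size, and inputs here are made up for illustration, not taken from any real checkpoint):

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_size = 8  # toy size; BERT/RoBERTa base actually use 768

# hidden_states[-1]: last-layer states, shape (batch, seq_len, hidden_size)
last_hidden = rng.standard_normal((2, 5, hidden_size))

# The pooler is an extra dense layer + tanh applied to the first
# ([CLS] / <s>) token's hidden state; W and b stand in for its weights.
W = rng.standard_normal((hidden_size, hidden_size))
b = rng.standard_normal(hidden_size)

cls_hidden = last_hidden[:, 0]               # raw first-token state
pooler_output = np.tanh(cls_hidden @ W + b)  # what pooler_output adds on top

# The transformed output differs from the raw first-token state
print(pooler_output.shape)                    # (2, 8)
print(np.allclose(cls_hidden, pooler_output)) # False
```

So to mimic `pooler_output` from `hidden_states[-1]` you would still need the model's own pooler weights; the raw first-token state alone is not equivalent.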
-
Hi @Yuji-github, are you using HuggingFace? In that case it might be easier to ask in their repo, since we do not own these pretrained models. Also, you seem to mention some torch in your question; is that intended?