aedca6f adds a "working" (i.e., doesn't error) BERT model, but it doesn't seem to learn very well. Among the design considerations:
- Wiring up the actual config parameters in the YAML file so that they do something. I'm not sure how much flexibility we have with the pre-trained HuggingFace models, but at the very least we shouldn't expose extraneous options (see the config sketch after this list).
- It seems best (for training time) to freeze the layers of the BERT encoder, but maybe this should be a user-configurable option as well (also covered in the config sketch).
- The positional encodings built into the HuggingFace BERT models don't seem to be useful in a sequence-to-sequence context. I'm not really sure why this is, but it is fixable if we add our own positional encodings to the embedding layer of the pretrained models (see the positional-encoding sketch below).
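For the first two bullets, here is a minimal sketch of one way the YAML options could be forwarded to the pretrained model, with the freeze exposed as a flag. The YAML keys (`pretrained_name`, `dropout`, `freeze_encoder`) and the helper name are hypothetical, not the project's actual schema; the only HuggingFace-specific assumption is that `from_pretrained` accepts config overrides such as the dropout probabilities as keyword arguments, which the transformers API supports.

```python
import yaml
from transformers import BertModel

# Hypothetical YAML fragment; the real config schema may differ.
CONFIG_YAML = """
encoder:
  pretrained_name: bert-base-uncased
  dropout: 0.1
  freeze_encoder: true
"""


def build_bert_encoder(config_yaml: str = CONFIG_YAML) -> BertModel:
    """Builds a BERT encoder from a (hypothetical) YAML config."""
    cfg = yaml.safe_load(config_yaml)["encoder"]
    # Only a few HuggingFace config fields can meaningfully be overridden on a
    # pretrained checkpoint (e.g. the dropout probabilities); structural options
    # like hidden size would effectively require training from scratch.
    model = BertModel.from_pretrained(
        cfg["pretrained_name"],
        hidden_dropout_prob=cfg["dropout"],
        attention_probs_dropout_prob=cfg["dropout"],
    )
    # Optionally freeze the pretrained weights to speed up training.
    if cfg.get("freeze_encoder", False):
        for param in model.parameters():
            param.requires_grad = False
    return model
```

Since structural options can't really be changed on a pretrained checkpoint, that's one argument for keeping the exposed YAML surface small, with `freeze_encoder` as a flag so full fine-tuning stays available.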
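For the positional-encoding point, one reading of "add our own positional encodings" is to overwrite BERT's learned position-embedding table with fixed sinusoidal values, so the encoder sees standard transformer-style positions. This is a sketch of that idea, not the code in aedca6f; the helper names are made up, and it assumes a vanilla `transformers.BertModel`, whose embeddings expose `position_embeddings`.

```python
import math

import torch
from transformers import BertModel


def sinusoidal_table(num_positions: int, dim: int) -> torch.Tensor:
    """Fixed sinusoidal positional encodings (Vaswani et al. 2017)."""
    position = torch.arange(num_positions, dtype=torch.float).unsqueeze(1)
    div_term = torch.exp(
        torch.arange(0, dim, 2, dtype=torch.float) * (-math.log(10000.0) / dim)
    )
    table = torch.zeros(num_positions, dim)
    table[:, 0::2] = torch.sin(position * div_term)
    table[:, 1::2] = torch.cos(position * div_term)
    return table


def use_sinusoidal_positions(model: BertModel) -> BertModel:
    """Replaces BERT's learned position embeddings with fixed sinusoidal ones."""
    pos_emb = model.embeddings.position_embeddings
    num_positions, dim = pos_emb.weight.shape
    with torch.no_grad():
        pos_emb.weight.copy_(sinusoidal_table(num_positions, dim))
    # Keep the table fixed so it isn't clobbered during fine-tuning.
    pos_emb.weight.requires_grad = False
    return model
```

An alternative would be to add a sinusoidal term on top of the word embeddings and feed the result in via `inputs_embeds`, but the learned position table still gets added on top of that unless it is zeroed out first.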
Would be nice to have BERT as an option for the encoder. Some issues are:
- how BERT's tokenizer interacts with the `Field`s we've been using
- what do we do about the target vocabulary, since BERT's tokenizer might do weird things to it? (see the tokenizer sketch below)
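To make the vocabulary concern concrete, here is a minimal sketch (not from the repo) of what BERT's WordPiece tokenizer does to a string that a character- or word-level `Field` would otherwise handle symbol-by-symbol. It uses the stock `bert-base-uncased` checkpoint; the exact splits depend on that checkpoint's vocabulary.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# WordPiece splits out-of-vocabulary words into subword pieces with "##"
# continuation markers, which won't line up with a character- or word-level
# target vocabulary. The exact pieces depend on the checkpoint's vocab.
pieces = tokenizer.tokenize("transduction")
print(pieces)
print(tokenizer.convert_tokens_to_ids(pieces))
```

If the target side reused this tokenizer, gold targets would be rewritten into subword pieces before scoring, so keeping a separate target vocabulary, with BERT's tokenizer confined to the source side, seems like the safer default.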