#nlp
Pre-training
Transformer pre-training learns to predict the next token from the previous tokens (causal LM) or to predict a masked token (masked LM)
Types of architectures: encoder-only (masked LM, e.g. BERT), decoder-only (causal LM, e.g. GPT), encoder-decoder (e.g. T5)
Hugging Face library ecosystem
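Both objectives can be tried directly via Hugging Face pipelines; a minimal sketch, assuming the stock bert-base-uncased and gpt2 checkpoints as illustrative models:

```python
from transformers import pipeline

# Masked-LM objective (encoder-style, e.g. BERT): fill in the [MASK] token.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
print(fill_mask("Paris is the [MASK] of France.")[0]["token_str"])  # e.g. "capital"

# Causal-LM objective (decoder-style, e.g. GPT-2): predict the next tokens.
generate = pipeline("text-generation", model="gpt2")
print(generate("Paris is the capital of", max_new_tokens=5)[0]["generated_text"])
```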
@Wilmer, when you have little labelled text you might indeed be better off using the embeddings from sentence-transformers with kNN!
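That embeddings-plus-kNN idea, as a minimal sketch (the all-MiniLM-L6-v2 model and the toy labelled examples are just placeholders):

```python
from sentence_transformers import SentenceTransformer
from sklearn.neighbors import KNeighborsClassifier

model = SentenceTransformer("all-MiniLM-L6-v2")

# A handful of labelled texts stands in for the small labelled set.
texts = ["great product, works well", "broke after one day", "love it", "total waste of money"]
labels = ["pos", "neg", "pos", "neg"]

# Embed the labelled examples and fit a plain kNN classifier on the vectors.
knn = KNeighborsClassifier(n_neighbors=3).fit(model.encode(texts), labels)

# New text is classified by its nearest labelled neighbours in embedding space.
print(knn.predict(model.encode(["stopped working quickly"])))
```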
- NLP, RNN and representation
- LSTM intuition
- [[code/python/tensorflow/Transformers]]
- [[use-case-finetunning]]
- Sentence-BERT
- RoBERTa
- bert-as-service
- MPNet
- GloVe
- Universal Sentence Encoder
- On the Sentence Embeddings from Pre-trained Language Models
- Doc2Vec