
# Transformers

## Tags

#nlp

## Pre-training

Transformer pre-training teaches the model to predict either the next token from the previous tokens (causal language modelling, as in GPT) or a masked token from its surrounding context (masked language modelling, as in BERT).
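A minimal sketch of the two objectives using the Hugging Face `pipeline` API; the model names are just common choices for each objective, not anything this note prescribes:

```python
from transformers import pipeline

# Masked-token objective (BERT-style): fill in the blank using context on both sides.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
print(fill_mask("Paris is the [MASK] of France."))

# Next-token objective (GPT-style): continue the text left to right.
generator = pipeline("text-generation", model="gpt2")
print(generator("Transformers are", max_new_tokens=10))
```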

## Types of architectures

![Types of architectures](image-20211022063003111)
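Assuming the figure shows the usual three families (encoder-only, decoder-only, and encoder-decoder), each maps onto a `transformers` Auto class; the example models below are common representatives, picked for illustration:

```python
from transformers import (
    AutoModelForMaskedLM,    # encoder-only, e.g. BERT
    AutoModelForCausalLM,    # decoder-only, e.g. GPT-2
    AutoModelForSeq2SeqLM,   # encoder-decoder, e.g. T5
)

encoder_only = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
decoder_only = AutoModelForCausalLM.from_pretrained("gpt2")
encoder_decoder = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
```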

## Huggingface lib ecosystem

![Huggingface lib ecosystem](image-20211022064532294)
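A small sketch of how the main ecosystem libraries fit together (`datasets` for data, tokenizers via `AutoTokenizer`, and `transformers` for the models); the dataset and model names here are illustrative assumptions:

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# datasets: load a small slice of a benchmark corpus
dataset = load_dataset("imdb", split="train[:100]")

# tokenizers, via the AutoTokenizer wrapper: raw text -> model inputs
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoded = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True), batched=True)

# transformers: the pre-trained model itself, ready for fine-tuning
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)
```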

> @Wilmer, when you have little labelled text you might indeed be better off using the embeddings from sentence-transformers with kNN!
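A sketch of that low-label recipe: embed the texts with sentence-transformers, then classify with scikit-learn's kNN. The embedding model name and the toy data are assumptions for illustration:

```python
from sentence_transformers import SentenceTransformer
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical toy data standing in for a small labelled set.
texts = ["great film", "terrible plot", "loved it", "waste of time"]
labels = [1, 0, 1, 0]

# Embed with a small general-purpose model, then classify by nearest neighbour.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(texts)

knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(embeddings, labels)
print(knn.predict(model.encode(["what a great movie"])))
```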

## Resources

## Related