From 7fe5b87534fbca3b46675cf4422f6b00532e597f Mon Sep 17 00:00:00 2001
From: Ikko Ashimine
Date: Fri, 9 Dec 2022 06:41:44 +0900
Subject: [PATCH] Hugging Face typo (#275)

huggingface -> Hugging Face
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 7d80cfc45..40db96d32 100644
--- a/README.md
+++ b/README.md
@@ -248,7 +248,7 @@ python -m training.main \
 
 ### Training with pre-trained language models as text encoder:
 
-If you wish to use different language models as the text encoder for CLIP you can do so by using one of the huggingface model configs in ```src/open_clip/model_configs``` and passing in it's tokenizer as the ```--model``` and ```--hf-tokenizer-name``` parameters respectively. Currently we only support RoBERTa ("test-roberta" config), however adding new models should be trivial. You can also determine how many layers, from the end, to leave unfrozen with the ```--lock-text-unlocked-layers``` parameter. Here's an example command to train CLIP with the RoBERTa LM that has it's last 10 layers unfrozen:
+If you wish to use different language models as the text encoder for CLIP you can do so by using one of the Hugging Face model configs in ```src/open_clip/model_configs``` and passing in it's tokenizer as the ```--model``` and ```--hf-tokenizer-name``` parameters respectively. Currently we only support RoBERTa ("test-roberta" config), however adding new models should be trivial. You can also determine how many layers, from the end, to leave unfrozen with the ```--lock-text-unlocked-layers``` parameter. Here's an example command to train CLIP with the RoBERTa LM that has it's last 10 layers unfrozen:
 ```bash
 python -m training.main \
     --train-data="pipe:aws s3 cp s3://s-mas/cc3m/{00000..00329}.tar -" \