Commit 294b857: Update README
1 parent ec3e758

1 file changed: README.md (2 additions & 2 deletions)
````diff
@@ -5,6 +5,7 @@
 
 > [**PromptKD: Unsupervised Prompt Distillation for Vision-Language Models**]() <br>
 > Zheng Li, Xiang Li*, Xinyi Fu, Xin Zhang, Weiqiang Wang, Shuo Chen, Jian Yang*. <br>
+> Nankai University, Ant Group, RIKEN <br>
 > CVPR 2024 <br>
 > [[Paper](https://arxiv.org/abs/2403.02781)] [[Project Page](https://zhengli97.github.io/PromptKD)] [[Chinese Interpretation](https://zhengli97.github.io/PromptKD/chinese_interpertation.html)]
 
````
````diff
@@ -93,7 +94,7 @@ In our paper, we use PromptSRC by default to pre-train our ViT-L/14 CLIP teacher model
 If you want to train your own teacher model, first change `CFG=vit_b16_c2_ep20_batch4_4+4ctx` on line 11 of `scripts/promptsrc/base2new_train.sh` to `CFG=vit_l14_c2_ep20_batch8_4+4ctx`.
 Then follow the instructions listed in `docs/PromptSRC.md` and run the script.
 
-**Important Note:**
+**Important Note:**
 The accuracy of your own teacher model may vary depending on your computing environment. To ensure that your teacher model is adequate for distillation, please refer to Appendix Table 10 to check whether your model achieves appropriate accuracy.
 
 If your teacher model cannot achieve the corresponding accuracy or cannot be trained due to computational constraints, we highly recommend that you use our publicly available pre-trained models for distillation.
````
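For reference, the config swap described in this hunk can also be scripted. A minimal sketch, assuming line 11 of the script sets `CFG` exactly as quoted above (the `sed` invocation is an illustration, not part of the repo):

```bash
# Swap the teacher config from ViT-B/16 to ViT-L/14 in place.
# Assumes scripts/promptsrc/base2new_train.sh contains the line
#   CFG=vit_b16_c2_ep20_batch4_4+4ctx
sed -i 's/^CFG=vit_b16_c2_ep20_batch4_4+4ctx$/CFG=vit_l14_c2_ep20_batch8_4+4ctx/' scripts/promptsrc/base2new_train.sh
```

Note that the batch size changes from 4 to 8 along with the backbone, so editing the line by hand works just as well.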
````diff
@@ -169,7 +170,6 @@ If you find our paper or repo helpful for your research, please consider citing:
   booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
   year={2024}
 }
-
 ```
 
 ## Acknowledgements
````
