Update README.md (#519)
Make the README consistent in terms of model size.
mitchellnw authored May 4, 2023
1 parent 648d311 · commit b58b013
Showing 1 changed file with 1 addition and 1 deletion.
README.md (1 addition, 1 deletion):

```diff
@@ -18,11 +18,11 @@ We have trained the following ViT CLIP models:
 * ViT-B/16 on LAION-2B with a accuracy of **70.2%**.
 * ViT-L/14 on LAION-400M with an accuracy of **72.77%**, vs OpenAI's **75.5%** (as measured here, 75.3% in paper)
 * ViT-L/14 on LAION-2B with an accuracy of **75.3%**, vs OpenAI's **75.5%** (as measured here, 75.3% in paper)
+* ViT-L/14 on [DataComp-1B](https://github.com/mlfoundations/datacomp) with an accuracy of **79.2**. 13B samples seen schedule.
 * CoCa ViT-L/14 on LAION-2B with an accuracy of **75.5%** (currently only 13B samples seen) vs. CLIP ViT-L/14 73.1% (on the same dataset and samples seen)
 * ViT-H/14 on LAION-2B with an accuracy of **78.0**. The second best in1k zero-shot for released, open-source weights thus far.
 * ViT-g/14 on LAION-2B with an accuracy of **76.6**. This was trained on reduced 12B samples seen schedule, same samples seen as 400M models.
 * ViT-g/14 on LAION-2B with an accuracy of **78.5**. Full 34B samples seen schedule.
-* ViT-L/14 on [DataComp-1B](https://github.com/mlfoundations/datacomp) with an accuracy of **79.2**. 13B samples seen schedule.
 * ViT-G/14 on LAION-2B with an accuracy of **80.1**. The best in1k zero-shot for released, open-source weights thus far.
 
 And the following ConvNeXt CLIP models:
```
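
For context, the checkpoints listed in this diff are loadable through open_clip's documented `create_model_and_transforms` API. Below is a minimal zero-shot classification sketch following the project's standard usage pattern; the `laion2b_s34b_b79k` tag is a published LAION-2B ViT-B-32 checkpoint, while `example.jpg` and the caption strings are placeholders. The exact pretrained tags for the ViT-L/14, ViT-H/14, ViT-g/14, and ViT-G/14 weights above can be listed with `open_clip.list_pretrained()`.

```python
import torch
from PIL import Image
import open_clip

# Load a LAION-2B ViT-B-32 checkpoint; swap the model name and pretrained tag
# for any of the weights listed above (see open_clip.list_pretrained()).
model, _, preprocess = open_clip.create_model_and_transforms(
    'ViT-B-32', pretrained='laion2b_s34b_b79k')
tokenizer = open_clip.get_tokenizer('ViT-B-32')
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # placeholder image path
text = tokenizer(["a diagram", "a dog", "a cat"])           # placeholder captions

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # L2-normalize so the dot product below is a cosine similarity
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(text_probs)  # probability assigned to each caption
```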
