README.md
Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ This repo proposes **LLaMA-Adapter (V2)**, a lightweight adaption method for fin
 Try out the web demo 🤗 of LLaMA-Adapter: [](https://huggingface.co/spaces/csuhan/LLaMA-Adapter) and [LLaMA-Adapter V2](http://llama-adapter.opengvlab.com/)

 ## News
-- **[2023.05.23]** We release the web demo of LLaMA-Adapter V2. Check out our demo at [llama-adapter.opengvlab.com](http://llama-adapter.opengvlab.com/).
+- **[2023.05.23]** We release the web demo of LLaMA-Adapter V2. Check out our demo at [OpenGVLab](http://llama-adapter.opengvlab.com/)!
 - **[2023.05.05]** We release the paper and code of our new work [Personalize Segment Anything](https://github.com/ZrrSkywalker/Personalize-SAM) 🔥🔥🔥, which efficiently fine-tunes Segment Anything within **10 seconds** and improves DreamBooth for better **text-to-image generation**.
 - **[2023.04.30]** We noticed that GPT-4 evaluation has a strong positional bias in favor of the first response. We will soon update the paper to reveal the position bias. GPT-4 evaluation also shows potential to automate chatbot assessment, but it still requires further research. Great thanks to [Canwen Xu](https://scholar.google.com/citations?user=oopKCDMAAAAJ&hl=en).
 - **[2023.04.30]** The technical report for **LLaMA-Adapter V2** is released at [preprint](https://arxiv.org/abs/2304.15010).
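The positional bias mentioned in the news above can be mitigated in pairwise LLM-judge evaluation by querying the judge twice with the answer order swapped and only counting consistent verdicts. The sketch below is illustrative only: `debiased_verdict` and the `judge` callable are hypothetical names, not part of the LLaMA-Adapter codebase or its evaluation pipeline.

```python
# Sketch: order-swapped pairwise judging to control for positional bias.
# `judge(question, first, second)` is a hypothetical callable returning
# "first" or "second" for whichever response it prefers (e.g. a wrapper
# around a GPT-4 comparison prompt).

def debiased_verdict(judge, question, answer_a, answer_b):
    """Return "A", "B", or "tie" after judging both presentation orders."""
    v1 = judge(question, answer_a, answer_b)   # A shown first
    v2 = judge(question, answer_b, answer_a)   # B shown first
    a_wins = v1 == "first" and v2 == "second"  # A preferred in both orders
    b_wins = v1 == "second" and v2 == "first"  # B preferred in both orders
    if a_wins:
        return "A"
    if b_wins:
        return "B"
    return "tie"  # inconsistent verdicts: positional bias suspected

# A maximally biased toy judge that always prefers the first response
# collapses to a tie under order swapping:
always_first = lambda q, a, b: "first"
print(debiased_verdict(always_first, "Q?", "ans1", "ans2"))  # tie
```

A consistent judge (one whose preference does not depend on order) is unaffected, while a purely position-driven preference is neutralized to a tie.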