diff --git a/README.md b/README.md
index 40788af..e364ce4 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,7 @@
-
+
@@ -22,7 +22,7 @@
 ***Track-Anything*** is a flexible and interactive tool for video object tracking and segmentation. It is developed upon [Segment Anything](https://github.com/facebookresearch/segment-anything), can specify anything to track and segment via user clicks only. During tracking, users can flexibly change the objects they wanna track or correct the region of interest if there are any ambiguities. These characteristics enable ***Track-Anything*** to be suitable for:
 - Video object tracking and segmentation with shot changes.
-- Visualized development and data annnotation for video object tracking and segmentation.
+- Visualized development and data annotation for video object tracking and segmentation.
 - Object-centric downstream video tasks, such as video inpainting and editing.
@@ -39,7 +39,7 @@
 - 2023/04/25: We are delighted to introduce [Caption-Anything](https://github.com/ttengwang/Caption-Anything) :writing_hand:, an inventive project from our lab that combines the capabilities of Segment Anything, Visual Captioning, and ChatGPT.
-- 2023/04/20: We deployed [DEMO](https://huggingface.co/spaces/watchtowerss/Track-Anything?duplicate=trueg) on Hugging Face :hugs:!
+- 2023/04/20: We deployed [DEMO](https://huggingface.co/spaces/VIPLab/Track-Anything?duplicate=true) on Hugging Face :hugs:!
 - 2023/04/14: We made Track-Anything public!