Inference for low GPU memory and a small number of points #13
Comments
Hi @zeynytu, our method is meant to track all pixels in a frame together. If you want to track only a few points, you have two options. Either (1) use point tracking directly (Line 35 in cdee971)
Please provide more information on your GPU setup, video length, and spatial resolution if you need further assistance with the OOM errors.
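For reference, here is a minimal sketch of what a query-point call might look like. The dict keys and tensor layout are taken from the snippet later in this thread, not from the repo's documented API, so treat the shapes and the (t, y, x) ordering as assumptions:

```python
import torch

def track_points(model, video, query_points):
    """Track a few query points on a short clip.

    Assumptions (not confirmed by the repo): `video` is a (T, C, H, W) float
    tensor on the GPU, `query_points` is (B, N, 3) with (t, y, x) entries, and
    the model accepts the same dict format shown in this issue.
    """
    with torch.inference_mode():  # no autograd buffers during inference
        return model({"video": video[None], "query_points": query_points})
```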
Actually, I have a long video, and I want to track around 50 points at specific coordinates. The length of the video is not a big deal; I can trim the video into separate parts. The GPU is an NVIDIA 3060 Ti with 8 GB of VRAM. A rough sketch of that trimming idea is below.
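A rough sketch of the trimming approach, assuming the model can be run on each segment independently; the chunk length and the call format are placeholders to adapt to the actual API:

```python
import torch

def track_in_chunks(model, video, query_points, chunk_len=24):
    """Split a long (T, C, H, W) video into fixed-length segments and run the
    model on each one separately, to stay within ~8 GB of VRAM.

    Note: every chunk is treated independently and `query_points` uses the
    local time index of each chunk, so stitching the per-chunk results into
    continuous tracks is left out here.
    """
    results = []
    for start in range(0, video.shape[0], chunk_len):
        clip = video[start:start + chunk_len].cuda()
        with torch.inference_mode():
            out = model({"video": clip[None], "query_points": query_points})
        results.append(out)
        del clip
        torch.cuda.empty_cache()  # free the segment before loading the next one
    return results
```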
Hi @Billy-ZTB,
Thanks!
Hello,
You have done great work, I really appreciate it!
I have been trying to run the model to track some specific points on videos, but I could not figure out how to do that exactly. I tried the format
model({"video": video[None], "query_points": torch.Tensor([[[1, 15, 51]]]).cuda()})
but the GPU ran out of memory. Am I doing it right, or is there another method to do this?
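Two generic PyTorch-side ways to reduce memory for a call like the one above: lower the spatial resolution and run inference in mixed precision without gradients. Whether this particular model tolerates half precision or downscaled frames is an assumption; `model` and `video` are assumed to be set up as in the snippet above, and the (t, y, x) point layout is also assumed:

```python
import torch
import torch.nn.functional as F

video = video.cuda()  # assumed (T, C, H, W) float tensor
query_points = torch.tensor([[[1, 15.0, 51.0]]]).cuda()

# 1) Downscale the frames; rescale the (y, x) query coordinates to match.
scale = 0.5
video_small = F.interpolate(video, scale_factor=scale, mode="bilinear",
                            align_corners=False)
query_small = query_points.clone()
query_small[..., 1:] *= scale  # keep the frame index, scale y and x

# 2) Run without gradients and in float16 autocast to cut activation memory.
with torch.inference_mode(), torch.autocast("cuda", dtype=torch.float16):
    tracks = model({"video": video_small[None], "query_points": query_small})
```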