
The training results are quite different from the paper #42

Open
mrmusad opened this issue Nov 10, 2024 · 2 comments

Comments

mrmusad commented Nov 10, 2024

Because I only have one graphics card, I changed the learning rate to 1/8 of the value in the source file, i.e. 0.01, and kept the other training parameters unchanged. I trained the MASA-gdino model and tested on the BDDMOT dataset; the training results I got are shown below. I only used the first 10 tar archives of SA-1B for training, not the SA-1B-500K dataset used in the paper. Could these errors be due to the dataset?
[screenshot of training results attached]
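The learning-rate adjustment described above is an application of the linear scaling rule: when the effective batch size shrinks (here, 1 GPU instead of 8), the base learning rate is scaled by the same factor. A minimal sketch of the arithmetic, assuming the repo's default learning rate is 0.08 for 8 GPUs (implied by the "1/8" and "0.01" figures in the comment):

```python
# Linear LR scaling rule (illustrative values only):
# scale the base learning rate proportionally to the
# number of GPUs actually used for training.
base_lr = 0.08        # assumed 8-GPU learning rate from the repo config
gpus_config = 8       # GPU count the config was written for
gpus_used = 1         # GPU count available here

scaled_lr = base_lr * gpus_used / gpus_config
print(scaled_lr)      # 0.01
```

Note that this rule is a heuristic; with a much smaller effective batch size, a longer warmup or more training iterations may also be needed to match the paper's results.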

mrmusad (Author) commented Nov 10, 2024

The paper reports a TETA of 54.5 and an IDF1 of 71.7, which differ noticeably from the results I obtained through training.

siyuanliii (Owner) commented
Thanks for the question. Many factors could explain the performance gap. Before we dig into the effect of different training images, there are some easier things to check. For example: what hyperparameters did you use for your tracker when testing on BDD100K? Which detections did you use? What is the performance on the TAO dataset? I can help you better with more information.
