
How to quickly obtain test metrics? #12

Open
renliao opened this issue Nov 20, 2024 · 5 comments

Comments

@renliao

renliao commented Nov 20, 2024

Hi, thank you for your outstanding work. I have re-trained the model, but when I tried to test it, I found that running a test like `python main.py --yaml_path configs/hsergb_test.yaml` is very time-consuming. I guess this may be due to saving the prediction results during testing, which takes a long time. If I only want to obtain the test metrics, how should I modify the configuration file? Thank you.

@XiangZ-0
Owner

Hi, thanks for your feedback. I think a quick way is to comment out the image-saving code here and here. Maybe later on I can add a config parameter for this :)
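
Until then, here is a minimal sketch of what such a switch could look like, assuming a hypothetical `save_images` flag in the test YAML and illustrative helper names (this is not the repo's actual API):

```python
# Hypothetical sketch: gate the slow disk writes behind a config flag so the
# test loop only accumulates metrics. `cfg` stands for the parsed YAML config.
def run_test(model, test_loader, metrics, cfg, save_fn=None):
    save_images = cfg.get("save_images", True)   # default keeps current behaviour
    for batch in test_loader:
        pred = model(batch["input"])
        metrics.update(pred, batch["gt"])        # metrics are always computed
        if save_images and save_fn is not None:
            save_fn(pred, batch["name"])         # the expensive image writing
    return metrics.compute()
```

with a matching `save_images: false` entry added to hsergb_test.yaml.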

@renliao
Author

renliao commented Nov 21, 2024

Thank you so much for your patient guidance, I understand it now.

@renliao
Author

renliao commented Nov 21, 2024

Hello, I have commented out the image-saving code (the three im.write() calls in model_interface.py), but a test run still takes a long time, more than three and a half hours on a Titan GPU, using your configuration file hsergb_test.yaml without any changes. Another issue is that with batch_size=1, the GPU memory usage is as high as 16824 MB. These two issues confuse me a lot; can you give me some advice? Thank you!

@XiangZ-0
Owner

XiangZ-0 commented Nov 21, 2024

Hi, I guess the inference speed might be limited by stacking the events (I was using the np.add.at function back then; not sure if there is a better way to do event stacking these days). The inference speed of the network itself should be fast, if I remember correctly. To make the evaluation faster, one possible solution is to stack those events (i.e., generate the EGER) in advance (the related function is here), so we can directly load the EGER during evaluation.
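
For reference, a rough sketch of precomputing the stacked representation offline, assuming event arrays x, y (integer pixel coordinates), timestamps t, and polarities p; function and file names here are illustrative, not the repo's actual code:

```python
import numpy as np

def stack_events(x, y, t, p, num_bins, height, width):
    """Accumulate events into a (num_bins, H, W) grid with np.add.at."""
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    # Normalize timestamps to [0, 1) and assign each event to a temporal bin.
    t = (t - t.min()) / max(t.max() - t.min(), 1e-9)
    b = np.clip((t * num_bins).astype(np.int64), 0, num_bins - 1)
    # np.add.at performs an unbuffered add, so repeated (b, y, x) indices
    # accumulate correctly instead of overwriting each other.
    np.add.at(grid, (b, y, x), p)
    return grid

# Run this once per test sample and cache the result, e.g. as .npy files,
# so the evaluation loop only needs an np.load instead of re-stacking events:
# np.save("stacked_0001.npy", stack_events(x, y, t, p, 16, 480, 640))
```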

For the memory, I guess that might be because the input events are pretty similar to a 16-frame video clip (we slice the events into several bins and then stack them as input), which could explain the high GPU memory usage. One possible way to reduce it is to train a new model with a smaller number of event bins (like the config here), but I guess this might also degrade the performance a bit.
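
As a rough illustration (the numbers below are illustrative only; most of the 16 GB will come from intermediate activations, which also grow with the number of bins), the input tensor itself scales linearly with the bin count:

```python
# Illustrative only: the stacked event input behaves like a video clip with
# num_bins frames, so its size (and the activations built on top of it)
# scales roughly linearly with the number of bins.
def input_megabytes(num_bins, height=480, width=640, channels=1, dtype_bytes=4):
    return num_bins * channels * height * width * dtype_bytes / 1024**2

print(input_megabytes(16))  # ~18.75 MB raw input with 16 bins
print(input_megabytes(8))   # ~9.4 MB with 8 bins
```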

@renliao
Author

renliao commented Nov 22, 2024

Thank you, I will try your suggestion!
