
Automatically benchmark all encoders for inference and training speed #224

Open
rom1504 opened this issue Nov 13, 2022 · 5 comments

rom1504 commented Nov 13, 2022

Related: #97

rom1504 commented Nov 13, 2022

Report the results in the README afterwards, so people can estimate costs.

usuyama commented Nov 17, 2022

Love the idea!

It would be awesome to measure the memory requirements as well if possible, especially for training.
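One way to capture peak memory per step, as a CPU-only sketch using the stdlib `tracemalloc` module (for GPU training one would instead reset and read `torch.cuda.max_memory_allocated()` around the step); the `fake_train_step` workload below is a hypothetical stand-in for a real training step:

```python
import tracemalloc

def peak_memory_mb(fn):
    """Run fn and return the peak Python heap allocation in MB.

    CPU analogue only: for a CUDA run, reset torch.cuda peak stats,
    run the step, then read torch.cuda.max_memory_allocated().
    """
    tracemalloc.start()
    fn()
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak / 2**20

# Stub workload standing in for one training step (hypothetical).
def fake_train_step():
    buf = [0.0] * 1_000_000  # ~8 MB pointer array
    return sum(buf)

print(f"peak ~= {peak_memory_mb(fake_train_step):.1f} MB")
```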

rom1504 commented Nov 17, 2022

Yeah, it would be good to have a script that does it automatically; then it could be run on some A100.
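A minimal sketch of what such a script's timing core could look like, using only the stdlib (the `fake_encoder_step` stub is a hypothetical stand-in for one encoder forward pass; a real GPU run would also need `torch.cuda.synchronize()` before each timestamp):

```python
import time
import statistics

def benchmark(fn, warmup=3, iters=10):
    """Time a callable: warm up first, then return (mean, stdev) in ms per call."""
    for _ in range(warmup):
        fn()
    times = []
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        times.append((time.perf_counter() - start) * 1e3)
    return statistics.mean(times), statistics.stdev(times)

# Stub standing in for one encoder forward pass (hypothetical).
def fake_encoder_step():
    sum(i * i for i in range(10_000))

mean_ms, stdev_ms = benchmark(fake_encoder_step)
print(f"{mean_ms:.3f} ms/iter (+/- {stdev_ms:.3f})")
```

The same loop could be run once per encoder config and the results dumped to a table for the README.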

rom1504 commented Nov 22, 2022

Could also capture memory usage while we're at it.

usuyama commented Nov 25, 2022

Would be great to get the memory usage broken down into A = model weights and B = forward-pass activations per sample, so we can estimate the memory needed for the large batch sizes contrastive learning requires.
Estimate: A * 4 (weights + gradients + 2 optimizer states) + B * batch_size.

I saw related threads in timm and huggingface:
huggingface/pytorch-image-models#955
pytorch/pytorch#93767
