Question about coreset selection #2
Thank you for sharing your excellent work! I have a question about coreset selection. I noticed that in Algorithm 1, all the samples are re-sorted according to dACS and then the subset is reconstituted. The coreset selection therefore appears to be dynamic, akin to dropping unimportant samples during training (where dropped samples can later be reselected). However, some of the comparison methods are static (the dropping is permanent). Is the comparison reasonable?
I'm looking forward to your reply!

Thanks for the question and your interest in our work! We believe the comparison is reasonable because all of the coreset methods use the same "coreset data fraction per epoch". Since our target is to improve training efficiency, the training-time reduction is the same across the different methods. In addition, previous work [1][2] also adopts a similar adaptive coreset strategy and compares it with fixed-coreset methods.
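For concreteness, here is a minimal sketch of the budget argument above, assuming hypothetical `score_fn` and `train_step` helpers; it illustrates the two regimes, not the paper's implementation:

```python
# A minimal sketch (assumed structure, not the paper's code) contrasting
# the two regimes under the same per-epoch coreset fraction. `score_fn`
# and `train_step` are hypothetical placeholders: `score_fn` stands in
# for dACS (dynamic) or a one-shot pruning score (static), and in
# practice would depend on the current model state.
import numpy as np

def select_coreset(scores, fraction):
    """Indices of the top-`fraction` samples by score."""
    k = max(1, int(len(scores) * fraction))
    return np.argsort(scores)[-k:]

def train_static(n, fraction, score_fn, num_epochs, train_step):
    # Static pruning: rank once before training and drop the rest
    # permanently; a dropped sample can never return.
    coreset = select_coreset(score_fn(np.arange(n)), fraction)
    for _ in range(num_epochs):
        train_step(coreset)

def train_dynamic(n, fraction, score_fn, num_epochs, train_step):
    # Dynamic selection: re-score and re-rank every epoch, so a sample
    # dropped earlier can be reselected later. Both regimes train on
    # the same `fraction` of the data each epoch, so the per-epoch
    # training cost (and hence the efficiency gain) is identical.
    for _ in range(num_epochs):
        coreset = select_coreset(score_fn(np.arange(n)), fraction)
        train_step(coreset)
```

Under this framing, the fairness claim is that both loops call `train_step` on the same number of samples per epoch; they differ only in whether the ranking is refreshed.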
Thanks for your reply!!
Thanks for your question. Unlike the GraNd score proposed in the EL2N paper, the expectation in our ACS is computed over all logits.
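As a rough illustration of that distinction (assumed, not the paper's definition of ACS), compare an EL2N-style score, computed from the error of the full predicted distribution, with a score whose expectation ranges over all K logit dimensions; `per_logit_term` is a hypothetical placeholder for whatever per-logit quantity ACS actually uses:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def el2n_score(logits, one_hot_label):
    # EL2N (Paul et al., 2021): L2 norm of the error vector (p - y),
    # used as a cheap approximation of the GraNd score.
    return np.linalg.norm(softmax(logits) - one_hot_label)

def all_logits_expectation(logits, per_logit_term):
    # Expectation taken uniformly over all K logits (placeholder form);
    # e.g. per-logit squared error against a soft target would be
    # per_logit_term = lambda z, k: (softmax(z)[k] - target[k]) ** 2
    return np.mean([per_logit_term(logits, k) for k in range(len(logits))])
```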
Got that! Thanks for the reply! |