Hello DataComp team!
I'm seeking some clarification on the problem setup. As I understand it, when specifying a subset, assigning a weight > 1 to a particular datapoint means it can appear multiple times in the rewritten dataset. This duplication could put two copies of the same datapoint into a single batch during contrastive training, potentially degrading performance, since a datapoint would then be contrasted against a copy of itself.
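For context, here is the kind of quick check I've been running to spot repeats. It rests on my own assumptions rather than anything documented: that the subset is loadable with `np.load` as a 1-D array with one uid entry per (possibly repeated) sample, so a weight > 1 shows up as a repeated uid. The file name is just a placeholder.

```python
# Quick duplicate check for a subset file.
# Assumption (mine, not DataComp's documented layout): a 1-D numpy array
# with one uid per sample; "subset.npy" is a placeholder path.
import numpy as np
from collections import Counter

uids = np.load("subset.npy")
counts = Counter(uids.tolist())  # tolist() yields hashable entries for 1-D arrays
dupes = {u: c for u, c in counts.items() if c > 1}

print(f"{len(dupes)} uids appear more than once; "
      f"{sum(c - 1 for c in dupes.values())} extra copies in total")
```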
Do you have any mechanisms or suggestions within DataComp to help detect or handle these duplicate datapoints? If not, how would you recommend mitigating potential issues caused by having duplicates in the final dataset?
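One workaround I've been sketching, in case there's nothing built in, is to mask same-uid pairs out of the contrastive loss so a duplicate is never treated as a negative for itself. This is my own sketch, not DataComp's API or training code: it assumes a CLIP-style InfoNCE loss and that each batch can carry a `uids` tensor identifying the underlying datapoint.

```python
# Sketch of a duplicate-aware CLIP-style loss (my own workaround, not
# DataComp's API). Assumes L2-normalized features and a per-batch `uids`
# tensor where equal ids mean the same underlying datapoint.
import torch
import torch.nn.functional as F

def clip_loss_with_duplicate_mask(image_feats, text_feats, uids, temperature=0.07):
    """InfoNCE over (B, D) image/text features, ignoring duplicate negatives."""
    logits = image_feats @ text_feats.t() / temperature  # (B, B)

    # True where samples i and j come from the same underlying datapoint.
    same = uids.unsqueeze(0) == uids.unsqueeze(1)  # (B, B)
    # Keep the diagonal (each sample's true positive); mask out the
    # off-diagonal copies so they contribute nothing as negatives.
    mask = same & ~torch.eye(len(uids), dtype=torch.bool, device=uids.device)
    logits = logits.masked_fill(mask, float("-inf"))

    targets = torch.arange(len(uids), device=logits.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```

Would something along these lines be sensible here, or does it interact badly with how the subsets are meant to be evaluated?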
Thank you in advance for your guidance!