1. All challenge datasets are ready for training and validation:
   - Dataset importance is proportional to its size, i.e. equal weight to every sample;
   - We train only on the classes that are being evaluated (any other training setup must beat this one to be accepted).
2. Transformer pre-trained on all datasets:
   - Should be better than the no-pretraining baseline;
   - Should be better than the RNN baseline.
3. Training:
   - Should be better than an MLP baseline (with the pretraining).
4. Final prediction stage:
   - Better than a linear stage + mean / max.
5. Hierarchical last layer:
   - Better than Sigmoid / Softmax.
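One way to read item 5: instead of independent sigmoids or a flat softmax, the last layer can enforce the class hierarchy by multiplying each class's local sigmoid by the probability of its parent, so a child never scores above its ancestors. A minimal numpy sketch of that idea (the function names and the parent-array encoding are our own illustration, not the milestone's actual implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hierarchical_probs(logits, parent):
    """Turn raw logits into probabilities that respect a class hierarchy.

    parent[i] is the index of class i's parent, or -1 for a root class.
    Classes must be ordered so that every parent precedes its children.
    Each class's probability is its own sigmoid times its parent's
    probability, so p(child) <= p(parent) by construction.
    """
    local = sigmoid(np.asarray(logits, dtype=float))
    probs = np.empty_like(local)
    for i, p in enumerate(parent):
        probs[i] = local[i] if p < 0 else local[i] * probs[p]
    return probs

# Toy 3-level chain: class 0 is the root, 1 is its child, 2 is 1's child.
probs = hierarchical_probs([2.0, 0.0, -1.0], [-1, 0, 1])
```

With a plain sigmoid head the three outputs would be independent and could violate the hierarchy (e.g. a confident leaf under an unlikely parent); here the chain rule of probabilities rules that out.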
Due by July 10, 2020 • 4/4 issues closed