Clarification on Training and Evaluation Details for Reproducing Paper Results #95

@arghavan99

Description

Thank you for the great work and for open-sourcing your code. I'm trying to reproduce the results from your paper and would appreciate clarification on the following points to make sure my understanding is correct:

1- Training the Universal Model:
From my understanding, the universal model was trained on PAOT_train, which includes all training datasets (including the training sets of MSD and BTCV). This trained model was then evaluated on the MSD test set, with the results reported in Table 2. Could you confirm whether this is correct?

2- Table 3 and 5-Fold Cross-Validation:
For Table 3, you mention using 5-fold cross-validation. Does this mean:
a. You first pretrained the model using all datasets except BTCV, and then fine-tuned it using a 5-fold cross-validation approach on the BTCV dataset?
b. Or, did you train five separate models, each using all other datasets combined with 4 folds of BTCV?
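To make the difference between (a) and (b) concrete, here is a minimal sketch of protocol (b): five independent training runs, each combining all non-BTCV data with 4 of the 5 BTCV folds and evaluating on the held-out fold. The dataset names and the commented-out `train`/`evaluate` calls are hypothetical placeholders, not the authors' actual code.

```python
def five_fold_splits(samples, k=5):
    """Partition samples into k folds and yield (train, held_out) pairs."""
    folds = [samples[i::k] for i in range(k)]
    for i in range(k):
        held_out = folds[i]
        train = [s for j, fold in enumerate(folds) if j != i for s in fold]
        yield train, held_out

# Placeholder case lists; real experiments would use actual scan IDs.
btcv = [f"btcv_{i:02d}" for i in range(30)]
other_datasets = ["liver_ds", "kidney_ds", "spleen_ds"]

for fold, (btcv_train, btcv_test) in enumerate(five_fold_splits(btcv)):
    train_set = other_datasets + btcv_train  # all other data + 4 BTCV folds
    # model = train(train_set)               # hypothetical training call
    # evaluate(model, btcv_test)             # hypothetical evaluation call
    print(f"fold {fold}: train={len(train_set)} cases, test={len(btcv_test)} cases")
```

Protocol (a) would instead pretrain a single model once on the non-BTCV data and only run the fold loop during fine-tuning.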

3- Figure 3 Approach:
Is the approach for Figure 3 similar to what was done for Table 3?

4- Purpose of PAOT_123457891213:
I noticed that some class labels (e.g., 31) are introduced only in dataset_10 (MSD). Could you explain the role of PAOT_123457891213 in the experiments?

Thank you for your time and support. I truly appreciate your help in clarifying these points!
