
The results of ID CIFAR100 -> OOD CIFAR10 #1

@ZhimaoPeng

Description

After fine-tuning on CIFAR100 with the model that was self-supervised pretrained and then intermediate fine-tuned on ImageNet-22k, I got an AUROC of 87.169 for ID CIFAR100 -> OOD CIFAR10, which is far below the 98.3 AUROC reported in the paper. How can I get the expected results?
The command line I ran and the results are shown below:

command line:

OMP_NUM_THREADS=1 python -m torch.distributed.launch --nproc_per_node=2 run_class_finetuning.py --model beit_base_patch16_224 --data_path /home/ubuntu/code/open-set/MOOD --data_set cifar100 --nb_classes 100 --disable_eval_during_finetuning --finetune /home/ubuntu/code/open-set/MOOD/beit_base_patch16_224_pt22k_ft22k.pth --output_dir logs_cifar100_test --batch_size 128 --lr 1.5e-3 --update_freq 1 --warmup_epochs 5 --epochs 90 --layer_decay 0.65 --drop_path 0.2 --weight_decay 0.05 --layer_scale_init_value 0.1 --clip_grad 3.0

results:

[screenshot: results_test]
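
For reference, the AUROC number above is the area under the ROC curve separating the ID CIFAR100 test scores from the OOD CIFAR10 test scores. Below is a minimal sketch of how I understand that metric to be computed, assuming maximum softmax probability (MSP) as the OOD score; the repository's actual evaluation script may use a different scoring function:

```python
# Minimal sketch: AUROC for ID CIFAR100 vs. OOD CIFAR10.
# Assumes maximum softmax probability (MSP) as the OOD score,
# which may differ from the scoring used in the paper's evaluation code.
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_id_vs_ood(id_logits: np.ndarray, ood_logits: np.ndarray) -> float:
    """id_logits: [N_id, 100] logits on CIFAR100 test images (in-distribution).
    ood_logits: [N_ood, 100] logits on CIFAR10 test images (out-of-distribution)."""
    def msp(logits):
        # Softmax over classes, then the max probability per image.
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs = e / e.sum(axis=1, keepdims=True)
        return probs.max(axis=1)

    scores = np.concatenate([msp(id_logits), msp(ood_logits)])
    # Label 1 = in-distribution (CIFAR100), 0 = out-of-distribution (CIFAR10).
    labels = np.concatenate([np.ones(len(id_logits)), np.zeros(len(ood_logits))])
    return roc_auc_score(labels, scores)
```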
