This work implements an efficient framework for ranking pre-trained models.
Install the requirements:

```bash
pip install -r requirements.txt
```
If you want the trained models used in the PARC benchmark, they are available here: All Trained Models.
- You can check `dataset.py` to download all the datasets.
- You can check `demo.py` to generate all the forward features, which are cached under `./cache/probes/fixed_budget_500/.....pkl` (see the loading sketch after this list).
You can also download the forward features for the default 500 samples from here (recommended): 500_probe_set.
```python
# Note: probe_only=True; budget is the size of the probe set.
experiment = Experiment(my_methods, name='test', append=False, budget=500, probe_only=True)
```
- You can check `feature_extractor.py` to generate all the CLIP features.
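If you want to sanity-check the cached forward features mentioned above, here is a minimal loading sketch. It only assumes the cached files are standard pickles; the exact file names and contents are produced by `demo.py`:

```python
import pickle
from pathlib import Path

# The cache directory comes from the README; the exact file names inside it
# are produced by demo.py, so treat this as a sanity-check sketch only.
cache_dir = Path("./cache/probes/fixed_budget_500")

for pkl_path in sorted(cache_dir.glob("*.pkl")):
    with open(pkl_path, "rb") as f:
        probe_features = pickle.load(f)
    # Inspect what was cached (e.g., forward features of the probe set).
    print(pkl_path.name, type(probe_features))
    break  # one file is enough for a sanity check
```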
See `demo.py` for an example of how to run the evaluation:
```bash
# All baselines:
python demo.py && python metrics.py

# Ours:
python meta_features_plus.py --weight 0.5 --pca_dim 32 --k 5 --alpha 0.0001 --reg 0 --iteration 1000 --seed 2023 --no_completion_rebuilding 'FDA' --proxy_model 'clip' && python metric_cold.py
```
You can add your own baseline methods in `methods.py`; we have open-sourced the implementations of the existing baseline methods. A sketch of what a new method might look like is shown below.
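As an illustration only, here is a hedged sketch of a simple scoring baseline. The `(features, labels) -> score` interface is an assumption, so match whatever signature the existing methods in `methods.py` actually use:

```python
import numpy as np

def nearest_centroid_score(features: np.ndarray, labels: np.ndarray) -> float:
    """Toy transferability score: probe-set accuracy of a nearest-class-centroid
    classifier on the extracted features. Higher should mean a better backbone."""
    classes = np.unique(labels)
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    # Assign each probe sample to the nearest centroid (Euclidean distance).
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=-1)
    preds = classes[dists.argmin(axis=1)]
    return float((preds == labels).mean())
```

Once implemented, pass your method to `Experiment` via `my_methods`, mirroring the snippet above.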
The Fennec benchmark extends the PARC benchmark with more models and baselines. If you find this benchmark useful, please cite:
```bibtex
@inproceedings{parc-neurips2021,
  author    = {Daniel Bolya and Rohit Mittapalli and Judy Hoffman},
  title     = {Scalable Diverse Model Selection for Accessible Transfer Learning},
  booktitle = {NeurIPS},
  year      = {2021},
}

@misc{bai2024pretrainedmodelrecommendationdownstream,
  title         = {Pre-Trained Model Recommendation for Downstream Fine-tuning},
  author        = {Jiameng Bai and Sai Wu and Jie Song and Junbo Zhao and Gang Chen},
  year          = {2024},
  eprint        = {2403.06382},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV},
  url           = {https://arxiv.org/abs/2403.06382},
}
```