# TADFormer

This is the official implementation of the CVPR 2025 paper **TADFormer: Task-Adaptive Dynamic Transformer for Efficient Multi-Task Learning**, built on top of the MTLoRA codebase.
## Installation

Clone the repository:

```bash
git clone git@github.com:Min100KM/TADFormer.git
cd TADFormer
```
Install the requirements:

- Install PyTorch >= 1.12.0 and torchvision >= 0.13.0 with CUDA >= 11.6.
- Install the remaining dependencies:

```bash
pip install -r requirements.txt
```
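For the PyTorch step, a matching install command might look like the sketch below (an assumption, not from this repository: exact wheel versions and the CUDA 11.6 index URL follow PyTorch's historical install instructions; adjust to your driver and CUDA setup):

```shell
# Sketch: CUDA 11.6 builds of torch/torchvision at the versions the README requires.
# The +cu116 wheels are served from PyTorch's own package index.
pip install torch==1.12.0+cu116 torchvision==0.13.0+cu116 \
    --extra-index-url https://download.pytorch.org/whl/cu116
```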
## Training

Run the training code:

```bash
python -m torch.distributed.launch --nproc_per_node 1 --master_port=12345 \
    main.py --cfg configs/TADFormer/[config_name].yaml \
    --pascal [pascal_dataset] --tasks semseg,normals,sal,human_parts \
    --batch-size 32 --ckpt-freq=20 --epoch=300 \
    --resume-backbone [Pretrained Swin Transformer .pth path] \
    --disable_wandb
```
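Note: on recent PyTorch releases, `torch.distributed.launch` is deprecated in favor of `torchrun`. If `main.py` reads its rank from the `LOCAL_RANK` environment variable (which `torchrun` sets), an equivalent invocation would be the sketch below; the bracketed config and checkpoint paths are the same placeholders as in the command above, not real files:

```shell
# Sketch: torchrun equivalent of the launch command above (single GPU).
torchrun --nproc_per_node 1 --master_port=12345 \
    main.py --cfg configs/TADFormer/[config_name].yaml \
    --pascal [pascal_dataset] --tasks semseg,normals,sal,human_parts \
    --batch-size 32 --ckpt-freq=20 --epoch=300 \
    --resume-backbone [Pretrained Swin Transformer .pth path] \
    --disable_wandb
```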
## Evaluation

```bash
python -m torch.distributed.launch --nproc_per_node 1 --master_port=12345 \
    main.py --cfg configs/TADFormer/[config_name].yaml \
    --pascal [pascal_dataset] --tasks semseg,normals,sal,human_parts \
    --batch-size 32 --ckpt-freq=20 --epoch=300 --resume [.pth path] \
    --eval \
    --disable_wandb
```
## Citation

```bibtex
@InProceedings{Baek_2025_CVPR,
    author    = {Baek, Seungmin and Lee, Soyul and Jo, Hayeon and Choi, Hyesong and Min, Dongbo},
    title     = {TADFormer: Task-Adaptive Dynamic TransFormer for Efficient Multi-Task Learning},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {14858-14868}
}
```