This repository provides the code for SPG, proposed in our ICML 2023 paper.
We use 9 datasets in the paper. To reproduce the results, some of these datasets need to be prepared manually.
### CIFAR100

You do not need to do anything for these datasets, as they will be downloaded automatically.
### Tiny ImageNet

You can download the dataset from the official site.

- Download the Tiny ImageNet archive.
- Extract it, and place the extracted files as follows.
```
data/tiny-imagenet-200/
|- train/
|  |- n01443537/
|  |- n01629819/
|  +- ...
+- val/
   |- val_annotations.txt
   +- images/
      |- val_0.JPEG
      |- val_1.JPEG
      +- ...
```
- Run `prep_tinyimagenet.py` to reorganise the files so that `torchvision.datasets.ImageFolder` can read them.
- Make sure you see the following structure.
```
data/tiny-imagenet-200/
|- test/
|  |- n01443537/
|  |- n01629819/
|  +- ...
|- train/
|  |- n01443537/
|  |- n01629819/
|  +- ...
+- val/   # These files are not used any more.
```
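The reorganisation that `prep_tinyimagenet.py` performs can be sketched as follows. This is a minimal sketch, not the repository's actual script, and the helper name `reorganise_val` is ours: it reads `val_annotations.txt` (which maps each validation image to its class id) and moves each image into a per-class subdirectory so that `torchvision.datasets.ImageFolder` can read it.

```python
import os
import shutil

def reorganise_val(root):
    """Move val images into per-class folders using val_annotations.txt.

    Each line of val_annotations.txt is tab-separated:
    <filename> <wnid> <x> <y> <w> <h>
    """
    val_dir = os.path.join(root, "val")
    ann_path = os.path.join(val_dir, "val_annotations.txt")
    with open(ann_path) as f:
        for line in f:
            if not line.strip():
                continue
            fname, wnid = line.split("\t")[:2]
            class_dir = os.path.join(val_dir, wnid)
            os.makedirs(class_dir, exist_ok=True)
            src = os.path.join(val_dir, "images", fname)
            if os.path.exists(src):
                shutil.move(src, os.path.join(class_dir, fname))

# e.g. reorganise_val("data/tiny-imagenet-200")
```

Afterwards, `ImageFolder` on `val/` picks up one class per `wnid` directory, matching the `train/` layout.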
### ImageNet

You can download the dataset from the official site.

- Download the downsampled image data (32x32).
- Extract the files, and place them under `./data/imagenet/`.
- Make sure you see the following structure.
```
data/imagenet/
|- test/
|  +- val_data
+- train/
   |- train_data_batch_1
   |- ...
   +- train_data_batch_10
```
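For reference, the downsampled ImageNet batches use a CIFAR-style pickle format: each `train_data_batch_N` file unpickles to a dict with a `'data'` array of shape `(N, 3072)` and 1-indexed `'labels'`. A minimal loader sketch (the helper name `load_batch` is ours; verify the schema against the files you actually download):

```python
import pickle

import numpy as np

def load_batch(path):
    """Load one downsampled-ImageNet batch file (e.g. train_data_batch_1).

    Assumes a CIFAR-style pickle: {'data': uint8 array of shape (N, 3072),
    'labels': 1-indexed class ids}.
    """
    with open(path, "rb") as f:
        batch = pickle.load(f)
    data = np.asarray(batch["data"], dtype=np.uint8)
    # Each row is a flattened 32x32 RGB image; reshape to NCHW.
    images = data.reshape(-1, 3, 32, 32)
    labels = np.asarray(batch["labels"]) - 1  # make labels 0-indexed
    return images, labels
```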
### Federated CelebA

- Follow the instructions to create the data.
- Place the raw images under `data/fceleba/raw/img_align_celeba/`.
- Make sure you see the following structure.
```
data/fceleba/
|- iid/
|  |- test/
|  |  +- all_data_iid_01_0_keep_5_test_9.json
|  +- train/
|     +- all_data_iid_01_0_keep_5_train_9.json
+- raw/
   +- img_align_celeba/
      |- 000001.jpg
      |- 000002.jpg
      +- ...
```
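The JSON splits above follow the usual LEAF layout: a dict with `'users'`, `'num_samples'`, and `'user_data'` keys, where `'user_data'` maps each user id to that user's examples. A small sketch for sanity-checking one split file (the helper name `summarise_split` is ours, and the exact schema should be checked against the files your LEAF run produces):

```python
import json

def summarise_split(path):
    """Return (num_users, samples_per_user) for one LEAF-format JSON split."""
    with open(path) as f:
        split = json.load(f)
    users = split["users"]
    # Count examples per user from the 'x' lists in user_data.
    counts = {u: len(split["user_data"][u]["x"]) for u in users}
    return len(users), counts
```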
### Federated EMNIST

- Follow the instructions to create the data.
- Place the raw images under `data/femnist/raw/train/` and `data/femnist/raw/test/`.
- Make sure you see the following structure.
```
data/femnist/
+- raw/
   |- test/
   |  |- all_data_0_iid_01_0_keep_0_test_9.json
   |  |- ...
   |  +- all_data_34_iid_01_0_keep_0_test_9.json
   +- train/
      |- all_data_0_iid_01_0_keep_0_train_9.json
      |- ...
      +- all_data_34_iid_01_0_keep_0_train_9.json
```
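In LEAF's FEMNIST splits, each example under `user_data[u]["x"]` is a flat list of 28x28 = 784 pixel values. A tiny sketch of turning one such example back into an image array (our own helper, assuming LEAF's usual flattened encoding):

```python
import numpy as np

def to_image(flat_pixels):
    """Reshape one flat FEMNIST example (784 values) into a 28x28 array."""
    arr = np.asarray(flat_pixels, dtype=np.float32)
    assert arr.size == 28 * 28, "expected a flattened 28x28 image"
    return arr.reshape(28, 28)
```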
Experiments can be reproduced by running

```
python3 main.py appr=spg seq=<seq>
```

specifying `<seq>` for the dataset you want to run.
For `<seq>`, you can choose one from the following datasets.

- `cifar100_10` for C-10 (CIFAR100 with 10 tasks)
- `cifar100_20` for C-20
- `tinyimagenet_10` for T-10 (TinyImageNet with 10 tasks)
- `tinyimagenet_20` for T-20
- `imagenet_100` for I-100 (ImageNet with 100 tasks)
- `fceleba_10` for FC-10 (Federated CelebA with 10 tasks)
- `fceleba_20` for FC-20
- `femnist_10` for FE-10 (Federated EMNIST with 10 tasks)
- `femnist_20` for FE-20
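To make the `<seq>` naming concrete: `cifar100_10` splits CIFAR100's 100 classes into 10 tasks of 10 classes each. A sketch of such a class-incremental split (our own illustration, not the repository's loader):

```python
def make_task_splits(num_classes, num_tasks):
    """Partition class ids 0..num_classes-1 into equal consecutive tasks."""
    assert num_classes % num_tasks == 0, "classes must divide evenly into tasks"
    per_task = num_classes // num_tasks
    return [list(range(t * per_task, (t + 1) * per_task))
            for t in range(num_tasks)]

# cifar100_10: 10 tasks of 10 classes each
tasks = make_task_splits(100, 10)
```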
As the CIFAR100-based datasets are downloaded automatically by PyTorch, you can test C-10 or C-20 right away by running

```
python3 main.py appr=spg seq=cifar100_10  # or seq=cifar100_20
```

We would appreciate it if you could refer to our paper as one of your baselines!
```bibtex
@inproceedings{konishi2023spg,
  title={{Parameter-Level Soft-Masking for Continual Learning}},
  author={Konishi, Tatsuya and Kurokawa, Mori and Ono, Chihiro and Ke, Zixuan and Kim, Gyuhak and Liu, Bing},
  booktitle={Proc. of ICML},
  year={2023},
}
```