Cross-Modal Pixel-and-Stroke Representation Aligning Networks for Free-Hand Sketch Recognition (CMPS)
The code is built on the Google QuickDraw-414K and TU-Berlin datasets. Thanks to the contributors; the QuickDraw-414K data comes from https://github.com/PengBoXiangShang/multigraph_transformer.
The training log can be checked in experiment/log/CMPS_sota.log.
# 1. Choose your workspace and download our repository.
cd ${CUSTOMIZED_WORKSPACE}
git clone https://github.com/WoodratTradeCo/CMPS
# 2. Enter the directory.
cd CMPS
# 3. Create our conda environment and activate it.
# ('conda env create' reads environment.yml from the current directory by default.)
conda env create --name ${CUSTOMIZED_ENVIRONMENT_NAME}
conda activate ${CUSTOMIZED_ENVIRONMENT_NAME}
# 4. Download the training/evaluation/testing datasets (QuickDraw-414K and TU-Berlin); a download sketch is given after this block.
# 5. Train our CMPS. Please see the details in our code annotations.
# Set the input arguments for your case; an example invocation with assumed values follows this block.
# When the program starts running, a folder named 'experiment/${CUSTOMIZED_EXPERIMENT_NAME}' will be created automatically to save your logs and checkpoints.
python train.py \
    --exp ${CUSTOMIZED_EXPERIMENT_NAME} \
    --epoch ${CUSTOMIZED_EPOCH} \
    --batch_size ${CUSTOMIZED_SIZE} \
    --num_workers ${CUSTOMIZED_NUMBER} \
    --gpu ${CUSTOMIZED_GPU_NUMBER}
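A minimal sketch for step 4, assuming the dataloader reads from a dataset/ directory at the repository root (the directory names below are illustrative; match whatever paths train.py actually expects):
# Hypothetical layout -- adjust the paths to whatever the dataloader in train.py expects.
mkdir -p dataset/quickdraw dataset/tuberlin
# QuickDraw-414K: follow the download instructions in
# https://github.com/PengBoXiangShang/multigraph_transformer and place the files under dataset/quickdraw/.
# TU-Berlin: download the sketch archive from its authors' project page and extract it under dataset/tuberlin/.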
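For example, a single-GPU run for step 5 with illustrative hyperparameters (the values below are assumptions, not the paper's settings):
# Example invocation; experiment name and hyperparameter values are placeholders.
python train.py \
    --exp cmps_quickdraw \
    --epoch 50 \
    --batch_size 128 \
    --num_workers 4 \
    --gpu 0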
If you find this code useful in your research, please cite our paper using the following BibTeX entry:
@article{zhou2023cross,
  title={Cross-Modal Pixel-and-Stroke Representation Aligning Networks for Free-Hand Sketch Recognition},
  author={Zhou, Yang and Wang, Jin and Yang, Jingru and Ni, Ping and Lu, Guodong and Fang, Heming and Li, Zhihui and Yu, Huan and Huang, Kaixiang},
  journal={Expert Systems with Applications},
  pages={122505},
  year={2023},
  publisher={Elsevier}
}