This is the official implementation of the paper OmniSR: Shadow Removal under Direct and Indirect Lighting.
To address the challenge of shadow removal in complex indoor scenes, we propose a novel shadow removal network that considers both direct and indirect shadows, often neglected in existing datasets. Our approach leverages a high-quality synthetic dataset that includes both types of shadows, generated via path tracing. By utilizing RGB-D input and integrating semantic and geometric information, our method accurately restores shadow-free images. The network compares intensities within semantically similar regions and reweights features using local attention based on geometric and semantic similarities, without relying on shadow masks.
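As a rough illustration of that last idea, the toy sketch below biases local attention logits with semantic and geometric feature affinities before the softmax. This is not the paper's exact formulation, and all names are ours; see the code in this repository for the real network.

```python
import torch
import torch.nn.functional as F

def similarity_reweighted_attention(q, k, v, sem, geo, tau=1.0):
    """Toy attention over N pixels whose weights are boosted for pixels that
    are semantically and geometrically similar.
    q, k, v: (N, C) query/key/value features; sem, geo: (N, D) descriptors."""
    logits = (q @ k.t()) / q.shape[-1] ** 0.5  # standard scaled dot-product
    # Cosine affinities between per-pixel semantic and geometric descriptors.
    sem_aff = F.normalize(sem, dim=-1) @ F.normalize(sem, dim=-1).t()
    geo_aff = F.normalize(geo, dim=-1) @ F.normalize(geo, dim=-1).t()
    # Similar pixels get a larger attention bias, so intensities are compared
    # mostly within semantically/geometrically consistent regions.
    weights = F.softmax(logits + (sem_aff + geo_aff) / tau, dim=-1)
    return weights @ v
```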
For more details, please refer to our original paper.
- Python 3.9
- PyTorch 2.0.1
- CUDA 11.7
```
pip install -r requirements.txt
```
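A quick way to confirm the environment matches the versions above (a minimal check, nothing repository-specific):

```python
import torch

print(torch.__version__)          # expect 2.0.1
print(torch.version.cuda)         # expect 11.7
print(torch.cuda.is_available())  # should be True for DDP testing/training
```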
ISTD | ISTD+ | SRD | WSRD+ | INS

Please download the corresponding pretrained model and modify `weights` in `test_DDP.py`.
You can directly test the performance of the pre-trained model as follows:
- Modify the paths to the dataset and the pre-trained model. You need to modify the following paths in `test_DDP.py`:

  ```
  input_dir   # shadow image input path   -- Line 19
  result_dir  # result image output path  -- Line 21
  weights     # pretrained model path     -- Line 23
  ```

- Test the model:

  ```
  ./test.sh
  ```

  You can change the number of GPUs by changing the argument `--nproc_per_node` in `test.sh`.
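For reference, a minimal sketch of inspecting a downloaded checkpoint before pointing `weights` at it. The path and checkpoint layout are assumptions; the actual loading logic lives in `test_DDP.py`.

```python
import torch

# Hypothetical path; set it to the checkpoint you downloaded above.
ckpt = torch.load("./pretrained/omnisr_istd.pth", map_location="cpu")
# Some checkpoints wrap the weights in a 'state_dict' key; unwrap if present.
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
print(sorted(state.keys())[:5])  # peek at the first few parameter names
# model.load_state_dict(state)   # model construction is repository-specific
```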
- Download the datasets and set up the following structure:

  ```
  |-- ISTD_Dataset
      |-- train
          |-- origin       # shadow image
          |-- shadow_mask  # shadow mask
          |-- shadow_free  # shadow-free GT
          |-- depth        # depth map of the original image
          |-- normal       # normal map of the original image
      |-- test
          |-- origin       # shadow image
          |-- shadow_mask  # shadow mask
          |-- shadow_free  # shadow-free GT
          |-- depth        # depth map of the original image
          |-- normal       # normal map of the original image
  ```
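You can sanity-check that each split contains the five expected subfolders with a small helper like the one below (written for illustration; it is not part of the repository):

```python
from pathlib import Path

EXPECTED = ["origin", "shadow_mask", "shadow_free", "depth", "normal"]

def check_split(root: str, split: str) -> None:
    """Raise if a dataset split is missing any of the expected subfolders."""
    missing = [d for d in EXPECTED if not (Path(root) / split / d).is_dir()]
    if missing:
        raise FileNotFoundError(f"{root}/{split} is missing: {missing}")

for split in ("train", "test"):
    check_split("ISTD_Dataset", split)
```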
- Run Depth Anything V2 to generate the depth map of the original image, and then run `calculate_normal.py` to get the normal map (a depth-to-normal sketch is given below). After you download Depth-Anything-V2, replace its `run.py` with the `Depth-Anything-V2_run.py` in this repository.
- Then, download the weights of DINOv2 and clone the DINOv2 code into this folder.
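For intuition, a normal map can be derived from a depth map via finite differences along these lines. This is a hedged sketch; `calculate_normal.py` may use a different convention or camera model.

```python
import numpy as np

def depth_to_normals(depth: np.ndarray) -> np.ndarray:
    """Approximate per-pixel normals from an HxW depth map via image-space gradients."""
    depth = depth.astype(np.float32)
    dz_dy, dz_dx = np.gradient(depth)  # gradients along rows (y) and columns (x)
    # A surface normal is perpendicular to the depth gradient: n ∝ (-dz/dx, -dz/dy, 1).
    n = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)  # unit length, components in [-1, 1]
    return n  # rescale to [0, 255] if saving as an image
```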
- You need to modify the following terms in `option.py`:

  ```
  train_dir  # training set path
  val_dir    # testing set path
  ```

If you want to train the network on 256×256 images, change `train.sh` with the following code:
```
CUDA_VISIBLE_DEVICES="0,1,2,3" python -m torch.distributed.launch --nproc_per_node 1 --master_port 29500 ./train_DDP.py --win_size 8 --train_ps 256
```

Or, if you want to train on the original resolution, e.g., 480×640 for ISTD:
```
CUDA_VISIBLE_DEVICES="0,1,2,3" python -m torch.distributed.launch --nproc_per_node 1 --master_port 29500 ./train_DDP.py --warmup --win_size 10 --train_ps 320
```

The results reported in the paper are calculated by the MATLAB script used in previous methods; for details, refer to `evaluation/measure_shadow.m`.
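If you want a quick Python approximation of those metrics, something like the snippet below works; note the paper's numbers come from the MATLAB script, which may differ in details such as color space.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(pred: np.ndarray, gt: np.ndarray):
    """PSNR/SSIM for a pair of uint8 HxWx3 images; approximates measure_shadow.m."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    ssim = structural_similarity(gt, pred, channel_axis=2, data_range=255)
    return psnr, ssim
```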
The evaluation results on INS are as follows:
| Method | PSNR | SSIM |
|---|---|---|
| DHAN | 27.84 | 0.963 |
| Fu et al. | 27.91 | 0.957 |
| ShadowFormer | 28.62 | 0.963 |
| DMTN | 28.83 | 0.969 |
| ShadowDiffusion | 29.12 | 0.966 |
| OmniSR (Ours) | 30.38 | 0.973 |
The testing results on the ISTD, ISTD+, SRD, WSRD+, and INS datasets are available here: results
Our implementation is based on ShadowFormer. We would like to thank them.
Bibtex:

```
@inproceedings{xu2024omnisr,
  title={OmniSR: Shadow Removal under Direct and Indirect Lighting},
  author={Xu, Jiamin and Li, Zelong and Zheng, Yuxin and Huang, Chenyu and Gu, Renshu and Xu, Weiwei and Xu, Gang},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2025}
}
```
If you have any questions, please contact [email protected]

