
OmniSR (AAAI 2025)

This is the official implementation of the paper OmniSR: Shadow Removal under Direct and Indirect Lighting.

Introduction

To address the challenge of shadow removal in complex indoor scenes, we propose a novel shadow removal network that considers both direct and indirect shadows, often neglected in existing datasets. Our approach leverages a high-quality synthetic dataset that includes both types of shadows, generated via path tracing. By utilizing RGB-D input and integrating semantic and geometric information, our method accurately restores shadow-free images. The network compares intensities within semantically similar regions and reweights features using local attention based on geometric and semantic similarities, without relying on shadow masks.

For more details, please refer to our original paper.

Requirement

  • Python 3.9
  • PyTorch 2.0.1
  • CUDA 11.7
pip install -r requirements.txt
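
As a quick sanity check (this snippet is not part of the repository), you can verify that the installed PyTorch and CUDA versions match the ones listed above:

import torch

# Expected for this repository: PyTorch 2.0.1 built against CUDA 11.7
print(torch.__version__)          # e.g. "2.0.1"
print(torch.version.cuda)         # e.g. "11.7"
print(torch.cuda.is_available())  # True if a compatible GPU and driver are present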

Datasets

Pretrained models

ISTD | ISTD+ | SRD | WSRD+ | INS

Please download the corresponding pretrained model and set the weights path in test_DDP.py.

Test

You can directly test the performance of the pre-trained models as follows:

  1. Modify the paths to the dataset and the pre-trained model. You need to set the following paths in test_DDP.py (a sketch of these edits is given below):
input_dir # shadow image input path -- Line 19
result_dir # result image output path -- Line 21
weights # pretrained model path -- Line 23
  2. Test the model
./test.sh

You can change the number of GPUs by changing the argument --nproc_per_node in test.sh.
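
For reference, the edits in step 1 amount to assignments like the following. The exact line numbers and the surrounding code in test_DDP.py may differ, and all paths shown here are placeholders:

# test_DDP.py (around lines 19-23) -- placeholder paths, adjust to your setup
input_dir  = './datasets/ISTD_Dataset/test/origin'  # shadow image input path
result_dir = './results/ISTD'                        # result image output path
weights    = './pretrained/ISTD.pth'                 # pretrained model path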

Train

  1. Download the datasets and arrange them in the following structure:
|-- ISTD_Dataset
    |-- train
        |-- origin # shadow image
        |-- shadow_mask # shadow mask
        |-- shadow_free # shadow-free GT
        |-- depth # depth map of the original image
        |-- normal # normal map of the original image
    |-- test
        |-- origin # shadow image
        |-- shadow_mask # shadow mask
        |-- shadow_free # shadow-free GT
        |-- depth # depth map of the original image
        |-- normal # normal map of the original image

  2. Run Depth Anything V2 to generate the depth map of each original image, then run calculate_normal.py to get the normal map (a minimal sketch of this step is given after this list). After you download Depth-Anything-V2, replace its run.py with the Depth-Anything-V2_run.py in this repository.
  3. Then, download the DINOv2 weights and clone the DINOv2 code into this folder.
  4. Modify the following terms in option.py:
train_dir  # training set path
val_dir   # testing set path
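
calculate_normal.py is the authoritative implementation for step 2; the snippet below is only a minimal sketch of the usual gradient-based way to derive normals from a depth map (the scaling and coordinate conventions are assumptions, not the repository's exact code):

import numpy as np

def depth_to_normal(depth):
    # Estimate a per-pixel normal map from a depth map via image-space gradients.
    dz_dy, dz_dx = np.gradient(depth.astype(np.float32))
    # Assume the normal direction (-dz/dx, -dz/dy, 1), then normalize per pixel.
    normal = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth, dtype=np.float32)))
    norm = np.linalg.norm(normal, axis=2, keepdims=True)
    return normal / np.maximum(norm, 1e-8)

# Example: depth is an H x W array produced by Depth Anything V2;
# map the normals from [-1, 1] to [0, 255] before saving as an image.
# normal_img = ((depth_to_normal(depth) * 0.5 + 0.5) * 255).astype(np.uint8)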

If you want to train the network on 256×256 images, modify train.sh with the following command:

CUDA_VISIBLE_DEVICES="0,1,2,3" python -m torch.distributed.launch --nproc_per_node 1 --master_port 29500 ./train_DDP.py --win_size 8 --train_ps 256

or, if you want to train at the original resolution, e.g., 480×640 for ISTD:

CUDA_VISIBLE_DEVICES="0,1,2,3" python -m torch.distributed.launch --nproc_per_node 1 --master_port 29500 ./train_DDP.py --warmup --win_size 10 --train_ps 320
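
In both commands above, the patch size is an exact multiple of the window size (256 = 8 × 32, 320 = 10 × 32). This is only our reading of the two examples, not a documented constraint, but it is easy to check before launching a run:

# Assumed (undocumented) relation between --train_ps and --win_size:
# the training patch should tile evenly into attention windows.
for train_ps, win_size in [(256, 8), (320, 10)]:
    assert train_ps % win_size == 0, (train_ps, win_size)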

Evaluation

The results reported in the paper are calculated with the MATLAB script used by previous methods. For details, refer to evaluation/measure_shadow.m.
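
If you only need a rough check before running the MATLAB script, PSNR and SSIM can be approximated in Python with scikit-image. The numbers may differ slightly from measure_shadow.m, which remains the reference used in the paper:

from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(pred, gt):
    # pred and gt are H x W x 3 uint8 arrays (restored image and shadow-free GT).
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    ssim = structural_similarity(gt, pred, channel_axis=2, data_range=255)
    return psnr, ssim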

Results

Evaluation on INS

The evaluation results on INS are as follows:

Method            PSNR    SSIM
DHAN              27.84   0.963
Fu et al.         27.91   0.957
ShadowFormer      28.62   0.963
DMTN              28.83   0.969
ShadowDiffusion   29.12   0.966
OmniSR (Ours)     30.38   0.973

Visual Results

Testing results

The testing results on the ISTD, ISTD+, SRD, WSRD+ and INS datasets are available here: results

References

Our implementation is based on ShadowFormer. We would like to thank them.

Citation

Bibtex:

@inproceedings{xu2024omnisr,
  title={OmniSR: Shadow Removal under Direct and Indirect Lighting},
  author={Xu, Jiamin and Li, Zelong and Zheng, Yuxin and Huang, Chenyu and Gu, Renshu and Xu, Weiwei and Xu, Gang},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2025}
}

Contact

If you have any questions, please contact [email protected]
