[ICIP 2025] Official implementation of RT-X Net: RGB-Thermal cross attention network for Low-Light Image Enhancement

RT-X Net: RGB-Thermal cross attention network for Low-Light Image Enhancement

Raman Jha, Adithya Lenka, Mani Ramanagopal, Aswin Sankaranarayanan, Kaushik Mitra


Paper, Supplementary Material, Web Page

Our paper has been featured in the Awesome Low Light Image Enhancement Papers List.


Model Architecture

Qualitative Results:

V-TIEE Dataset

Real-world V-TIEE Dataset: Co-located Visible-Thermal Image Pairs for HDR and Low-light Vision Research

High-gain Multi-exposure Visible-Thermal Image Pairs for Test Input Scenes


Low-gain Multi-exposure Visible-Thermal Image Pairs for Reference Scenes


1. Environment Creation

  • Make Conda Environment
conda create -n rtx-net python=3.7
conda activate rtx-net
  • Install Dependencies
conda install pytorch=1.11 torchvision cudatoolkit=11.3 -c pytorch

pip install matplotlib scikit-learn scikit-image opencv-python yacs joblib natsort h5py tqdm tensorboard

pip install einops gdown addict future lmdb numpy pyyaml requests scipy yapf lpips
  • Install BasicSR
python setup.py develop --no_cuda_ext

2. Dataset Preparation

Download the LLVIP dataset here.
Google Drive Hugging Face

You can also download the dataset directly from Hugging Face using these commands.

git lfs install
git clone https://huggingface.co/datasets/jhakrraman/LLVIP

After the download, please place it in ./data/LLVIP
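For reference, a layout along these lines should work after extraction. The subdirectory names below are assumptions based on the public LLVIP release (visible/infrared images split into train/test); the exact paths the dataloader expects are set in Options/RTxNet_LLVIP.yml, so verify against that file:

```
data/
└── LLVIP/
    ├── visible/        # RGB images
    │   ├── train/
    │   └── test/
    └── infrared/       # thermal images
        ├── train/
        └── test/
```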

Download the V-TIEE dataset here.
Google Drive Hugging Face

Similar to the LLVIP dataset, you can also download the dataset directly from Hugging Face using these commands.

git lfs install
git clone https://huggingface.co/datasets/jhakrraman/V-TIEE

The proposed dataset can be used to test real-world low-light image enhancement or HDR image generation.
To test RT-X Net on the V-TIEE dataset, arrange the V-TIEE folder structure to match the LLVIP dataset, and choose images with various noise levels and illumination conditions.

3. Training

To train RT-X Net, use the following commands.

# activate the environment
conda activate rtx-net

# LLVIP
python3 basicsr/train.py --opt Options/RTxNet_LLVIP.yml

4. Testing

Download the pre-trained RT-X Net model from Google Drive or Hugging Face.
Place it in the pretrained_weights folder.

# activate the environment
conda activate rtx-net

# LLVIP
python3 Enhancement/test_from_dataset.py --opt Options/RTxNet_LLVIP.yml --weights pretrained_weights/LLVIP_best.pth --dataset LLVIP

Our work is based on Retinexformer. We thank the authors for releasing their code.

If you find this code or the dataset useful, please cite:

@misc{jha2025rtxnetrgbthermalcross,  
      title={RT-X Net: RGB-Thermal cross attention network for Low-Light Image Enhancement},   
      author={Raman Jha and Adithya Lenka and Mani Ramanagopal and Aswin Sankaranarayanan and Kaushik Mitra},  
      year={2025},  
      eprint={2505.24705},  
      archivePrefix={arXiv},  
      primaryClass={cs.CV},  
      url={https://arxiv.org/abs/2505.24705},   
}
