Francesco Girlanda1 · Denys Rozumnyi1 · Marc Pollefeys1,2 · Martin R. Oswald1,3
1 ETH Zurich, 2 Microsoft, 3 University of Amsterdam
Deblur-SLAM can successfully track the camera and reconstruct sharp maps for highly motion-blurred sequences. We directly model motion blur, which enables us to achieve high-quality reconstructions, both on challenging synthetic (top) and real (bottom) data.
Deblur-SLAM Architecture. Given an RGB input stream, we estimate an initial pose through local bundle adjustment (BA) using joint Disparity, Scale and Pose Optimization (DSPO). This pose is later refined through frame-to-model tracking that learns a sub-frame trajectory. Each keyframe is then mapped, taking advantage of the estimated monocular depth. The sub-frame trajectory is applied to render virtual sharp images, which model the physical image formation of blurry images. We optimize the photometric and geometric error between the observed blurry image and the average of our sharp images. We further refine poses globally via online loop closure, global BA, and a deformable 3D Gaussian map that adjusts for global pose and depth updates before each mapping phase.
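In equation form, the blur formation model described above averages the virtual sharp images rendered along the learned sub-frame trajectory (the notation below is ours, not the paper's): with $n$ sub-frame poses $T_1, \dots, T_n$ and corresponding sharp renders $I_{T_i}$, the synthesized blurry image is

$$\hat{B}(\mathbf{x}) = \frac{1}{n} \sum_{i=1}^{n} I_{T_i}(\mathbf{x}),$$

and the photometric and geometric errors are computed between $\hat{B}$ and the observed blurry input.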
Installation
- Clone the repo using the `--recursive` flag:
git clone --recursive https://github.com/FraGirla/Deblur-SLAM.git
cd Deblur-SLAM
- Creating a new conda environment.
conda create --name deblur-slam python=3.10
conda activate deblur-slam
- Install CUDA 11.7 and PyTorch using conda.
conda install conda-forge::cudatoolkit-dev=11.7.0
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
Now make sure that `which python` points to the correct Python executable, and test that CUDA is available:
python -c "import torch; print(torch.cuda.is_available())"
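Optionally, you can also print the CUDA version PyTorch was built with (`torch.version.cuda` is a standard PyTorch attribute) to confirm it matches the toolkit installed above:

```bash
# Optional sanity check: the printed version should match the installed CUDA toolkit.
python -c "import torch; print(torch.version.cuda)"
```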
- Update the depth rendering hyperparameter in the thirdparty library.
By default, the Gaussian rasterizer does not render Gaussians that are closer than 0.2 meters in front of the camera. In our monocular setting, where the global scale is ambiguous, this can lead to issues during rendering. Therefore, we lower this threshold from 0.2 to 0.001. Change the value at this line so that it reads
if (p_view.z <= 0.001f)// || ((p_proj.x < -1.3 || p_proj.x > 1.3 || p_proj.y < -1.3 || p_proj.y > 1.3)))
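If you prefer to patch this from the command line, a one-liner along these lines works; the file path is an assumption based on where the near-plane check lives in the reference diff-gaussian-rasterization code, so verify it against your checkout before running:

```bash
# Assumed location of the near-plane check (verify in your checkout);
# run this before installing the rasterizer in the next step.
sed -i 's/p_view.z <= 0.2f/p_view.z <= 0.001f/' \
  thirdparty/diff-gaussian-rasterization-w-pose/cuda_rasterizer/auxiliary.h
```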
- Install the remaining dependencies.
python -m pip install -e thirdparty/lietorch/
python -m pip install -e thirdparty/diff-gaussian-rasterization-w-pose/
python -m pip install -e thirdparty/simple-knn/
python -m pip install -e thirdparty/evaluate_3d_reconstruction_lib/
- Check installation.
python -c "import torch; import lietorch; import simple_knn; import
diff_gaussian_rasterization; print(torch.cuda.is_available())"
- Now install the droid backends and the other requirements.
python -m pip install -e .
python -m pip install -r requirements.txt
python -m pip install pytorch-lightning==1.9 --no-deps
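As a further sanity check that the compiled backends import correctly, you can try the following; the module name `droid_backends` follows the convention of DROID-SLAM, which this codebase builds on, so treat it as an assumption:

```bash
# 'droid_backends' is the extension name used in DROID-SLAM-derived codebases (assumption).
python -c "import droid_backends; print('droid_backends imported successfully')"
```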
- Download pretrained models.
Download the pretrained models from Google Drive and unzip them inside the `pretrained` folder. The `middle_fine.pt` decoder will not be used and can be removed.
Directory structure of `pretrained`:
.
└── pretrained
├── .gitkeep
├── droid.pth
├── middle_fine.pt
└── omnidata_dpt_depth_v2.ckpt
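A minimal shell check that the expected checkpoints are in place (file names taken from the directory tree above):

```bash
# Verify the required pretrained checkpoints exist.
for f in pretrained/droid.pth pretrained/omnidata_dpt_depth_v2.ckpt; do
  [ -f "$f" ] && echo "found: $f" || echo "MISSING: $f"
done
```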
Our synthetically blurred dataset based on Replica will be released soon.
For running Deblur-SLAM, each scene has a config file in which the `input_folder` and `output` paths need to be specified. For example:
python run.py configs/ReplicaBlurry/office0.yaml
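To process several scenes in sequence, a plain shell loop works; the scene names below are assumptions based on the standard Replica scene naming, so adjust them to the configs actually present in `configs/ReplicaBlurry/`:

```bash
# Hypothetical scene list; adjust to the available config files.
for scene in office0 office1 office2; do
  python run.py configs/ReplicaBlurry/${scene}.yaml
done
```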
Our codebase is partially based on Splat-SLAM, BAD-Gaussians, GlORIE-SLAM, GO-SLAM, DROID-SLAM, and MonoGS. We thank the authors for making these codebases publicly available. Our work would not have been possible without their great efforts!
There may be minor differences between the released codebase and the results reported in the paper. Further, note that the GPU hardware influences the results, even when running with the same seed and conda environment.
If you find our code or paper useful, please cite:
@misc{girlanda2025deblurgaussiansplattingslam,
title={Deblur Gaussian Splatting SLAM},
author={Francesco Girlanda and Denys Rozumnyi and Marc Pollefeys and Martin R. Oswald},
year={2025},
eprint={2503.12572},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2503.12572},
}