
Commit d3531d1

Author: Spiros Thermos

models branch, index.md updated

1 parent 12b1da7 commit d3531d1

File tree

1 file changed: +2 −63 lines


README.md

Lines changed: 2 additions & 63 deletions
@@ -1,66 +1,5 @@
 # Self-supervised Deep Depth Denoising
-Created by [Vladimiros Sterzentsenko](https://github.com/vladsterz)__\*__, [Leonidas Saroglou](https://www.iti.gr/iti/people/Leonidas_Saroglou.html)__\*__, [Anargyros Chatzitofis](https://github.com/tofis)__\*__, [Spyridon Thermos](https://github.com/spthermo)__\*__, [Nikolaos](https://github.com/zokin) [Zioulis](https://github.com/zuru)__\*__, [Alexandros Doumanoglou](https://www.iti.gr/iti/people/Alexandros_Doumanoglou.html), [Dimitrios Zarpalas](https://www.iti.gr/iti/people/Dimitrios_Zarpalas.html), and [Petros Daras](https://www.iti.gr/iti/people/Petros_Daras.html) from the [Visual Computing Lab](https://vcl.iti.gr) @ CERTH
 
-![poisson](./assets/images/poisson.jpg)
+**Project page:** [https://vcl3d.github.io/DeepDepthDenoising](https://vcl3d.github.io/DeepDepthDenoising)
 

-# About this repo
-This repo includes the training and evaluation scripts for the fully convolutional autoencoder presented in our paper ["Self-Supervised Deep Depth Denoising"](https://arxiv.org/pdf/1909.01193.pdf) (to appear in [ICCV 2019](http://iccv2019.thecvf.com/)). The autoencoder is trained in a self-supervised manner, exploiting RGB-D data captured by Intel RealSense D415 sensors. During inference, the model denoises depth maps without requiring RGB data.
-
-# Installation
-The code has been tested with the following setup:
-* PyTorch 1.0.1
-* Python 3.7.2
-* CUDA 9.1
-* [Visdom](https://github.com/facebookresearch/visdom)
-
-# Model Architecture
-
-![network](./assets/images/network.png)
-
-**Encoder**: 9 CONV layers; the input is downsampled 3 times before the latent space, and the number of channels doubles after each downsampling.
-
-**Bottleneck**: 2 residual blocks with a pre-activation ELU-CONV-ELU-CONV structure.
-
-**Decoder**: 9 CONV layers; the input is upsampled 3 times, each time using interpolation followed by a CONV layer.
-
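The encoder/bottleneck/decoder described in the removed section can be sketched in PyTorch. This is a hypothetical reconstruction from the prose alone: the exact layer grouping, the channel widths (`base`), the kernel sizes, and the `DepthDenoiser`/`PreActResBlock` names are assumptions, not the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreActResBlock(nn.Module):
    """Pre-activation residual block (ELU-CONV-ELU-CONV), per the bottleneck description."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        out = self.conv1(F.elu(x))
        out = self.conv2(F.elu(out))
        return x + out

class DepthDenoiser(nn.Module):
    """Hypothetical encoder-bottleneck-decoder layout:
    - encoder: convs with 3 stride-2 downsamplings, channels doubling each time
    - bottleneck: 2 pre-activation residual blocks
    - decoder: 3 upsamplings, each interpolation followed by convs
    Layer counts and widths are assumptions, not the repo's exact code."""
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8]  # channels double per downsampling
        self.stem = nn.Conv2d(in_ch, base, 3, padding=1)
        self.down = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(chs[i], chs[i], 3, padding=1), nn.ELU(),
                nn.Conv2d(chs[i], chs[i + 1], 3, stride=2, padding=1), nn.ELU())
            for i in range(3)])
        self.bottleneck = nn.Sequential(PreActResBlock(chs[3]), PreActResBlock(chs[3]))
        self.up = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(chs[i], chs[i - 1], 3, padding=1), nn.ELU(),
                nn.Conv2d(chs[i - 1], chs[i - 1], 3, padding=1), nn.ELU())
            for i in range(3, 0, -1)])
        self.out = nn.Conv2d(base, in_ch, 3, padding=1)

    def forward(self, x):
        h = self.stem(x)
        for block in self.down:
            h = block(h)
        h = self.bottleneck(h)
        for block in self.up:
            # upsample by interpolation, then a CONV layer, as the README describes
            h = F.interpolate(h, scale_factor=2, mode="nearest")
            h = block(h)
        return self.out(h)
```

With spatial dimensions divisible by 8, the output depth map has the same shape as the input.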
-# Train
-To see the available training parameters:
-
-```python train.py -h```
-
-Training example:
-
-```python train.py --batchsize 2 --epochs 20 --lr 0.00002 --visdom --visdom_iters 500 --disp_iters 10 --train_path /path/to/train/set```
-
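The self-supervision described above relies on multiple overlapping D415 views. As a generic illustration of one plausible ingredient, here is a masked L1 view-consistency term; the function name, signature, and masking scheme are illustrative assumptions, not the repo's actual loss.

```python
import torch

def view_consistency_loss(pred, warped_other_view, valid_mask):
    """Masked L1 between the predicted (denoised) depth and the depth observed
    by another sensor, warped into this view. Supervision is applied only where
    the warp is valid (valid_mask == 1)."""
    diff = (pred - warped_other_view).abs() * valid_mask
    # normalize by the number of supervised pixels; clamp avoids division by zero
    return diff.sum() / valid_mask.sum().clamp(min=1)
```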
-# Inference
-Download a pretrained model from [here](https://drive.google.com/drive/folders/15HIJrHiuqfE37v0_d-m-k5RP8UJJXmvm?usp=sharing):
-* ddd --> trained with multi-view supervision (as presented in the paper)
-* ddd_ae --> same model architecture, trained without multi-view supervision (for comparison purposes)
-
-To denoise a RealSense sample using a pretrained model:
-
-```python inference.py --model_path /path/to/pretrained/model --input_path /path/to/noisy/sample --output_path /path/to/save/denoised/sample```
-
-To save the input (noisy) and output (denoised) samples as point clouds, add the following flag to the inference script execution:
-
-```--pointclouds True```
-
-To denoise a sample using the pretrained autoencoder (the same model trained without splatting), add the following flag (and make sure you load the "ddd_ae" model):
-
-```--autoencoder True```
-
-**Benchmarking**: the mean inference time on a GeForce GTX 1080 GPU is **11 ms**.
-
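The `--pointclouds` flag exports the noisy and denoised depth maps as point clouds. The underlying back-projection is the standard pinhole camera model; this generic sketch (not the repo's exact code) assumes intrinsics `fx, fy, cx, cy` and depth in meters.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project an H x W depth map to an N x 3 point cloud using the
    pinhole model; zero-depth (invalid) pixels are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]
```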
-# Citation
-If you use this code and/or models, please cite the following:
-```
-@inproceedings{sterzentsenko2019denoising,
-  author = "Vladimiros Sterzentsenko and Leonidas Saroglou and Anargyros Chatzitofis and Spyridon Thermos and Nikolaos Zioulis and Alexandros Doumanoglou and Dimitrios Zarpalas and Petros Daras",
-  title = "Self-Supervised Deep Depth Denoising",
-  booktitle = "ICCV",
-  year = "2019"
-}
-```
-
-# License
-Our code is released under the MIT License (see the LICENSE file for details).
+**Source code:** [https://github.com/VCL3D/DeepDepthDenoising](https://github.com/VCL3D/DeepDepthDenoising)
