PyTorch implementation of the paper D4NeRF: Detachable Novel Views Synthesis of Dynamic Scenes Using Distribution-Driven Neural Radiance Fields (AAAI 2025).
The code was developed with Python == 3.8.8, PyTorch == 1.11.0, and CUDA == 11.3. The dependencies include (see the install command after this list):
- scikit-image
- opencv
- imageio
- cupy
- kornia
- configargparse
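These can be installed with pip; the exact package names below (opencv-python, and a cupy wheel matching CUDA 11.3) are assumptions and may need adjusting for your environment:

pip install scikit-image opencv-python imageio cupy-cuda113 kornia configargparse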
Then download the NVIDIA Dynamic Scenes and Urban Driving datasets. The overall file structure should be:
D4NeRF
├── configs
├── logs
├── models
├── data
│   ├── NVIDIA
│   ├── URBAN
│   └── others
...
Training
python train.py --config configs/config_Handcart.txt
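The config files in configs/ are plain key = value text files parsed with configargparse. A minimal sketch of how such a --config flag is typically wired up (the expname and datadir parameter names are illustrative, not necessarily those used by train.py):

```python
import configargparse

# configargparse merges values from the config file with command-line flags;
# explicit command-line flags take precedence over the file.
parser = configargparse.ArgumentParser()
parser.add_argument('--config', is_config_file=True, help='path to a config .txt file')
parser.add_argument('--expname', type=str, help='experiment name (illustrative)')
parser.add_argument('--datadir', type=str, help='dataset directory (illustrative)')
args = parser.parse_args()  # the file may contain lines like: expname = Handcart
```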
Evaluation on the NVIDIA Dynamic Scenes dataset focuses on synthesis across different viewpoints, while evaluation on the Urban Driving dataset interpolates across time intervals (frames).
Evaluation on NVIDIA Dynamic Scenes
python evaluation_NV.py --config configs/config_Balloon1.txt
Evaluation on Urban Driving Scenes
python evaluation_urban.py --config configs/config_Handcart.txt
Fixed time with view interpolation:
python view_render.py --config configs/config_Handcart.txt --fixed_time --target_idx 15
Time interpolation with fixed view:
python view_render.py --config configs/config_Handcart.txt --fixed_view --target_idx 15
Both time and view interpolation:
python view_render.py --config configs/config_Handcart.txt --no_fixed --target_idx 15
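Conceptually, the three modes fix the time index, fix the camera pose, or interpolate both at once. The following is a rough sketch of that logic, not the repository's actual view_render.py; it assumes (3, 4) camera-to-world pose matrices and uses naive linear pose blending (real code would interpolate rotations with slerp):

```python
import numpy as np

def interpolate_render_path(poses, times, target_idx, n_frames=60,
                            fixed_time=False, fixed_view=False):
    """poses: (N, 3, 4) camera-to-world matrices; times: (N,) normalized timestamps."""
    i = target_idx
    j = min(target_idx + 1, len(poses) - 1)
    path = []
    for s in np.linspace(0.0, 1.0, n_frames):
        pose = poses[i] if fixed_view else (1.0 - s) * poses[i] + s * poses[j]
        t = times[i] if fixed_time else (1.0 - s) * times[i] + s * times[j]
        path.append((pose, t))  # each (pose, t) would be fed to the renderer
    return path
```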
Use COLMAP to acquire camera poses and intrinsics. Then run the scripts below to obtain flow and depth estimates from the RAFT and MiDaS models; the pre-trained weights have already been included in the repository.
Pose transformation
python save_poses_nerf.py --data_path "/xxx/dense"  # data_path is the path to the COLMAP estimation results
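Assuming save_poses_nerf.py follows the common LLFF/NSFF convention (an assumption, not confirmed by this README), it produces a poses_bounds.npy with one 17-value row per image: a flattened 3x5 pose-plus-intrinsics matrix and the near/far depth bounds. It can be inspected like this:

```python
import numpy as np

# Assumption: LLFF-style poses_bounds.npy written next to the COLMAP results.
data = np.load('poses_bounds.npy')        # shape (N, 17)
poses = data[:, :-2].reshape(-1, 3, 5)    # 3x4 c2w pose + [H, W, focal] column
bounds = data[:, -2:]                     # per-image near/far depth bounds
print(poses.shape, bounds.min(), bounds.max())
```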
Depth estimation
python run_midas.py --data_path "/xxx/dense" --resize_height 272
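To sanity-check monocular depth on a single image independently of run_midas.py, MiDaS can also be loaded through the public intel-isl/MiDaS torch.hub entry (the filename frame.png is illustrative):

```python
import cv2
import torch

# Load a small MiDaS model and its matching input transforms from torch.hub.
midas = torch.hub.load('intel-isl/MiDaS', 'MiDaS_small')
midas_transforms = torch.hub.load('intel-isl/MiDaS', 'transforms')
midas.eval()

img = cv2.cvtColor(cv2.imread('frame.png'), cv2.COLOR_BGR2RGB)
batch = midas_transforms.small_transform(img)
with torch.no_grad():
    depth = midas(batch).squeeze().cpu().numpy()  # relative (inverse) depth map
```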
Flow estimation
python run_flows_video.py --model models/raft-things.pth --data_path /xxx/dense
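A quick way to verify an estimated flow field (assuming it is stored as an (H, W, 2) array of forward flow in pixels; the filenames below are illustrative) is to backward-warp the next frame toward the current one with OpenCV:

```python
import cv2
import numpy as np

# Assumption: flow.npy holds forward flow (H, W, 2) from frame t to t+1, in pixels.
flow = np.load('flow.npy')
next_frame = cv2.imread('frame_0001.png')

h, w = flow.shape[:2]
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
map_x = (grid_x + flow[..., 0]).astype(np.float32)
map_y = (grid_y + flow[..., 1]).astype(np.float32)

# Backward-warp frame t+1 to frame t; the result should resemble frame t
# wherever the flow estimate is accurate.
warped = cv2.remap(next_frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```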
The code is built upon several prior open-source projects. Thanks for their great work.