Zero-Shot Scene Reconstruction from Single Images with Deep Prior Assembly

Junsheng Zhou · Yu-Shen Liu · Zhizhong Han

NeurIPS 2024

We present deep prior assembly, a novel framework that assembles diverse deep priors from large models for scene reconstruction from single images in a zero-shot manner. This repository contains the code of the paper.

Overview

Overview of DeepPriorAssembly. Given a single image of a 3D scene, we detect the instances and segment them with Grounded-SAM. After normalizing the size and center of each instance, we improve the quality of the instance images by enhancing and inpainting them. Here, we take a sofa in the image as an example. Leveraging the Stable-Diffusion model, we generate a set of candidate images with image-to-image generation and a text prompt of the instance category predicted by Grounded-SAM. We then filter out bad generation samples with Open-CLIP by evaluating the cosine similarity between each generated instance and the original one. After that, we generate multiple 3D model proposals for this instance with Shap·E from the Top-K generated instance images. Additionally, we estimate the depth of the original input image with Omnidata as a 3D geometry prior. To estimate the layout, we optimize the location, orientation and size of each 3D proposal by matching it against the estimated segmentation masks and depths (shown in red for the example sofa). Finally, we choose the 3D model proposal with the minimal matching error as the final prediction for this instance, and the final scene is generated by combining the generated 3D models of all detected instances.
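
As a concrete illustration of the Open-CLIP filtering step, here is a minimal sketch (not the repository's code; the model tag, pretrained weights and K are assumptions) that ranks Stable-Diffusion candidates by the cosine similarity of their CLIP image embeddings to the original instance crop and keeps the Top-K:

    # Minimal sketch of the Open-CLIP candidate filtering step.
    # Illustrative only: model tag, weights and K are assumptions.
    import torch
    import open_clip
    from PIL import Image

    model, _, preprocess = open_clip.create_model_and_transforms(
        'ViT-B-32', pretrained='laion2b_s34b_b79k')
    model.eval()

    def embed(path):
        # Encode one image into a unit-norm CLIP feature.
        image = preprocess(Image.open(path).convert('RGB')).unsqueeze(0)
        with torch.no_grad():
            feat = model.encode_image(image)
        return feat / feat.norm(dim=-1, keepdim=True)

    def top_k_candidates(original_path, candidate_paths, k=4):
        # Rank generated candidates by cosine similarity to the original crop.
        ref = embed(original_path)
        scored = [(p, (embed(p) @ ref.T).item()) for p in candidate_paths]
        scored.sort(key=lambda x: x[1], reverse=True)
        return scored[:k]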

Visual Comparisons

More Comparisons on 3D-Front

More Comparisons on BlendSwap and Replica

More Comparisons on ScanNet

Installation

Clone this repository and install the required packages:

git clone https://github.com/junshengzhou/DeepPriorAssembly
cd DeepPriorAssembly

conda create -n dpa python=3.11
conda activate dpa
conda install pytorch torchvision pytorch-cuda=12.1 -c pytorch -c nvidia

Run DeepPriorAssembly:

Next, install the required packages of the large vision models we use: Grounded-SAM, Stable-Diffusion, DUST3R and Shap-E. After that, you can start using DeepPriorAssembly by running:

bash run.sh

Quick demo:

If you would just like to try DeepPriorAssembly without installing the full environment, you can download the prepared demo data from this link, put it in data/outputs, and simply run the following command for a scene registration:

python registration/optimization_5dof.py --image_id 479d2d66-4d1a-47ca-a023-4286fc547301---rgb_0017 --geometry_dir data/outputs/geometry --mask_dir data/outputs/segmentation --object_dir data/outputs/object_generation --output_dir data/outputs/final_registration

The results will be saved in data/outputs/final_registration.
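
For intuition about what the registration stage optimizes, below is a hypothetical 5-DoF sketch (translation, yaw and scale; this is not the actual optimization_5dof.py) that fits a proposal point cloud to points unprojected from the estimated depth inside the instance mask, using a simple one-directional Chamfer-style loss:

    # Hypothetical 5-DoF registration sketch (location + yaw + size).
    # Illustrative only -- not the repository's optimization_5dof.py.
    import torch

    def register_5dof(proposal_pts, target_pts, iters=500, lr=1e-2):
        # proposal_pts: (N, 3) points sampled from a 3D model proposal.
        # target_pts:   (M, 3) points unprojected from depth inside the mask.
        t = torch.zeros(3, requires_grad=True)      # location
        yaw = torch.zeros(1, requires_grad=True)    # orientation (up-axis)
        log_s = torch.zeros(1, requires_grad=True)  # size, in log space
        opt = torch.optim.Adam([t, yaw, log_s], lr=lr)
        for _ in range(iters):
            opt.zero_grad()
            c, s = torch.cos(yaw), torch.sin(yaw)
            # Rotation about the vertical (y) axis.
            R = torch.stack([
                torch.cat([c, torch.zeros(1), s]),
                torch.tensor([0., 1., 0.]),
                torch.cat([-s, torch.zeros(1), c]),
            ])
            pts = log_s.exp() * proposal_pts @ R.T + t
            # One-directional Chamfer distance to the depth points.
            loss = torch.cdist(pts, target_pts).min(dim=1).values.mean()
            loss.backward()
            opt.step()
        return t.detach(), yaw.detach(), log_s.exp().detach(), loss.item()

Under this reading, the proposal with the smallest final matching error across candidates is kept as the prediction for the instance.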

Notes:

  • We use the DUST3R model to estimate the geometry in the released code. This adjustment makes the code easier to use by removing the requirement for camera intrinsics and ground-truth depths. If you would like to use the original Omnidata for geometry estimation, you can 1) follow the Omnidata repo for depth estimation, 2) use ground-truth depths to solve for the depth scale and shift, and 3) project the depth into 3D space using the camera intrinsics. A small sketch of steps 2) and 3) is given after this list.

  • By default, we use a 5-DoF registration model, since the images of 3D-Front are captured with the camera parallel to the ground. You can run python optimization_7dof.py in run.sh for a 7-DoF registration.

  • If you would like to use our code, please remember to check the licenses of the assembled large vision models in their codebases.
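
As referenced in the first note, here is a small sketch of steps 2) and 3) (assuming a pinhole camera with intrinsics fx, fy, cx, cy; this is not code from this repository or from Omnidata):

    # Sketch of depth scale/shift alignment and unprojection.
    # Assumes a pinhole camera model; illustrative only.
    import numpy as np

    def align_scale_shift(pred_depth, gt_depth, mask):
        # Solve min over (s, b) of ||s * pred + b - gt||^2 on valid pixels.
        p = pred_depth[mask].reshape(-1, 1)
        A = np.hstack([p, np.ones_like(p)])
        (s, b), *_ = np.linalg.lstsq(A, gt_depth[mask].ravel(), rcond=None)
        return s * pred_depth + b

    def unproject(depth, fx, fy, cx, cy):
        # Lift an aligned depth map to a 3D point cloud with the intrinsics.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) / fx * depth
        y = (v - cy) / fy * depth
        return np.stack([x, y, depth], axis=-1).reshape(-1, 3)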

Citation

If you find our code or paper useful, please consider citing:

  @inproceedings{zhou2024deep,
      title = {Zero-Shot Scene Reconstruction from Single Images with Deep Prior Assembly},
      author = {Zhou, Junsheng and Liu, Yu-Shen and Han, Zhizhong},
      booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
      year = {2024}
  }
