DDNeRF: Depth Distribution Neural Radiance Fields
David Dadon
Ohad Fried
Yacov Hel-Or
Department of Computer Science, Reichman University, Israel



Paper [WACV 2023]
Poster
Code
Video


Abstract

The field of implicit neural representation has made significant progress. Models such as neural radiance fields (NeRF), which use relatively small neural networks, can represent high-quality scenes and achieve state-of-the-art results for novel view synthesis. Training these types of networks, however, is still computationally expensive, and such models struggle with real-life 360° scenes. In this work, we propose the depth distribution neural radiance field (DDNeRF), a new method that significantly increases sampling efficiency along rays during training, while achieving superior results for a given sampling budget. DDNeRF achieves this performance by learning a more accurate representation of the density distribution along rays. More specifically, the proposed framework trains a coarse model to predict the internal distribution of the transparency of an input volume along each ray. This estimated distribution then guides the sampling procedure of the fine model. Our method allows using fewer samples during training while achieving better output quality with the same computational resources.


Method

Hierarchical Sampling
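DDNeRF builds on the hierarchical (coarse-to-fine) sampling of NeRF and Mip-NeRF: the weights produced by the coarse pass define a piecewise-constant PDF along each ray, and the fine pass draws its samples by inverting the corresponding CDF. Below is a minimal NumPy sketch of this standard inverse-transform procedure; the function name and signature are illustrative, not taken from the released code.

import numpy as np

def sample_pdf(bins, weights, n_samples, rng=None):
    # bins:    (n_bins + 1,) depths bounding each coarse interval
    # weights: (n_bins,) non-negative weights from the coarse pass
    rng = np.random.default_rng() if rng is None else rng
    pdf = weights / (weights.sum() + 1e-8)           # normalize to a PDF
    cdf = np.concatenate([[0.0], np.cumsum(pdf)])    # CDF at the bin edges
    u = rng.uniform(0.0, 1.0, n_samples)             # uniform draws in [0, 1)
    idx = np.clip(np.searchsorted(cdf, u, side="right") - 1,
                  0, len(weights) - 1)               # bin containing each draw
    denom = np.where(pdf[idx] > 0.0, pdf[idx], 1.0)  # avoid division by zero
    t = (u - cdf[idx]) / denom                       # position within the bin
    return bins[idx] + t * (bins[idx + 1] - bins[idx])

Samples concentrate in bins with large coarse weights. DDNeRF replaces this piecewise-constant estimate of the density along the ray with a smoother, more accurate one, as described below.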

Algorithm Overview



DDNeRF full pipeline: (1) Draw a cone in space and split it into relatively uniform intervals along the depth axis. (2) Pass these intervals through an integrated positional encoding (IPE) and then through the coarse network to get predictions. (3) Render the coarse RGB image. (4) Approximate the density distribution with respect to the coarse samples (blue dots) and their Gaussian parameters, then draw the fine samples (purple dots). (5) Pass these samples through the IPE and then through the fine network to get predictions. (6) Render the final RGB image and depth map.
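Steps (3) and (6) use standard NeRF volume rendering to composite the per-interval predictions into a pixel color (and, for the fine pass, an expected depth). A minimal NumPy sketch of that compositing, with illustrative names:

import numpy as np

def volume_render(densities, colors, z_edges):
    # densities: (n,) predicted density per interval
    # colors:    (n, 3) predicted RGB per interval
    # z_edges:   (n + 1,) interval bounds along the ray
    deltas = np.diff(z_edges)                        # interval lengths
    alphas = 1.0 - np.exp(-densities * deltas)       # opacity of each interval
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alphas)[:-1]])
    weights = trans * alphas                         # contribution per interval
    midpoints = 0.5 * (z_edges[:-1] + z_edges[1:])
    return weights @ colors, weights @ midpoints     # pixel color, expected depth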




Distribution estimation

(a) PDF truncation process. The blue and orange curves are the PDF before and after truncation; gray marks the region outside the section boundaries. (b) Several adjacent truncated distributions. Each orange truncated Gaussian is assigned to one interval; the vertical dashed lines are the interval bounds and the horizontal lines are the interval weights. (c) We combine all distributions into one final distribution.
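The construction in (a)-(c) can be read as a mixture of per-interval truncated Gaussians, each scaled by its interval weight. A small SciPy sketch of evaluating such a combined PDF along a ray (names and shapes are ours, for illustration):

import numpy as np
from scipy.stats import truncnorm

def combined_pdf(edges, mus, sigmas, weights, z):
    # edges:       (n + 1,) interval bounds along the ray
    # mus, sigmas: (n,) per-interval Gaussian parameters
    # weights:     (n,) interval weights
    # z:           query depths at which to evaluate the PDF
    z = np.asarray(z, dtype=float)
    weights = weights / weights.sum()                # normalize interval weights
    pdf = np.zeros_like(z)
    for i in range(len(mus)):
        lo, hi = edges[i], edges[i + 1]
        a = (lo - mus[i]) / sigmas[i]                # standardized lower bound
        b = (hi - mus[i]) / sigmas[i]                # standardized upper bound
        inside = (z >= lo) & (z < hi)                # zero outside the section
        pdf[inside] += weights[i] * truncnorm.pdf(
            z[inside], a, b, loc=mus[i], scale=sigmas[i])
    return pdf

Because each truncated Gaussian integrates to one over its own interval and the weights sum to one, the combined curve is itself a valid PDF, which can then drive the fine sampling in place of NeRF's piecewise-constant estimate.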

Training process

Training with eight intervals. Mip-NeRF is shown in the first row and DDNeRF in the second row.
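Both models in this comparison are trained in the usual coarse-plus-fine fashion, supervising the two rendered colors against the ground-truth pixel. A minimal sketch of that joint photometric loss (standard NeRF practice; the DDNeRF-specific distribution supervision is not shown):

import numpy as np

def photometric_loss(rgb_coarse, rgb_fine, rgb_gt):
    # rgb_coarse, rgb_fine, rgb_gt: (n_rays, 3) colors for a batch of rays
    coarse_term = np.mean((rgb_coarse - rgb_gt) ** 2)  # supervises the coarse pass
    fine_term = np.mean((rgb_fine - rgb_gt) ** 2)      # supervises the fine pass
    return coarse_term + fine_term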




Results

Synthetic scenes:

Forward-facing scenes:

The first row in each scene was generated by DDNeRF and the second row by Mip-NeRF.

Real-world 360° bounded/unbounded scenes:




Citation


@InProceedings{Dadon_2023_WACV,
    author    = {Dadon, David and Fried, Ohad and Hel-Or, Yacov},
    title     = {DDNeRF: Depth Distribution Neural Radiance Fields},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {755-763}
}


Acknowledgements

This work was supported by the Israeli Ministry of Science and Technology under The National Foundation for Applied Science (MIA), and by the Israel Science Foundation (grant No. 1574/21).




Webpage template from here.