Kuo Xu, Jie Li, Zhen-Qiang Li, Yang-Jie Cao. SG-NeRF: Sparse-Input Generalized Neural Radiance Fields for Novel View Synthesis[J]. Journal of Computer Science and Technology. DOI: 10.1007/s11390-024-4157-6

SG-NeRF: Sparse-Input Generalized Neural Radiance Fields for Novel View Synthesis

Traditional neural radiance fields require dense input images and per-scene optimization to render novel views, which limits their practical applications. We propose SG-NeRF, a generalizable method that infers a scene from its input images and performs high-quality rendering without per-scene optimization. First, we construct an improved multi-view stereo network based on convolutional attention and a multi-level fusion mechanism to extract the geometric and appearance features of the scene from the sparse input images; these features are then aggregated by multi-head attention and serve as the input of the neural radiance field. Using the neural radiance field to decode scene features, rather than to map positions and view directions, allows our method to train and infer across scenes, so the radiance field generalizes to novel view synthesis on unseen scenes. On real unseen DTU scenes, our PSNR improves by 3.14 dB over the baseline method under the same input conditions. In addition, when dense input views are available, a short refinement training further improves the average PSNR by 1.04 dB and yields higher-quality renderings.
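To make the feature-decoding idea concrete, the sketch below (not the authors' code) shows one way the pipeline could be wired in PyTorch: per-view features for each 3D sample point are fused with multi-head attention, and an MLP decodes the fused feature, together with the view direction, into density and color. All module names, feature sizes, and layer choices here are illustrative assumptions; the paper's actual MVS encoder and network widths are not specified in this abstract.

import torch
import torch.nn as nn

class FeatureAggregator(nn.Module):
    """Fuses per-view features of a sample point via multi-head attention
    (hypothetical stand-in for the paper's aggregation module)."""
    def __init__(self, feat_dim=32, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.query = nn.Parameter(torch.randn(1, 1, feat_dim))  # learned query token

    def forward(self, view_feats):
        # view_feats: (num_points, num_views, feat_dim), e.g., geometric +
        # appearance features produced by an MVS-style encoder.
        q = self.query.expand(view_feats.shape[0], -1, -1)
        fused, _ = self.attn(q, view_feats, view_feats)
        return fused.squeeze(1)  # (num_points, feat_dim)

class FeatureNeRF(nn.Module):
    """Decodes the aggregated scene feature (plus view direction) into
    density and RGB, instead of mapping raw positions/directions."""
    def __init__(self, feat_dim=32, hidden=128):
        super().__init__()
        self.agg = FeatureAggregator(feat_dim)
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma = nn.Linear(hidden, 1)  # volume density
        self.rgb = nn.Linear(hidden, 3)    # color

    def forward(self, view_feats, view_dirs):
        h = self.mlp(torch.cat([self.agg(view_feats), view_dirs], dim=-1))
        return torch.relu(self.sigma(h)), torch.sigmoid(self.rgb(h))

# Usage: 1024 sample points, 3 sparse input views, 32-D features.
model = FeatureNeRF()
feats = torch.randn(1024, 3, 32)
dirs = torch.randn(1024, 3)
sigma, rgb = model(feats, dirs)
print(sigma.shape, rgb.shape)  # (1024, 1), (1024, 3)

Because the radiance field conditions on features rather than on a scene-specific coordinate mapping, the same weights can, in principle, be trained across many scenes and applied to an unseen one without retraining.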