RePaint-NeRF: NeRF Editing via Semantic Masks and Diffusion Models
Xingchen Zhou, Ying He, F. Richard Yu, Jianqiang Li, You Li

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 1813-1821. https://doi.org/10.24963/ijcai.2023/201

The emergence of Neural Radiance Fields (NeRF) has advanced the synthesis of high-fidelity views of the intricate real world. However, repainting the content in a NeRF remains a demanding task. In this paper, we propose a novel framework that takes RGB images as input and alters the 3D content in neural scenes. Our work leverages existing diffusion models to guide changes in the designated 3D content. Specifically, we semantically select the target object, and a pre-trained diffusion model guides the NeRF model to generate new 3D objects, improving the editability, diversity, and application range of NeRF. Experimental results show that our algorithm is effective for editing 3D objects in NeRF under different text prompts, including edits to appearance, shape, and more. We validate our method on these editing tasks using both real-world and synthetic datasets. Please visit https://repaintnerf.github.io for a better view of our results.
Keywords:
Computer Vision: CV: 3D computer vision
Computer Vision: CV: Applications
Computer Vision: CV: Neural generative models, auto encoders, GANs