EDyGS: Event Enhanced Dynamic 3D Radiance Fields from Blurry Monocular Video
Mengxu Lu, Zehao Chen, Yan Liu, De Ma, Huajin Tang, Qian Zheng, Gang Pan
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 1684-1692.
https://doi.org/10.24963/ijcai.2025/188
Novel view synthesis in dynamic scenes plays a critical role in 3D vision. Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS) have shown great promise for this task but struggle with motion blur, which often arises in real-world captures due to camera or object motion. Existing methods address camera motion blur but fall short in dynamic scenes, where the coupling of camera and object motion complicates multi-view consistency and temporal coherence. In this work, we propose EDyGS, a model designed to reconstruct sharp novel views from event streams and blurry monocular videos of dynamic scenes. Our approach introduces a motion-mask 3D Gaussian model that assigns each Gaussian an additional attribute distinguishing static from dynamic regions. Leveraging this motion-mask field, we separate the static and dynamic regions and optimize them independently. A progressive learning strategy is adopted: static regions are reconstructed by jointly optimizing camera poses and learnable 3D Gaussians, while dynamic regions are modeled with an implicit deformation field alongside learnable 3D Gaussians. We conduct quantitative and qualitative experiments on synthetic and real-world data, and the results demonstrate that EDyGS effectively handles blurry inputs in dynamic scenes.
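The paper itself does not include code here; the following is a minimal sketch, assuming a PyTorch-style 3DGS parameterization, of how a per-Gaussian motion-mask attribute could be stored and used to split static from dynamic Gaussians as the abstract describes. All names (e.g., MotionMaskGaussians, mask_logit) are illustrative and not the authors' implementation.

```python
import torch
import torch.nn as nn


class MotionMaskGaussians(nn.Module):
    """Illustrative 3D Gaussian set with an extra per-Gaussian motion-mask attribute.

    Sketch of the idea in the abstract, not the authors' code: each Gaussian carries
    a learnable scalar that, after a sigmoid, indicates how likely it is to belong
    to a dynamic (moving) region of the scene.
    """

    def __init__(self, num_gaussians: int):
        super().__init__()
        # Standard 3DGS attributes (positions, log-scales, quaternion rotations,
        # opacities, RGB features); initialization here is arbitrary.
        self.xyz = nn.Parameter(torch.randn(num_gaussians, 3))
        self.log_scale = nn.Parameter(torch.zeros(num_gaussians, 3))
        self.rotation = nn.Parameter(
            torch.tensor([[1.0, 0.0, 0.0, 0.0]]).repeat(num_gaussians, 1)
        )
        self.opacity_logit = nn.Parameter(torch.zeros(num_gaussians, 1))
        self.rgb = nn.Parameter(torch.rand(num_gaussians, 3))
        # Additional attribute: one logit per Gaussian for the motion mask.
        self.mask_logit = nn.Parameter(torch.zeros(num_gaussians, 1))

    def motion_mask(self) -> torch.Tensor:
        """Probability in [0, 1] that each Gaussian belongs to a dynamic region."""
        return torch.sigmoid(self.mask_logit)

    def split_static_dynamic(self, threshold: float = 0.5):
        """Boolean indices separating static and dynamic Gaussians, e.g. so the two
        regions can be optimized independently under the progressive strategy."""
        dynamic = self.motion_mask().squeeze(-1) > threshold
        return ~dynamic, dynamic


# Usage sketch: static Gaussians would be optimized jointly with camera poses,
# while dynamic Gaussians would additionally be driven by a deformation field.
gaussians = MotionMaskGaussians(num_gaussians=10_000)
static_idx, dynamic_idx = gaussians.split_static_dynamic()
print(static_idx.sum().item(), "static /", dynamic_idx.sum().item(), "dynamic")
```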
Keywords:
Computer Vision: CV: 3D computer vision
Computer Vision: CV: Computational photography
