Event-driven Video Deblurring via Spatio-Temporal Relation-Aware Network

Chengzhi Cao, Xueyang Fu, Yurui Zhu, Gege Shi, Zheng-Jun Zha

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 799-805. https://doi.org/10.24963/ijcai.2022/112

Video deblurring with event information has attracted considerable attention. To help deblur each frame, existing methods usually compress a specific event sequence into a feature tensor of the same size as the corresponding video. However, this strategy considers neither the pixel-level spatial brightness changes nor the temporal correlation between events at each time step, resulting in insufficient use of spatio-temporal information. To address this issue, we propose a new Spatio-Temporal Relation-Aware network (STRA) for event-based video deblurring. Concretely, to exploit the spatial consistency between frames and events, we model the brightness changes as an extra prior that perceives the blurring context in each frame; to record the temporal relationships among different events, we develop a temporal memory block that continuously preserves long-range dependencies across event sequences. In this way, the complementary information contained in the events and frames, as well as the correlation between neighboring events, can be fully utilized to continually recover spatial textures from events. Experiments show that our STRA significantly outperforms several competing methods; e.g., on the HQF dataset, our network achieves a PSNR gain of up to 1.3 dB over the strongest existing method. The code is available at https://github.com/Chengzhi-Cao/STRA.
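The two mechanisms the abstract describes can be made concrete with a short sketch. The PyTorch code below is a minimal illustration under assumed conventions, not the authors' implementation (that lives in the linked repository): the names `events_to_brightness_prior` and `TemporalMemoryBlock` are hypothetical. The first function accumulates raw events (x, y, t, polarity) into a temporally binned per-pixel brightness-change map that could serve as a spatial prior; the second is a ConvGRU-style cell that carries a hidden state across successive event slices so long-range temporal dependencies are not discarded.

```python
# Hedged sketch only: hypothetical names, not the STRA codebase.
import torch
import torch.nn as nn


def events_to_brightness_prior(events, height, width, num_bins):
    """Accumulate events into a (num_bins, H, W) brightness-change map.

    `events` is a float tensor of shape (N, 4) holding (x, y, t, polarity),
    with x/y assumed in-bounds and polarity in {-1, +1}.
    """
    prior = torch.zeros(num_bins, height, width)
    x, y = events[:, 0].long(), events[:, 1].long()
    t, p = events[:, 2], events[:, 3]
    # Normalize timestamps to [0, num_bins - 1] and bucket each event.
    t = (t - t.min()) / (t.max() - t.min() + 1e-9) * (num_bins - 1)
    b = t.long()
    # Signed accumulation: positive events brighten a pixel, negative darken.
    prior.index_put_((b, y, x), p, accumulate=True)
    return prior


class TemporalMemoryBlock(nn.Module):
    """ConvGRU-style cell: threads a hidden state across event slices."""

    def __init__(self, channels):
        super().__init__()
        self.gates = nn.Conv2d(2 * channels, 2 * channels, 3, padding=1)
        self.cand = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, x, h):
        if h is None:
            h = torch.zeros_like(x)
        # Update gate z and reset gate r, computed from input and memory.
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        # Blend old memory with the candidate state.
        return (1 - z) * h + z * h_tilde
```

In a full network, such a prior would be fused with features of the blurry frame, and the memory state would be threaded chunk by chunk through the event stream so that earlier events keep informing later reconstructions.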
Keywords:
Computer Vision: Computational photography
Computer Vision: Applications