Hallucinating Optical Flow Features for Video Classification

Yongyi Tang, Lin Ma, Lianqiang Zhou

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 926-932. https://doi.org/10.24963/ijcai.2019/130

Appearance and motion are two key components for depicting and characterizing video content. Two-stream models currently achieve state-of-the-art performance on video classification. However, extracting motion information, specifically in the form of optical flow features, is extremely computationally expensive, especially for large-scale video classification. In this paper, we propose a motion hallucination network, namely MoNet, to imagine the optical flow features from the appearance features, with no reliance on optical flow computation. Specifically, MoNet models the temporal relationships of the appearance features and exploits the contextual relationships of the optical flow features with concurrent connections. Extensive experimental results demonstrate that the proposed MoNet can effectively and efficiently hallucinate the optical flow features, which, together with the appearance features, consistently improve video classification performance. Moreover, MoNet cuts almost half of the computational and data-storage burden of two-stream video classification. Our code is available at: https://github.com/YongyiTang92/MoNet-Features
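To make the idea concrete, the following is a minimal NumPy sketch of the pipeline the abstract describes: a hallucinator that produces flow-like features from appearance features alone (so no optical flow is ever computed), followed by two-stream late fusion for classification. All names, dimensions, and the particular hallucination function (a temporal difference plus a linear map) are illustrative assumptions, not the actual MoNet architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the paper's actual feature sizes differ).
T, D_APP, D_FLOW, N_CLASSES = 8, 16, 16, 5

def hallucinate_flow(appearance, w):
    """Sketch of flow-feature hallucination: the model sees only appearance
    features, so motion must be inferred from how they change over time.
    A temporal difference followed by a linear map stands in for MoNet's
    learned temporal modelling."""
    diffs = np.diff(appearance, axis=0, prepend=appearance[:1])  # (T, D_APP)
    return np.tanh(diffs @ w)                                    # (T, D_FLOW)

def two_stream_logits(appearance, flow, w_app, w_flow):
    """Two-stream late fusion: average the per-stream logits computed
    from temporally pooled features."""
    app_logits = appearance.mean(axis=0) @ w_app
    flow_logits = flow.mean(axis=0) @ w_flow
    return 0.5 * (app_logits + flow_logits)

# Random stand-ins for learned weights and extracted appearance features.
w_hal = rng.normal(size=(D_APP, D_FLOW))
w_app = rng.normal(size=(D_APP, N_CLASSES))
w_flow = rng.normal(size=(D_FLOW, N_CLASSES))
appearance = rng.normal(size=(T, D_APP))

flow_hat = hallucinate_flow(appearance, w_hal)  # no optical flow computed
logits = two_stream_logits(appearance, flow_hat, w_app, w_flow)
pred = int(np.argmax(logits))
print(flow_hat.shape, logits.shape, pred)
```

The efficiency claim follows from the structure above: at inference time the flow stream consumes hallucinated features derived from appearance features already in hand, so the cost of computing and storing optical flow is removed entirely.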
Keywords:
Computer Vision: Computer Vision
Computer Vision: Video: Events, Activities and Surveillance
Computer Vision: Action Recognition