Watching a Small Portion could be as Good as Watching All: Towards Efficient Video Classification

Hehe Fan, Zhongwen Xu, Linchao Zhu, Chenggang Yan, Jianjun Ge, Yi Yang

Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 705-711. https://doi.org/10.24963/ijcai.2018/98

We aim to significantly reduce the computational cost of classifying temporally untrimmed videos while retaining similar accuracy. Existing video classification methods sample frames at a predefined frequency over the entire video. In contrast, we propose an end-to-end deep reinforcement learning approach that enables an agent to classify a video by watching only a very small portion of its frames, much as humans do. We make two main contributions. First, information is not distributed uniformly over time in a video. An agent should watch more carefully when a clip is informative and skip frames that are redundant or irrelevant. The proposed approach enables the agent to adapt its sampling rate to the video content, skipping most frames without losing information. Second, the number of frames an agent must watch before reaching a confident decision varies greatly from one video to another. We incorporate an adaptive stop network that measures a confidence score and generates a timely trigger for the agent to stop watching, which improves efficiency without sacrificing accuracy. Our approach significantly reduces the computational cost on the large-scale YouTube-8M dataset while the accuracy remains the same.
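The two contributions above can be sketched as a single decision loop: a policy picks how many frames to skip based on the current frame, and a stop network decides when the prediction history is confident enough to halt. The sketch below is purely illustrative; `policy`, `stop_net`, and the majority-vote readout are hypothetical stand-ins, not the authors' actual networks.

```python
def classify_efficiently(frames, policy, stop_net, conf_threshold=0.9):
    """Hedged sketch of content-adaptive frame skipping with an adaptive stop.

    `policy`   : frame -> (predicted_label, skip), where `skip` is how many
                 upcoming frames to jump over (adapting the sampling rate).
    `stop_net` : list of predictions so far -> confidence score in [0, 1];
                 the agent stops as soon as this crosses `conf_threshold`.
    All of these names are assumptions for illustration.
    """
    history = []      # labels predicted for the frames actually watched
    watched = 0       # how many frames the agent looked at
    t = 0
    while t < len(frames):
        label, skip = policy(frames[t])          # watch one frame, choose a skip
        history.append(label)
        watched += 1
        if stop_net(history) >= conf_threshold:  # adaptive stop trigger
            break
        t += 1 + skip                            # jump over redundant frames
    # majority vote over watched frames stands in for the classifier head
    final = max(set(history), key=history.count)
    return final, watched
```

With a toy policy that always predicts the same label and skips 4 frames, and a stop network whose confidence grows with the number of observations, the agent halts after watching only a handful of a 100-frame video, which is the efficiency behavior the paper targets.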
Keywords:
Computer Vision: Video: Events, Activities and Surveillance
Machine Learning Applications: Applications of Reinforcement Learning