Predicting Human Interaction via Relative Attention Model

Yichao Yan, Bingbing Ni, Xiaokang Yang

Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 3245-3251. https://doi.org/10.24963/ijcai.2017/453

Predicting human interaction is challenging because the on-going activity must be inferred from a partially observed video. Essentially, a good algorithm should effectively model the mutual influence between the two interacting subjects. Moreover, only a small region of the scene is discriminative for identifying the on-going interaction. In this work, we propose a relative attention model to explicitly address these difficulties. Built on a tri-coupled deep recurrent structure representing both interacting subjects and the global interaction status, the proposed network collects spatio-temporal information from each subject, rectified with global interaction information, to yield an effective interaction representation. The network also incorporates an attention module that assigns higher importance to the regions relevant to the on-going action. Extensive experiments on two public datasets demonstrate that the proposed relative attention network successfully predicts informative regions between interacting subjects, which in turn yields superior human interaction prediction accuracy.
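To make the tri-coupled structure concrete, the following is a minimal sketch of one plausible realization: one recurrent stream per subject plus a global stream, with a soft attention over per-subject region features conditioned on the global hidden state. All module names, dimensions, and the LSTM/PyTorch framing are illustrative assumptions for exposition, not the authors' released implementation.

```python
# Hypothetical sketch of a tri-coupled recurrent model with relative attention.
# Assumes LSTM cells and soft spatial attention over per-subject region features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelativeAttentionSketch(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=256, num_classes=8):
        super().__init__()
        # One recurrent stream per interacting subject, plus one for the
        # global interaction status (the "tri-coupled" structure).
        self.subject_a = nn.LSTMCell(feat_dim, hidden_dim)
        self.subject_b = nn.LSTMCell(feat_dim, hidden_dim)
        self.global_rnn = nn.LSTMCell(2 * hidden_dim, hidden_dim)
        # Attention scores each spatial region, conditioned on the global state.
        self.attn = nn.Linear(feat_dim + hidden_dim, 1)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def attend(self, regions, g_h):
        # regions: (batch, num_regions, feat_dim); g_h: (batch, hidden_dim)
        g = g_h.unsqueeze(1).expand(-1, regions.size(1), -1)
        scores = self.attn(torch.cat([regions, g], dim=-1)).squeeze(-1)
        weights = F.softmax(scores, dim=-1)
        # Weighted sum keeps the regions deemed relevant to the interaction.
        return (weights.unsqueeze(-1) * regions).sum(dim=1)

    def forward(self, regions_a, regions_b):
        # regions_*: (batch, time, num_regions, feat_dim) per-subject features
        B, T = regions_a.shape[:2]
        H = self.global_rnn.hidden_size
        a = (regions_a.new_zeros(B, H), regions_a.new_zeros(B, H))
        b = (regions_a.new_zeros(B, H), regions_a.new_zeros(B, H))
        g = (regions_a.new_zeros(B, H), regions_a.new_zeros(B, H))
        for t in range(T):
            # Each subject's attention is rectified by the global hidden state.
            x_a = self.attend(regions_a[:, t], g[0])
            x_b = self.attend(regions_b[:, t], g[0])
            a = self.subject_a(x_a, a)
            b = self.subject_b(x_b, b)
            g = self.global_rnn(torch.cat([a[0], b[0]], dim=-1), g)
        # Predict the on-going interaction from the global state at the
        # last observed frame (prediction from a partially observed video).
        return self.classifier(g[0])
```

In this sketch, feeding only the first T observed frames and classifying from the final global state mirrors the prediction setting described in the abstract; the exact feature extractor, attention form, and training objective would follow the paper.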
Keywords:
Machine Learning: Feature Selection/Construction
Machine Learning: Neural Networks
Machine Learning: Deep Learning