Exploiting Images for Video Recognition with Hierarchical Generative Adversarial Networks

Feiwu Yu, Xinxiao Wu, Yuchao Sun, Lixin Duan

Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 1107-1113. https://doi.org/10.24963/ijcai.2018/154

Existing deep learning methods for video recognition usually require a large number of labeled videos for training. For a new task, however, videos are often unlabeled, and annotating them is time-consuming and labor-intensive. Instead of relying on human annotation, we make use of existing fully labeled images to help recognize those videos. However, due to domain shift and heterogeneous feature representations, the performance of classifiers trained on images may degrade dramatically on video recognition tasks. In this paper, we propose a novel method, called Hierarchical Generative Adversarial Networks (HiGAN), to enhance recognition in videos (i.e., the target domain) by transferring knowledge from images (i.e., the source domain). The HiGAN model consists of a low-level conditional GAN and a high-level conditional GAN. By taking advantage of this two-level adversarial learning, our method is capable of learning a domain-invariant feature representation of source images and target videos. Comprehensive experiments on two challenging video recognition datasets (i.e., UCF101 and HMDB51) demonstrate the effectiveness of the proposed method compared with existing state-of-the-art domain adaptation methods.
Keywords:
Machine Learning: Transfer, Adaptation, Multi-task Learning
Machine Learning: Unsupervised Learning
Computer Vision: Action Recognition
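
The two-level adversarial design described in the abstract can be pictured as a pair of generator/discriminator stages stacked on top of each other, with the high-level stage conditioned on the low-level output. The PyTorch sketch below is purely illustrative and not taken from the paper: the Generator and Discriminator modules, feature dimensions, loss terms, and training loop are all assumptions standing in for whatever the conditional GANs in HiGAN actually use. It shows only the generic pattern of aligning image (source) and video (target) features at two levels with adversarial losses.

    # Illustrative sketch of two-level adversarial feature alignment.
    # All module names, dimensions, and hyperparameters are assumptions,
    # not the architecture specified in the HiGAN paper.
    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        """Maps input features toward a shared (domain-invariant) space."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, 512), nn.ReLU(inplace=True),
                nn.Linear(512, out_dim),
            )
        def forward(self, x):
            return self.net(x)

    class Discriminator(nn.Module):
        """Predicts whether a feature comes from the source (image)
        or the target (video) domain."""
        def __init__(self, dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim, 256), nn.ReLU(inplace=True),
                nn.Linear(256, 1),
            )
        def forward(self, x):
            return self.net(x)

    # Two adversarial levels: the low level aligns pre-extracted image/video
    # features; the high level takes the low-level output as its condition
    # and refines the alignment (assumed feature sizes).
    g_low, g_high = Generator(4096, 1024), Generator(1024, 512)
    d_low, d_high = Discriminator(1024), Discriminator(512)

    bce = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(list(g_low.parameters()) + list(g_high.parameters()), lr=1e-4)
    opt_d = torch.optim.Adam(list(d_low.parameters()) + list(d_high.parameters()), lr=1e-4)

    img_feat = torch.randn(32, 4096)  # stand-in for labeled source image features
    vid_feat = torch.randn(32, 4096)  # stand-in for unlabeled target video features

    for step in range(100):
        # Discriminator step: separate source from target at both levels.
        with torch.no_grad():
            src_low, tgt_low = g_low(img_feat), g_low(vid_feat)
            src_high, tgt_high = g_high(src_low), g_high(tgt_low)
        d_loss = (bce(d_low(src_low), torch.ones(32, 1)) +
                  bce(d_low(tgt_low), torch.zeros(32, 1)) +
                  bce(d_high(src_high), torch.ones(32, 1)) +
                  bce(d_high(tgt_high), torch.zeros(32, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: fool both discriminators so target (video)
        # features become indistinguishable from source (image) features.
        tgt_low = g_low(vid_feat)
        tgt_high = g_high(tgt_low)
        g_loss = (bce(d_low(tgt_low), torch.ones(32, 1)) +
                  bce(d_high(tgt_high), torch.ones(32, 1)))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Once such a domain-invariant space is learned, a classifier trained on the labeled source image features can, in principle, be applied directly to the aligned target video features; the paper itself should be consulted for the actual conditional GAN formulation and training objectives.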