I²R-Net: Intra- and Inter-Human Relation Network for Multi-Person Pose Estimation

Yiwei Ding, Wenjin Deng, Yinglin Zheng, Pengfei Liu, Meihong Wang, Xuan Cheng, Jianmin Bao, Dong Chen, Ming Zeng

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 855-862. https://doi.org/10.24963/ijcai.2022/120

In this paper, we present the Intra- and Inter-Human Relation Network (I²R-Net) for Multi-Person Pose Estimation. It consists of two basic modules. First, the Intra-Human Relation Module operates on a single person and aims to capture Intra-Human dependencies. Second, the Inter-Human Relation Module considers the relations between multiple instances and focuses on capturing Inter-Human interactions. The Inter-Human Relation Module can be made very lightweight by reducing the resolution of the feature maps, yet it learns useful relation information that significantly boosts the performance of the Intra-Human Relation Module. Even without bells and whistles, our method can compete with or outperform current competition winners. We conduct extensive experiments on the COCO, CrowdPose, and OCHuman datasets. The results demonstrate that the proposed model surpasses all state-of-the-art methods. Concretely, the proposed method achieves 77.4% AP on the CrowdPose dataset and 67.8% AP on the OCHuman dataset, respectively, outperforming existing methods by a large margin. Additionally, ablation studies and visualization analyses further demonstrate the effectiveness of our model.
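To make the two-module design concrete, below is a minimal, hypothetical PyTorch sketch of the idea described in the abstract: self-attention within each person's feature tokens (Intra-Human), followed by a lightweight attention across person instances on pooled, low-resolution features (Inter-Human). All class names, tensor shapes, and hyperparameters here are illustrative assumptions, not the authors' released implementation.

    # Hypothetical sketch of the intra/inter relation modules (not the official code).
    import torch
    import torch.nn as nn

    class IntraHumanRelation(nn.Module):
        """Self-attention over the feature tokens of a single person."""
        def __init__(self, dim=256, heads=8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, tokens):  # tokens: (persons, tokens_per_person, dim)
            out, _ = self.attn(tokens, tokens, tokens)
            return self.norm(tokens + out)

    class InterHumanRelation(nn.Module):
        """Lightweight attention across person instances on pooled features."""
        def __init__(self, dim=256, heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, tokens):  # tokens: (persons, tokens_per_person, dim)
            pooled = tokens.mean(dim=1, keepdim=True)   # reduce resolution: one token per person
            persons = pooled.transpose(0, 1)            # (1, persons, dim): attend across persons
            out, _ = self.attn(persons, persons, persons)
            context = self.norm(persons + out).transpose(0, 1)  # back to (persons, 1, dim)
            return tokens + context                     # broadcast inter-human context to every token

    if __name__ == "__main__":
        persons, tokens_per_person, dim = 3, 64, 256    # e.g. 3 people, 8x8 feature patches each
        feats = torch.randn(persons, tokens_per_person, dim)
        feats = IntraHumanRelation(dim)(feats)
        feats = InterHumanRelation(dim)(feats)
        print(feats.shape)                              # torch.Size([3, 64, 256])

The design choice illustrated here is the one highlighted in the abstract: the inter-human stage operates on heavily downsampled per-person features, so it adds little cost while injecting cross-instance context back into the full-resolution intra-human branch.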
Keywords:
Computer Vision: Biometrics, Face, Gesture and Pose Recognition
Computer Vision: Action and Behaviour Recognition
Computer Vision: Recognition (object detection, categorization)