Learning the Compositional Visual Coherence for Complementary Recommendations

Zhi Li, Bo Wu, Qi Liu, Likang Wu, Hongke Zhao, Tao Mei

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 3536-3543. https://doi.org/10.24963/ijcai.2020/489

Complementary recommendations, which aim to provide users with product suggestions that are supplementary to and compatible with their purchased items, have become a hot topic in both academia and industry in recent years. Existing work has mainly focused on modeling the co-purchase relations between two items, while the compositional associations of item collections remain largely unexplored. In fact, when a user chooses complementary items for purchased products, it is intuitive that she will consider visual semantic coherence (such as color collocations and texture compatibilities) in addition to global impressions. Towards this end, in this paper, we propose a novel Content Attentive Neural Network (CANN) to model comprehensive compositional coherence on both global contents and semantic contents. Specifically, we first propose a Global Coherence Learning (GCL) module based on multi-head attention to model global compositional coherence. Then, we generate semantic-focal representations from different semantic regions and design a Focal Coherence Learning (FCL) module to learn focal compositional coherence from these semantic-focal representations. Finally, we optimize CANN with a novel compositional optimization strategy. Extensive experiments on large-scale real-world data clearly demonstrate the effectiveness of CANN compared with several state-of-the-art methods.
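To make the idea of attention-based compositional coherence concrete, the sketch below shows a toy multi-head self-attention pass over a small collection of item feature vectors, pooled into a single "coherence" representation. This is an illustrative sketch only, not the paper's CANN: the random projection matrices stand in for learned weights, and all names (`multi_head_self_attention`, `coherence`) are hypothetical.

```python
import numpy as np

def multi_head_self_attention(items, num_heads=4, seed=0):
    """Toy multi-head self-attention over a set of item embeddings.

    items: (n, d) array of visual feature vectors, with d divisible
    by num_heads. Returns (n, d) context-aware item representations.
    Random projections stand in for learned W_Q, W_K, W_V weights.
    """
    n, d = items.shape
    assert d % num_heads == 0
    dh = d // num_heads
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = items @ Wq, items @ Wk, items @ Wv
    heads = []
    for h in range(num_heads):
        s = slice(h * dh, (h + 1) * dh)
        # Pairwise compatibility scores between items in the collection.
        scores = Q[:, s] @ K[:, s].T / np.sqrt(dh)
        # Softmax over the other items (numerically stabilized).
        attn = np.exp(scores - scores.max(axis=1, keepdims=True))
        attn /= attn.sum(axis=1, keepdims=True)
        heads.append(attn @ V[:, s])          # (n, dh) per head
    return np.concatenate(heads, axis=1)      # (n, d)

# Example: a collection of 3 items with 8-dim visual features.
items = np.random.default_rng(1).standard_normal((3, 8))
out = multi_head_self_attention(items, num_heads=2)
# Mean-pooling yields one compositional representation for the collection.
coherence = out.mean(axis=0)
```

In a trained model the projections would be learned end-to-end and the pooled vector fed to a scoring layer; here mean-pooling simply illustrates how per-item attention outputs collapse into a collection-level representation.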
Keywords:
Multidisciplinary Topics and Applications: Recommender Systems
Data Mining: Mining Text, Web, Social Media
Humans and AI: Personalization and User Modeling
Machine Learning: Deep Learning