Visual Similarity Attention
Meng Zheng, Srikrishna Karanam, Terrence Chen, Richard J. Radke, Ziyan Wu

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 1728-1735. https://doi.org/10.24963/ijcai.2022/241

While there has been substantial progress in learning suitable distance metrics, these techniques generally lack transparency and decision reasoning, i.e., they cannot explain why the input set of images is similar or dissimilar. In this work, we solve this key problem by proposing the first method to generate generic visual similarity explanations with gradient-based attention. We demonstrate that our technique is agnostic to the specific similarity model type, e.g., we show applicability to Siamese, triplet, and quadruplet models. Furthermore, we make our proposed similarity attention a principled part of the learning process, resulting in a new paradigm for learning similarity functions. We demonstrate that our learning mechanism results in more generalizable, as well as explainable, similarity models. Finally, we demonstrate the generality of our framework by means of experiments on a variety of tasks, including image retrieval, person re-identification, and low-shot semantic segmentation.
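To illustrate the core idea of gradient-based similarity attention, the following is a minimal NumPy sketch, not the authors' implementation: it assumes a toy setup where embeddings come from global average pooling of convolutional feature maps and similarity is a simple dot product, so the gradient of the similarity score can be written analytically and used to weight feature channels, Grad-CAM style.

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 8, 7, 7  # hypothetical feature-map dimensions
A1 = rng.standard_normal((C, H, W))  # feature maps of image 1 (stand-in for a CNN layer)
A2 = rng.standard_normal((C, H, W))  # feature maps of image 2

# Global-average-pooled embeddings and their dot-product similarity.
e1, e2 = A1.mean(axis=(1, 2)), A2.mean(axis=(1, 2))
s = float(e1 @ e2)  # similarity score to be explained

# For this simple model, ds / dA1[c, h, w] = e2[c] / (H * W), so the
# spatially averaged gradient per channel (the Grad-CAM weight) is:
alpha = e2 / (H * W)

# Similarity attention map: channel-weighted sum of feature maps, then ReLU,
# highlighting regions of image 1 that drive the similarity score.
attn = np.maximum((alpha[:, None, None] * A1).sum(axis=0), 0.0)
```

In a real similarity model the gradients would be obtained by backpropagating the similarity score through the network rather than in closed form, but the channel-weighting and rectification steps are the same.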
Keywords:
Computer Vision: Interpretability and Transparency
Computer Vision: Image and Video Retrieval
Computer Vision: Representation Learning
Computer Vision: Segmentation
Computer Vision: Transfer, low-shot, semi- and unsupervised learning