Characterizing Similarity of Visual Stimulus from Associated Neuronal Response

Vikram Ravindra, Ananth Grama

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 608-614. https://doi.org/10.24963/ijcai.2020/85

The problem of characterizing brain functions such as memory, perception, and processing of stimuli has received significant attention in the neuroscience literature. These experiments rely on carefully calibrated, albeit complex, inputs to record brain responses to signals. A major problem in analyzing brain responses to common stimuli such as audio-visual input from videos (e.g., movies) or story narration through audio books is that the observed neuronal responses arise from combinations of "pure" factors, many of which may be latent. In this paper, we present a novel methodological framework for deconvolving the brain's response to mixed stimuli into its constituent responses to underlying pure factors. This framework, based on archetypal analysis, is applied to the analysis of imaging data from an adult cohort watching the BBC show Sherlock. By focusing on the visual stimulus, we show strong correlation between the deconvolved responses and third-party textual video annotations, demonstrating the significant power of our analysis techniques. Building on these results, we show that our techniques can be used to predict neuronal responses in new subjects (how other individuals react to Sherlock), as well as to new visual content (how individuals react to other videos with known annotations). This paper reports on the first study that relates video features to neuronal responses in a rigorous algorithmic and statistical framework based on deconvolution of observed mixed imaging signals using archetypal analysis.
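
At the core of the framework is archetypal analysis, which represents each observed response as a convex combination of a small number of archetypes ("pure" responses), where the archetypes are themselves convex combinations of the observations. The sketch below illustrates this decomposition with a simple projected-gradient solver in NumPy; the variable names, dimensions, and optimization scheme are illustrative assumptions, not the authors' implementation or the exact solver used in the paper.

```python
# Minimal sketch of archetypal analysis (Cutler & Breiman style factorization).
# Illustrative only: the projected-gradient solver and all parameters below are
# assumptions, not the method as implemented by the paper's authors.
import numpy as np

def project_simplex(V):
    """Project each row of V onto the probability simplex."""
    n, k = V.shape
    U = np.sort(V, axis=1)[:, ::-1]
    css = np.cumsum(U, axis=1)
    rho = np.sum(U * np.arange(1, k + 1) > (css - 1.0), axis=1)
    theta = (css[np.arange(n), rho - 1] - 1.0) / rho
    return np.maximum(V - theta[:, None], 0.0)

def archetypal_analysis(X, k, n_iter=300, lr=1e-3, seed=0):
    """Approximate X (n_samples x n_features) as A @ B @ X.

    Rows of Z = B @ X are the k archetypes ("pure" responses); each row of A
    gives an observation's convex mixture weights over those archetypes.
    Both A and B are kept row-stochastic via simplex projection.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    A = project_simplex(rng.random((n, k)))
    B = project_simplex(rng.random((k, n)))
    for _ in range(n_iter):
        Z = B @ X
        R = A @ Z - X                                 # reconstruction residual
        A = project_simplex(A - lr * R @ Z.T)         # gradient step in mixture weights
        B = project_simplex(B - lr * A.T @ R @ X.T)   # gradient step in archetype weights
    return A, B @ X

# Toy usage: deconvolve 500 synthetic "responses" into 4 archetypes.
X = np.random.default_rng(1).random((500, 30))
A, Z = archetypal_analysis(X, k=4)
print(A.shape, Z.shape)  # (500, 4) (4, 30)
```

In this reading, the rows of Z play the role of the latent pure responses, and the mixture weights in A are what get correlated with external video annotations; the actual study applies this idea to fMRI time series rather than synthetic data.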
Keywords:
Computer Vision: Biomedical Image Understanding
Machine Learning: Feature Selection; Learning Sparse Models
Humans and AI: Brain Sciences