Visual Encoding and Decoding of the Human Brain Based on Shared Features
Chao Li, Baolin Liu, Jianguo Wei

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 738-744. https://doi.org/10.24963/ijcai.2020/103

Using a convolutional neural network to build visual encoding and decoding models of the human brain is a good starting point for studying the relationship between deep learning and the human visual cognitive mechanism. However, related studies have not fully considered the differences between the two. In this paper, we assume that only a portion of the neural network's features is directly related to human brain signals; we call these shared features. In the encoding process, we extract shared features from the lower and higher layers of the neural network and then build a non-negative sparse map to predict brain activity. In the decoding process, we use back-propagation to reconstruct visual stimuli, and we use dictionary learning and a deep image prior to improve the robustness and accuracy of the algorithm. Experiments on a public fMRI dataset confirm the rationality of the encoding models, and compared with a recently proposed method, our reconstruction results achieve significantly higher accuracy.
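The non-negative sparse map in the encoding step can be illustrated with a minimal sketch. The snippet below fits, for synthetic data, a non-negative sparse linear map from CNN-layer features to voxel responses via projected gradient descent on an L1-penalized least-squares objective. This is a generic stand-in for the paper's encoding model, not the authors' exact solver; all dimensions and data here are invented for illustration.

```python
import numpy as np

def nonneg_sparse_fit(X, Y, alpha=0.01, n_iter=500):
    """Fit a non-negative sparse map W (features -> voxels) by projected
    gradient descent on 0.5/n * ||XW - Y||^2 + alpha * sum(W), s.t. W >= 0.
    (Illustrative solver, not the one used in the paper.)"""
    n, d = X.shape
    v = Y.shape[1]
    W = np.zeros((d, v))
    # Step size 1/L, where L = ||X||_2^2 / n is the Lipschitz constant
    # of the smooth part of the objective.
    lr = n / np.linalg.norm(X, 2) ** 2
    for _ in range(n_iter):
        grad = X.T @ (X @ W - Y) / n + alpha
        W = np.maximum(0.0, W - lr * grad)  # project onto W >= 0
    return W

# Synthetic example: n stimuli, d CNN features, v voxels (all hypothetical).
rng = np.random.default_rng(0)
n, d, v = 200, 40, 5
X = rng.standard_normal((n, d))                       # CNN feature activations
W_true = np.maximum(rng.standard_normal((d, v)), 0.0)  # non-negative ground truth
W_true[rng.random((d, v)) < 0.7] = 0.0                 # make it sparse
Y = X @ W_true + 0.01 * rng.standard_normal((n, v))    # simulated voxel responses

W = nonneg_sparse_fit(X, Y)
pred = X @ W
r = np.mean([np.corrcoef(pred[:, j], Y[:, j])[0, 1] for j in range(v)])
print(f"mean voxel-wise correlation: {r:.3f}")
```

In the paper's setting, `X` would hold shared features extracted from lower and higher CNN layers, and `Y` would hold measured fMRI voxel activity; the non-negativity and sparsity constraints keep only features positively contributing to each voxel's response.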
Keywords:
Computer Vision: Biomedical Image Understanding
Humans and AI: Cognitive Modeling
Humans and AI: Human-Computer Interaction
Machine Learning: Deep Learning: Convolutional networks