Estimation of Visual Features of Viewed Image From Individual and Shared Brain Information Based on fMRI Data Using Probabilistic Generative Model
Takaaki Higashi, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
This paper presents a method for estimating visual features from brain responses measured while subjects view images. The proposed method estimates the visual features of viewed images by using both individual and shared brain information obtained from functional magnetic resonance imaging (fMRI) data. To extract an effective latent space shared by multiple subjects from high-dimensional fMRI data, a probabilistic generative model that provides a prior distribution over this space is introduced into the proposed method. The same model also makes it feasible to extract a noise-robust feature space for the individual information. This is the first contribution of our method. Furthermore, the proposed method constructs a decoder that transforms brain information into visual features through the collaborative use of the estimated spaces for individual and shared brain information. This is the second contribution of our method. Experimental results show that the proposed method improves the estimation accuracy of the visual features of viewed images.
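To illustrate the overall pipeline described above, the following is a minimal, hypothetical sketch: a probabilistic generative model (factor analysis is used here as a simple stand-in, not the authors' actual model) extracts a shared latent space across subjects and an individual latent space per subject from synthetic fMRI-like data, and a ridge-regression decoder maps the combined latent codes to visual features. All dimensions, the synthetic data, and the choice of models are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's setup).
n_subjects, n_stimuli, n_voxels, d_visual = 3, 200, 500, 64
d_shared, d_individual = 20, 10

# Synthetic stand-ins: fMRI responses per subject and visual features of the
# viewed images (e.g., CNN features in a real setting).
fmri = [rng.standard_normal((n_stimuli, n_voxels)) for _ in range(n_subjects)]
visual_features = rng.standard_normal((n_stimuli, d_visual))

# Shared brain information: fit one probabilistic generative model (factor
# analysis) on the voxels concatenated across subjects, so the latent codes
# are tied to the stimuli and shared by all subjects.
shared_fa = FactorAnalysis(n_components=d_shared, random_state=0)
z_shared = shared_fa.fit_transform(np.concatenate(fmri, axis=1))

# Individual brain information: one generative model per subject, giving
# low-dimensional, noise-robust codes of each subject's own responses.
z_individual = [
    FactorAnalysis(n_components=d_individual, random_state=0).fit_transform(x)
    for x in fmri
]

# Decoder: collaboratively use shared and individual latent codes to estimate
# the visual features of the viewed images.
decoder_input = np.concatenate([z_shared] + z_individual, axis=1)
decoder = Ridge(alpha=1.0)
decoder.fit(decoder_input[:150], visual_features[:150])   # train split
predicted = decoder.predict(decoder_input[150:])          # test split

print("predicted visual features:", predicted.shape)      # (50, 64)
```

In this sketch, ridge regression plays the role of the decoder and factor analysis provides the probabilistic prior over each latent space; the paper's actual generative model and decoder may differ.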
Chairs:
Toshihisa Tanaka