ESTIMATION OF VISUAL CONTENTS FROM HUMAN BRAIN SIGNALS VIA VQA BASED ON BRAIN-SPECIFIC ATTENTION

Ryo Shichida (Hokkaido University); Ren Togo (Hokkaido University); Keisuke Maeda (Hokkaido University); Takahiro Ogawa (Hokkaido University); Miki Haseyama (Hokkaido University)

  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
06 Jun 2023

This paper presents a method for estimating visual cognitive contents from human brain signals via a newly derived visual question answering (VQA) model. The proposed method estimates a wide range of cognitive contents from functional magnetic resonance imaging (fMRI) data recorded while subjects viewed images, using a VQA model that effectively exploits both low-level and high-level image features extracted from the viewed images. To make use of these multiple image features, we newly introduce brain-specific attention into the VQA model. The brain-specific attention adaptively determines the significance of the image features at each level depending on the complexity of the cognitive contents (e.g., category, pattern, number, and color), enabling flexible and extensive estimation of various cognitive contents. Experimental results show that the proposed method significantly improves the performance of visual content estimation.
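The abstract describes an attention mechanism that adaptively weights image features at multiple levels according to a brain-signal-derived representation. The paper's actual architecture is not given here, so the following is only a minimal illustrative sketch of that general idea using dot-product attention; all names (`brain_specific_attention`, `brain_query`, `level_features`) are assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def brain_specific_attention(brain_query, level_features):
    """Illustrative sketch (not the paper's method): weight per-level
    image features by their dot-product relevance to a query vector
    derived from brain signals, then fuse them into one representation.

    brain_query    : (d,) query vector (assumed derived from fMRI data)
    level_features : list of (d,) feature vectors, one per level
                     (e.g., low-level to high-level image features)
    """
    # Score each feature level against the query (scaled dot product).
    scores = np.array([f @ brain_query for f in level_features])
    weights = softmax(scores / np.sqrt(len(brain_query)))
    # Fuse the levels into a single attention-weighted representation.
    fused = sum(w * f for w, f in zip(weights, level_features))
    return fused, weights
```

In this sketch, a query emphasizing a simple content (e.g., color) could place most of its weight on low-level features, while a query about category could favor high-level features, which is the adaptive behavior the abstract attributes to brain-specific attention.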
