
Decoding musical pitch from human brain activity with automatic voxel-wise whole-brain fMRI feature selection

Vincent K.M. Cheung (Sony Computer Science Laboratories, Inc.); Yueh-Po Peng (Institute of Information Science, Academia Sinica); Jing-Hua Lin (Academia Sinica); Li Su (Academia Sinica)

06 Jun 2023

Decoding models seek to infer stimulus or task information from neural activity and play a central role in brain-computer interfaces. However, the high spatial resolution of fMRI means that the number of available features far exceeds the number of trials in a typical experiment. Although a common approach is to restrict features to a priori-defined regions of interest, relevant information present in other brain regions is consequently omitted. Here, we propose a two-stage thresholding approach that automatically pools relevant voxels from the whole brain to enhance decoding performance. Testing on an fMRI dataset of 20 subjects, we show that our approach improves regression performance in decoding musical pitch two-fold compared to restricting voxels to the auditory cortex. We further examine properties of the selected voxels, and compare performance between random forest and convolutional neural network decoders.
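As a rough illustration of how whole-brain voxel pooling via two-stage thresholding might be set up, the sketch below scores every voxel against the pitch target, applies a liberal first threshold, and then keeps only voxels that pass a second, stability-based criterion before fitting a random forest decoder. The specific criteria (absolute Pearson correlation and selection stability across folds), the synthetic data, and all parameter values are assumptions for illustration, not the authors' exact procedure.

```python
# Minimal sketch: two-stage voxel thresholding + random forest pitch decoder.
# Thresholding criteria and data are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Synthetic stand-in for whole-brain fMRI features: n_trials x n_voxels,
# with a continuous pitch value (Hz) per trial and a few informative voxels.
n_trials, n_voxels = 240, 50_000
X = rng.standard_normal((n_trials, n_voxels))
pitch = rng.uniform(110.0, 880.0, size=n_trials)
X[:, :100] += 0.02 * (pitch[:, None] - pitch.mean())

def correlation_scores(X, y):
    """Absolute Pearson correlation between each voxel and the target."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    num = Xc.T @ yc
    den = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
    return np.abs(num / den)

# Stage 1: keep voxels whose correlation exceeds a liberal threshold.
scores = correlation_scores(X, pitch)
stage1 = scores > np.quantile(scores, 0.99)   # top 1% of voxels

# Stage 2: keep only voxels that also survive the same threshold in most
# cross-validation folds (a simple stability criterion).
votes = np.zeros(n_voxels)
for train_idx, _ in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    s = correlation_scores(X[train_idx], pitch[train_idx])
    votes += s > np.quantile(s, 0.99)
selected = stage1 & (votes >= 4)
print(f"selected {selected.sum()} of {n_voxels} voxels")

# Decode pitch from the pooled voxels with a random forest regressor.
# In practice, voxel selection should be nested inside the training folds
# to avoid leaking test information into the feature set.
decoder = RandomForestRegressor(n_estimators=200, random_state=0)
decoder.fit(X[:, selected], pitch)
```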
