04 May 2020

Electroencephalography (EEG) data can be used to decode an attended speech source in normal-hearing (NH) listeners. One application of this technology consists of identifying the target speaker in a cocktail-party-like scenario and activating speech enhancement algorithms in cochlear implants (CIs). It has been shown that selective attention can be decoded in CI users, although the poorer spectral resolution and the electrical artifacts introduced by the CI reduce the accuracy of linear decoders compared to NH subjects. The goal of this work was to investigate the use of non-linear models based on deep neural networks (DNNs) to improve selective attention decoding accuracy in CI users. The hypothesis is that a non-linear decoder may be better able to separate the electrical artifact from the neural responses. Results confirm the feasibility of decoding selective attention from single-trial, high-density EEG data in both NH listeners and CI users. Moreover, we show that a simple DNN architecture that directly classifies the locus of attention from the EEG and a mixture of the incoming speech-stream envelopes can be used to decode selective attention in CI users.
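
The following is a minimal, hypothetical sketch of the kind of DNN classifier described above: it maps a single-trial EEG segment, stacked with the mixture of the speech-stream envelopes, to a binary attended-locus decision. The channel count, sampling rate, window length, and layer sizes are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

# Assumed preprocessing parameters (not from the paper)
N_EEG_CHANNELS = 96     # high-density EEG montage
N_ENV_CHANNELS = 1      # mixture of the two speech-stream envelopes
FS = 64                 # sampling rate after downsampling (Hz)
WINDOW_S = 2            # decision-window length (s)
N_SAMPLES = FS * WINDOW_S


class AttentionLocusDNN(nn.Module):
    """Feed-forward classifier: EEG + mixed envelope -> attended locus (left/right)."""

    def __init__(self, n_inputs=(N_EEG_CHANNELS + N_ENV_CHANNELS) * N_SAMPLES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_inputs, 128),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(128, 2),  # two classes: attend left / attend right
        )

    def forward(self, x):
        # x: (batch, channels, time) with EEG and envelope stacked along the channel axis
        return self.net(x)


# Example forward pass on a dummy single-trial segment
model = AttentionLocusDNN()
trial = torch.randn(1, N_EEG_CHANNELS + N_ENV_CHANNELS, N_SAMPLES)
logits = model(trial)                  # shape (1, 2)
predicted_locus = logits.argmax(dim=1)  # 0 = left, 1 = right (assumed labeling)
```

Such a direct end-to-end classification avoids explicitly reconstructing the attended envelope, which is one plausible way a non-linear model could be less sensitive to CI electrical artifacts than a linear stimulus-reconstruction decoder.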
