A DNN-based hearing-aid strategy for real-time processing: One size fits all
Fotios Drakopoulos (Ghent University); Arthur Van Den Broucke (Ghent University); Sarah Verhulst (Ghent University)
Although hearing aids (HAs) can compensate for elevated hearing thresholds using sound amplification, they often fail to restore auditory perception in adverse listening conditions. To achieve robust treatment outcomes for diverse HA users, we use a differentiable framework that can compensate for impaired auditory processing based on a biophysically realistic and personalisable auditory model. Here, we present a deep-neural-network (DNN) HA processing strategy that provides individualised sound processing based on a listener's audiogram, using a single model architecture. The DNN architecture was trained to compensate for different audiogram inputs and was able to enhance simulated responses and intelligibility even for audiograms that were not included in training. Our multi-purpose HA model can be applied to different individuals and processes 3.2 ms audio inputs in under 0.5 ms, paving the way for precise DNN-based treatments of hearing loss that can be embedded in hearing devices.
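The abstract gives no implementation details, but the core idea of a single network that maps short audio frames to processed frames while being conditioned on the listener's audiogram can be sketched as follows. This is a minimal illustrative PyTorch sketch, not the authors' architecture: the module name, layer sizes, FiLM-style conditioning, the eight-frequency audiogram vector, and the assumed 20 kHz sampling rate (so that a 3.2 ms frame is 64 samples) are all assumptions introduced here for illustration.

```python
import torch
import torch.nn as nn


class AudiogramConditionedHA(nn.Module):
    """Hypothetical sketch: a small 1-D CNN that maps a short audio frame to a
    processed frame, conditioned on the listener's audiogram (hearing thresholds
    at a fixed set of frequencies). Illustrative only, not the paper's model."""

    def __init__(self, frame_len=64, n_audiogram=8, channels=64):
        super().__init__()
        # Embed the audiogram into per-channel scale/shift vectors (FiLM-style conditioning).
        self.cond = nn.Sequential(
            nn.Linear(n_audiogram, channels),
            nn.Tanh(),
            nn.Linear(channels, 2 * channels),
        )
        self.encoder = nn.Conv1d(1, channels, kernel_size=9, padding=4)
        self.hidden = nn.Conv1d(channels, channels, kernel_size=9, padding=4)
        self.decoder = nn.Conv1d(channels, 1, kernel_size=9, padding=4)

    def forward(self, frame, audiogram):
        # frame: (batch, 1, frame_len) audio samples; audiogram: (batch, n_audiogram) thresholds in dB HL
        scale, shift = self.cond(audiogram).chunk(2, dim=-1)
        x = torch.tanh(self.encoder(frame))
        # Condition the hidden activations on the audiogram embedding.
        x = x * scale.unsqueeze(-1) + shift.unsqueeze(-1)
        x = torch.tanh(self.hidden(x))
        return self.decoder(x)


# Example: one 3.2 ms frame at an assumed 20 kHz sampling rate (64 samples).
model = AudiogramConditionedHA()
frame = torch.randn(1, 1, 64)
audiogram = torch.tensor([[10., 10., 15., 20., 30., 40., 50., 60.]])  # dB HL, illustrative values
processed = model(frame, audiogram)
print(processed.shape)  # torch.Size([1, 1, 64])
```

Conditioning a single network on the audiogram, rather than training one model per listener, is what would allow one architecture to serve different individuals and to generalise to audiograms unseen during training, as the abstract describes.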