USING MACHINE LEARNING TO UNDERSTAND THE RELATIONSHIPS BETWEEN AUDIOMETRIC DATA, SPEECH PERCEPTION, TEMPORAL PROCESSING, AND COGNITION
Rana Khalil (University of Maryland - College Park); Alexandra Papanicolaou (University of Maryland - College Park); Renee Chou (University of Maryland - College Park); Bobby Gibbs (University of Maryland - College Park); Samira B Anderson (University of Maryland); Sandra Gordon-Salant (University of Maryland - College Park); Michael Cummings (University of Maryland - College Park); Matthew J. Goupell (University of Maryland - College Park)
SPS
Aging and hearing loss cause communication difficulties, particularly for speech perception in demanding listening situations, and these difficulties have been associated with factors including cognitive processing and extended high-frequency (>8 kHz) hearing. Quantifying such associations, and discovering other (possibly unintuitive) ones, is well suited to machine learning. We constructed ensemble models for 443 participants who varied in age and degree of hearing loss, using audiometric, perceptual, electrophysiological, and cognitive data, along with new across-frequency threshold composite variables, to predict measured performance and self-reported difficulties. Speech perception was best predicted by variables associated with audiometric thresholds between 1 and 4 kHz, followed by basic temporal processing ability. Cognitive factors and extended high-frequency thresholds had little to no ability to predict speech perception. Such associations, or their absence, will inform the field as we attempt to better understand the intertwined effects of speech perception, aging, hearing loss, and cognition.
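The modeling approach described above, fitting an ensemble of predictors and ranking variables by how much shuffling each one degrades prediction, can be sketched in miniature. This is a hedged illustration only: the variable names, effect sizes, and synthetic data below are assumptions chosen to mimic the abstract's reported pattern (1–4 kHz thresholds dominate, temporal processing contributes, cognition and extended high-frequency thresholds contribute little), and the bagged least-squares ensemble stands in for whatever ensemble method the study actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the study's predictor categories (all names and
# effect sizes are illustrative assumptions, not the paper's data).
n = 443
pta_1_4k = rng.normal(30, 15, n)   # 1-4 kHz threshold composite (dB HL)
temporal = rng.normal(0, 1, n)     # basic temporal-processing score
cognitive = rng.normal(0, 1, n)    # cognitive (e.g., working-memory) score
ehf = rng.normal(50, 20, n)        # extended high-frequency thresholds

# Simulated speech-perception outcome: dominated by 1-4 kHz thresholds,
# with a smaller temporal contribution, mirroring the abstract's findings.
y = -0.8 * pta_1_4k + 3.0 * temporal + rng.normal(0, 5, n)

X = np.column_stack([pta_1_4k, temporal, cognitive, ehf])
names = ["pta_1_4k", "temporal", "cognitive", "ehf"]

def fit_bagged(X, y, n_models=50):
    """Bagged ordinary-least-squares models: a minimal ensemble sketch."""
    coefs = []
    for _ in range(n_models):
        idx = rng.integers(0, len(y), len(y))          # bootstrap resample
        Xb = np.column_stack([np.ones(len(idx)), X[idx]])
        beta, *_ = np.linalg.lstsq(Xb, y[idx], rcond=None)
        coefs.append(beta)
    return np.mean(coefs, axis=0)                      # averaged ensemble

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

def permutation_importance(beta, X, y, col):
    """Increase in mean squared error when one predictor is shuffled."""
    base = np.mean((y - predict(beta, X)) ** 2)
    Xp = X.copy()
    Xp[:, col] = rng.permutation(Xp[:, col])
    return np.mean((y - predict(beta, Xp)) ** 2) - base

beta = fit_bagged(X, y)
importances = {name: permutation_importance(beta, X, y, j)
               for j, name in enumerate(names)}
```

Under this construction, the permutation importance of the 1–4 kHz composite far exceeds that of the cognitive and extended high-frequency variables, reproducing the qualitative ranking the abstract reports; the real study's importance measure and model family may of course differ.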