  • SPS Members: $10.00
  • IEEE Members: $22.00
  • Non-members: $30.00

About this Bundle

Tutorial Bundle: Hearables: Real World Applications of Interpretable AI for eHealth (Parts 1-2), ICASSP 2024

The Hearables paradigm, that is, in-ear sensing of neural function and vital signs, is an emerging solution for 24/7 discreet health monitoring. The tutorial starts by introducing our own Hearables device, which is based on an earplug with embedded electrodes and optical, acoustic, mechanical and temperature sensors. We show how such a miniaturised embedded system can be used to reliably measure the Electroencephalogram (EEG), Electrocardiogram (ECG), Photoplethysmography (PPG), respiration, temperature, blood oxygen levels, and behavioural cues. Unlike standard wearables, such an inconspicuous Hearables earpiece benefits from the relatively stable position of the ear canal with respect to vital organs to operate robustly during daily activities. However, this comes at the cost of weaker signal levels and exposure to noise, which opens novel avenues of research in Machine Intelligence for eHealth, with numerous challenges and opportunities for algorithmic solutions. We describe how our Hearables sensor can be used, inter alia, for the following applications:

  • Automatic sleep scoring based on in-ear EEG, as sleep disorders underlie a wide range of health problems, from endocrinology through to depression and dementia.
  • Screening for chronic obstructive pulmonary disease (COPD) based on in-ear PPG, in the battle against the third leading cause of death worldwide, with an emphasis on developing countries that often lack access to hospital-based examinations.
  • Continuous 24/7 ECG from a headphone with the ear-ECG, as cardiac diseases are the most common cause of death yet often remain undetected, since until the emergence of Hearables it was only possible to record the ECG in a clinic and not in the community.

For Hearables to provide a paradigm shift in eHealth, they require domain-aware Machine Intelligence to detect, estimate, and classify the notoriously weak physiological signals from the ear canal. To this end, the second part of our tutorial focuses on interpretable AI. This is achieved through a first-principles matched-filtering explanation of convolutional neural networks (CNNs), introduced by us. We revisit the operation of CNNs and show that their key component, the convolutional layer, effectively performs matched filtering of its inputs with a set of templates (filters, kernels) of interest. This serves as a vehicle to establish a compact matched-filtering perspective of the whole convolution-activation-pooling chain, which allows for a theoretically well-founded and physically meaningful insight into the overall operation of CNNs. This is shown to help mitigate the interpretability and explainability issues of CNNs, together with providing intuition for further developments and novel, physically meaningful ways of their initialisation. Interpretable networks are pivotal in the integration of AI into medicine, as they dispel the black-box nature of deep learning and allow clinicians to make informed decisions based on network outputs. We demonstrate this in the context of Hearables by expanding on the following key findings:

  • We argue from first principles that convolutional neural networks operate as matched filters.
  • Through this lens, we further examine network weights, activation functions and pooling operations.
  • We detail the construction of a fully interpretable convolutional neural network designed for R-peak detection, demonstrating its operation as a matched filter and analysing the convergence of its filter weights to an ECG pattern; a minimal numerical sketch of this matched-filtering view follows the list below.
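As an illustration of the matched-filtering view, the sketch below cross-correlates a synthetic signal with an R-peak-like template, followed by a ReLU and a local-maximum search that play the roles of the activation and pooling stages. The template shape, noise level and threshold are illustrative assumptions, not the tutorial's actual network or ECG data.

```python
import numpy as np

# A unit-energy, zero-mean "R-peak like" template, standing in for a learned kernel
t = np.linspace(-1.0, 1.0, 21)
template = np.exp(-(t / 0.25) ** 2)
template -= template.mean()
template /= np.linalg.norm(template)

# A noisy synthetic signal containing the template at known locations
rng = np.random.default_rng(0)
signal = 0.3 * rng.standard_normal(1000)
true_peaks = [120, 370, 640, 880]
for p in true_peaks:
    signal[p - 10 : p + 11] += 3.0 * template

# A convolutional layer with this kernel computes exactly this cross-correlation,
# i.e. it matched-filters the input against the template
response = np.correlate(signal, template, mode="same")

# ReLU keeps only in-phase (positive) matches; a thresholded local-maximum
# search then localises the strongest responses, playing the role of pooling
response = np.maximum(response, 0.0)
thr = 0.6 * response.max()
detected = [i for i in range(1, len(response) - 1)
            if response[i] > thr
            and response[i] >= response[i - 1]
            and response[i] >= response[i + 1]]
print(detected)  # indices close to true_peaks
```

In a trained CNN the kernel is learned rather than fixed; the key finding above is that, for R-peak detection, the learned filter weights converge to precisely this kind of ECG-shaped template.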
Owing to their unique collocated-sensing nature, Hearables record a rich admixture of information from several physiological variables, motion and muscle artefacts, and noise. For example, even a standard Electroencephalogram (EEG) measurement contains a weak ECG and muscle artefacts, which are typically treated as bad data and subsequently discarded. In the quest to exploit all the available information (no data is bad data), the final section of the tutorial focuses on a novel class of encoder-decoder networks which, taking advantage of this collocation of information, maximise data utility. We introduce the novel concept of a Correncoder and demonstrate its ability to learn a shared latent space between the model input and output, making it a deep-NN generalisation of partial least squares (PLS). The key topics of the final section of this tutorial are as follows:

  • A thorough explanation of Partial Least Squares (Projection on Latent Spaces) regression, and the lens of interpreting deep learning models as an extension of PLS; a toy PLS example is sketched after this section.
  • An introduction to the Correncoder and Deep Correncoder, a powerful yet efficient deep learning framework for extracting correlated information between an input and a reference.
  • Real-world applications of the Correncoder to Hearables data, ranging from transforming Photoplethysmography (PPG) into respiratory signals, through to making sense of artefacts and decoding implanted brain electrical signals into movement.

In summary, this tutorial details how the marriage of the emerging but crucial sensing modality of Hearables and customised interpretable deep learning models can maximise the utility of wearables data for healthcare applications, with a focus on the long-term monitoring of chronic diseases. Wearable in-ear sensing for automatic screening and monitoring of disease has the potential for immense global societal impact, and for personalised healthcare out-of-clinic and in the community, the main aims of future eHealth. The presenters are a perfect match for the topic of this tutorial: Prof Mandic's team are pioneers of Hearables, and the two presenters have been working together over the last several years on the links between Signal Processing, Embedded Systems and Connected Health; the presenters also hold three international patents in this area.
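To make the PLS baseline concrete, here is a toy example using scikit-learn's PLSRegression on synthetic data driven by a shared latent source. The dimensions, and the reading of X as PPG-derived features and Y as a respiration target, are hypothetical stand-ins for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
latent = rng.standard_normal((200, 2))  # shared hidden drivers of both views
# X could be windows of in-ear PPG features, Y a respiration target (hypothetical)
X = latent @ rng.standard_normal((2, 10)) + 0.1 * rng.standard_normal((200, 10))
Y = latent @ rng.standard_normal((2, 3)) + 0.1 * rng.standard_normal((200, 3))

# PLS projects X and Y onto paired latent directions of maximal covariance
pls = PLSRegression(n_components=2).fit(X, Y)
print("R^2 on training data:", pls.score(X, Y))
print(pls.x_scores_.shape, pls.y_scores_.shape)  # the paired latent coordinates
```

The paired scores are the shared latent coordinates of input and reference; in the Correncoder, these linear projections are generalised to learned nonlinear maps.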
Tutorial Outline

The tutorial will involve both the components of the Hearables paradigm and the interpretable AI solutions for 24/7 wearable sensing in the real world. The duration will be over 3 hours, with the following topics covered:

  • The Hearables paradigm. Here, we will cover the Biophysics supporting in-ear sensing of neural function and vital signs, together with the corresponding COMSOL Multiphysics simulations and the real-world recordings of the Electroencephalogram (EEG), Electrocardiogram (ECG), Photoplethysmogram (PPG), respiration, blood oxygen level (SpO2), temperature, movement and sound, all from an earplug with embedded sensors. (40 minutes)
  • Automatic Sleep Staging and Cognitive Load Estimation from Hearables. Here we demonstrate two real-world applications of Hearables, with in-ear polysomnography enabling unobtrusive in-home sleep monitoring, and robust tracking of cognitive workload during memory tasks and gaming, together with their links with dementia. (30 minutes)
  • Interpretable Convolutional Neural Networks (CNNs). This section explains CNNs through the lens of the matched filter (MF), a seven-decade-old core concept in signal processing theory. It finishes with the example of a deep Matched Filter designed for robust R-peak detection in noisy Ear-ECG. (40 minutes)
  • Physiologically informed data augmentation. Here we build upon our pioneering work on screening for chronic obstructive pulmonary disease (COPD) with in-ear PPG, by detailing an apparatus designed to simulate COPD in healthy individuals. We demonstrate the advantages of using domain knowledge within such an apparatus when producing surrogate data for deep-learning models. (20 minutes)
  • An introduction to the Correncoder. Here we introduce a new rethinking of the classic encoder-decoder structure, with the aim of extracting correlated information between two signals. At each stage, we mirror this model with the method of Projection on Latent Spaces (PLS), showing that this deep learning framework can be interpreted as a deep, generalisable PLS; a minimal sketch follows this outline. We show multiple real-world applications of such a framework in the context of wearable eHealth. (40 minutes)
  • No data is bad data. In this final section of the tutorial, we reject the null hypothesis that data containing artefacts should be discarded, with examples from ear-EEG signal processing. We demonstrate that in many cases rich information can be recovered from artefacts, and that the Correncoder framework can achieve artefact removal in real time. (20 minutes)
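The sketch below illustrates the encoder-decoder idea behind the Correncoder under assumed choices (a fully connected architecture, an MSE objective, toy data); it is an illustration of the shared-latent-space concept, not the authors' exact model.

```python
import torch
import torch.nn as nn

class Correncoder(nn.Module):
    """Minimal sketch: an assumed architecture, not the published Correncoder."""
    def __init__(self, in_dim: int, latent_dim: int, out_dim: int):
        super().__init__()
        # Encoder: projects the input onto a low-dimensional latent space
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        # Decoder: maps the latent code to the reference, so the latent space
        # is shared between input and output (cf. the paired PLS scores)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, out_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Toy training loop: x could be a PPG window, y a respiration window (hypothetical)
model = Correncoder(in_dim=100, latent_dim=2, out_dim=100)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(256, 100), torch.randn(256, 100)
for step in range(100):
    y_hat, z = model(x)
    loss = nn.functional.mse_loss(y_hat, y)  # fit the reference from the shared code
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```

Restricting the latent dimension forces the network to retain only the information common to input and reference, mirroring the role of the paired latent scores in PLS.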
24 Oct 2024
