Self-Supervised Learning: Overview And Application To Medical Imaging

Pavan Annangi, Deepa Anand, Hemant Kumar Aggarwal, Hariharan Ravishankar, Rahul Venkataramani

  • SPS Members: Free
  • IEEE Members: $11.00
  • Non-members: $15.00
  • Length: 02:49:59
28 Mar 2022

Supervised learning has achieved tremendous progress, making it the ubiquitous tool of choice in nearly all learning applications. However, the success of supervised learning largely depends on the quantity and quality of labelled datasets, which are prohibitively expensive to obtain in healthcare settings. A recent technique, termed 'self-supervised learning' (SSL), aims to exploit the vast amounts of relatively inexpensive unlabeled data to learn meaningful representations that reduce the annotation burden. Self-supervised learning is a form of unsupervised learning that extracts latent information encoded inside the input dataset to train a neural network for the end task. Self-supervised learning relies on the input dataset itself to obtain the targets for the training loss (self-supervision). Self-supervision is particularly relevant for researchers from the medical community for several reasons, including: 1) the cost and feasibility of annotating large datasets, and 2) the limitations of transfer learning, e.g., data types (2D+t, 3D), data distribution shift (grayscale images limited to specific anatomies), and problem types (segmentation, reconstruction). Through this special session, we will introduce self-supervised learning, popular architectures, and successful use cases, particularly in the medical imaging domain. The initial successes in self-supervised learning followed a template of designing pretext tasks (tasks with labels derived from the data itself, e.g., colorization, jigsaw puzzles) followed by utilizing the learnt representations on the downstream task of interest. However, in recent years, these methods have largely been replaced by contrastive learning and regularization-based methods (virtual target embeddings, high-entropy embedding vectors). In this talk, we will review the most popular methods for self-supervised learning and their applications.
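To make the contrastive family of methods concrete, below is a minimal NumPy sketch of a normalized temperature-scaled cross-entropy (NT-Xent) loss of the kind popularized by SimCLR; the function name and shapes are illustrative assumptions, not code from the talk.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive NT-Xent loss over two augmented views of the same batch.

    z1, z2: (N, d) embeddings of the two views; row i of z1 and row i of z2
    come from the same underlying image (the positive pair).
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # the positive for sample i is its other view at index (i + N) mod 2N
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Minimizing this loss pulls embeddings of two augmentations of the same image together while pushing apart embeddings of different images in the batch, which is the core mechanism that replaced hand-designed pretext tasks.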
Despite the obvious need for SSL, applying self-supervised learning poses a challenge due to differences in problem type. We will discuss methods developed in-house to extend SSL techniques to classification and segmentation use cases. The subsequent section of the talk will focus on self-supervised techniques for compressed sensing (CS) problems. Classical CS-based methods rely only on noisy and undersampled measurements to reconstruct the fully sampled image. These methods exploit the imaging physics to reconstruct a data-consistent image using an iterative algorithm, but are comparatively slow. Model-based deep learning methods combine the power of classical CS-based methods and deep learning. These methods are extended to SSL using the Ensembled Stein's Unbiased Risk Estimator (ENSURE), which approximates the projected mean-square error (MSE) as a stand-in for the true MSE, enabling training without fully sampled ground truth. We will also discuss some of the empirical rules that have aided our experiments in training SSL methods.
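The key idea behind SURE-style training is that, for Gaussian noise, the MSE of a reconstruction can be estimated from the noisy data alone. The sketch below is a generic Monte Carlo SURE estimate for a denoiser (not the ENSURE method from the talk, which extends this to projected MSE for undersampled measurements); the function name and probe scheme are illustrative assumptions.

```python
import numpy as np

def mc_sure(f, y, sigma, eps=1e-4, rng=None):
    """Monte Carlo Stein's Unbiased Risk Estimate of the per-pixel MSE of a
    denoiser f at noisy input y = x + n, with n ~ N(0, sigma^2 I), computed
    without access to the clean signal x:

        SURE = ||f(y) - y||^2 / N - sigma^2 + (2 sigma^2 / N) * div f(y)

    The divergence div f(y) is estimated with a single random probe b via
    b^T (f(y + eps*b) - f(y)) / eps.
    """
    rng = rng or np.random.default_rng()
    n = y.size
    b = rng.standard_normal(y.shape)
    fy = f(y)
    div = b.ravel() @ (f(y + eps * b) - fy).ravel() / eps
    return np.mean((fy - y) ** 2) - sigma**2 + 2 * sigma**2 * div / n
```

Because this quantity is an unbiased estimate of the true MSE, it can be used directly as a training loss when ground-truth images are unavailable, which is the self-supervised ingredient that SURE-based reconstruction methods build on.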