28 Mar 2022

Deep neural networks suffer from Catastrophic Forgetting (CF) of old tasks when trained on new tasks sequentially, because the model's parameters shift to optimize for the new classes. Alleviating CF is of interest to the Computer-Aided Diagnosis (CAD) community because it enables class-incremental learning (IL): learning new classes as new data and annotations become available, after the old data is no longer accessible. However, IL has not been explored much in CAD development. We propose a novel approach that ensures the model remembers the causal factors behind its decisions on the old classes while incrementally learning new ones. We introduce a common auxiliary task during incremental training whose hidden representations are shared across all the classification heads. Since the hidden representation is no longer task-specific, CF is significantly reduced. We demonstrate our approach by incrementally learning 5 different tasks on chest X-rays and compare the results with state-of-the-art regularization methods. Our approach performs consistently well in reducing CF across all tasks, with almost zero CF in most cases, unlike standard regularization-based approaches.
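
The abstract describes a shared backbone with per-task classification heads plus one common auxiliary head that is trained at every incremental step, so the hidden representation stays anchored to a task-agnostic objective instead of drifting toward the newest task. The sketch below illustrates that idea in PyTorch; it is not the authors' implementation, and the choice of auxiliary task (image reconstruction here), the backbone architecture, and the loss weighting are illustrative assumptions.

```python
# Minimal sketch, assuming a PyTorch-style training loop.
# The auxiliary task (reconstructing a downsampled 32x32 view of the input)
# is a hypothetical placeholder for the paper's common auxiliary task.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IncrementalModel(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.feat_dim = feat_dim
        # Backbone producing the shared hidden representation.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # One classification head per incrementally learned task.
        self.heads = nn.ModuleList()
        # Common auxiliary head shared across all incremental steps.
        self.aux_head = nn.Linear(feat_dim, 32 * 32)

    def add_task(self, num_classes):
        # Called once before training on each new task.
        self.heads.append(nn.Linear(self.feat_dim, num_classes))

    def forward(self, x, task_id):
        h = self.backbone(x)                     # shared hidden representation
        return self.heads[task_id](h), self.aux_head(h)

def train_step(model, optimizer, x, y, task_id, aux_target, aux_weight=1.0):
    """One incremental-training step: new-task loss + shared auxiliary loss."""
    logits, aux_pred = model(x, task_id)
    loss = F.cross_entropy(logits, y)
    # The auxiliary loss ties the backbone to a task-agnostic objective,
    # which is what reduces forgetting in this sketch.
    loss = loss + aux_weight * F.mse_loss(aux_pred, aux_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, each incremental step adds a new head with `add_task` and then optimizes the sum of the new task's classification loss and the common auxiliary loss; the relative weighting (`aux_weight`) is an assumed hyper-parameter, not a value from the paper.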
