Towards Adversarially Robust Continual Learning
Tao Bai (Nanyang Technological University); Chen Chen (Sony AI); Lingjuan Lyu (Sony AI); Jun Zhao (Nanyang Technological University); Bihan Wen (Nanyang Technological University)
Recent studies show that models trained by continual learning can achieve performance comparable to that of standard supervised learning, and the learning flexibility of continual learning models enables their wide application in the real world. Deep learning models, however, are known to be vulnerable to adversarial attacks. While model robustness has been studied extensively in the context of standard supervised learning, protecting continual learning from adversarial attacks has not yet been investigated. To fill this research gap, we are the first to study adversarial robustness in continual learning and propose a novel method called Task-Aware Boundary Augmentation (TABA) to boost the robustness of continual learning models. With extensive experiments on CIFAR-10 and CIFAR-100, we demonstrate the efficacy of adversarial training and TABA in defending against adversarial attacks.
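The abstract does not spell out the TABA procedure, so the following is only a minimal sketch of the generic building block it references: PGD-based adversarial training applied sequentially over tasks, as in standard continual-learning setups on CIFAR-10/CIFAR-100. The names `model`, `task_loaders`, and the attack hyperparameters are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumptions noted above): L-infinity PGD adversarial training
# applied per task in a task-incremental loop. This is NOT the TABA method
# itself, only the plain adversarial-training baseline the abstract mentions.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft L-infinity PGD adversarial examples for a batch (x, y)."""
    # Random start inside the eps-ball, clipped to valid image range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient-ascent step on the loss, projected back into the eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()


def adversarial_continual_training(model, task_loaders, epochs=1, lr=0.1):
    """Train sequentially on each task's data using adversarial examples."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for task_id, loader in enumerate(task_loaders):
        for _ in range(epochs):
            for x, y in loader:
                model.eval()
                x_adv = pgd_attack(model, x, y)          # inner maximization
                model.train()
                opt.zero_grad()
                loss = F.cross_entropy(model(x_adv), y)  # outer minimization
                loss.backward()
                opt.step()
    return model
```

A usage example would pass a CIFAR-10 classifier as `model` and a list of per-task `DataLoader`s as `task_loaders`; task-aware augmentation as described by TABA would be inserted where the adversarial examples are generated.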