
Defense against black-box adversarial attacks via heterogeneous fusion features

Jiahuan Zhang (Hokkaido University); Keisuke Maeda (Hokkaido University); Takahiro Ogawa (Hokkaido University); Miki Haseyama (Hokkaido University)

06 Jun 2023

This paper presents an effective approach to the adversarial defense task, named the heterogeneous feature fusion network (HFFN). Inspired by the fact that humans utilize multimodal information to perceive objects, we introduce caption features into classic convolutional neural networks (CNNs) and fuse them with traditional image features. To reduce the "modal gap" between heterogeneous features, we introduce the hetero-center loss, which constrains the distance between the class centers of different modalities. In addition, we integrate the guided complement entropy loss into HFFN, which restricts the model's prediction probability on non-ground-truth classes, so that the adversarial robustness of the fused heterogeneous features is further improved. We adopt three state-of-the-art comparison approaches and design four ablation methods to evaluate the effectiveness and adversarial robustness of HFFN. To compare HFFN with these comparison and ablation methods, we use a substitute model to generate a wide range of adversarial examples. Extensive experiments with these adversarial examples on the CIFAR-10 and CIFAR-100 datasets demonstrate the superiority of our method.
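The abstract does not give the loss formulations, so the sketch below is only a rough illustration of the two ingredients it names: a hetero-center loss that pulls the per-class centers of the image and caption features together, and a guided complement entropy loss that flattens the predicted probabilities over non-ground-truth classes, weighted by the confidence on the ground-truth class. The PyTorch functions, the alpha hyperparameter, and the normalization details are assumptions based on the commonly published forms of these losses, not the authors' implementation.

```python
# Illustrative sketch only, not the paper's code. Assumes PyTorch and the
# generic forms of the hetero-center and guided complement entropy losses.
import torch


def hetero_center_loss(img_feats, cap_feats, labels, num_classes):
    """Squared distance between per-class centers of two modalities.

    img_feats, cap_feats: (N, D) features from the image / caption branches.
    labels: (N,) integer class labels shared by both modalities.
    """
    loss = img_feats.new_zeros(())
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            center_img = img_feats[mask].mean(dim=0)   # image-branch center of class c
            center_cap = cap_feats[mask].mean(dim=0)   # caption-branch center of class c
            loss = loss + torch.sum((center_img - center_cap) ** 2)
    return loss


def guided_complement_entropy(logits, labels, alpha=0.2):
    """Rough form of guided complement entropy: encourage a flat distribution
    over non-ground-truth classes, scaled by the ground-truth probability
    raised to alpha (alpha is an assumed hyperparameter, not from the paper)."""
    probs = torch.softmax(logits, dim=1)
    gt = probs.gather(1, labels.unsqueeze(1)).squeeze(1)                # (N,) ground-truth prob
    mask = torch.ones_like(probs).scatter_(1, labels.unsqueeze(1), 0.0)  # zero out ground-truth column
    comp = probs * mask / (1.0 - gt).clamp_min(1e-7).unsqueeze(1)        # normalized complement probs
    ent = (comp * torch.log(comp.clamp_min(1e-7))).sum(dim=1)            # negative entropy per sample
    return ((gt ** alpha) * ent).mean()                                  # minimizing flattens complement probs
```

In training, such terms would typically be combined with a standard classification loss on the fused features; how the terms are weighted against each other is a design choice the abstract does not specify.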
