On Adversarial Robustness of Large-scale Audio Visual Learning

Juncheng B Li, Xinjian Li, Po-Yao (Bernie) Huang, Florian Metze, Shuhui Qu

Length: 00:14:39
08 May 2022

As multi-modal systems are being deployed for safety-critical tasks such as surveillance and malicious content filtering, their robustness remains an under-studied area. Existing work on robustness either lacks scalability to large-scale datasets or fails to handle multiple modalities. This work studies several key questions about multi-modal learning through the lens of robustness: 1) Are multi-modal models necessarily more robust than uni-modal models? 2) How can we efficiently measure the robustness of multi-modal learning on a large-scale dataset? 3) How should different modalities be fused to achieve a more robust multi-modal model? To understand the robustness of multi-modal models in a large-scale setting, we propose a density-based metric and a convexity metric that efficiently measure the distribution of each modality in the high-dimensional latent space. Our work provides theoretical intuition together with empirical evidence showing how multi-modal fusion affects adversarial robustness through these metrics. We further devise a mix-up strategy based on our metrics to improve the robustness of the trained model. Our experiments on AudioSet and Kinetics-Sounds verify our hypothesis that multi-modal models are not necessarily more robust than their uni-modal counterparts when facing adversarial examples. Our mix-up-trained models achieve as much protection as traditional adversarial training.
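The abstract does not spell out how the metric-guided mix-up is computed; as a rough illustration of the generic mix-up idea it builds on, a single mix-up step over paired audio-visual features might look like the sketch below. The function name, the feature/label shapes, and the Beta-distributed mixing coefficient are assumptions for illustration, not the authors' exact procedure.

```python
# Hypothetical sketch of a mix-up step on paired audio-visual features.
# This is NOT the paper's metric-guided procedure; it only illustrates the
# generic mix-up operation that the proposed strategy builds on.
import numpy as np

def mixup_batch(audio, video, labels, alpha=0.2, rng=None):
    """Mix each example with a randomly permuted partner.

    audio:  (batch, d_a) array of audio features
    video:  (batch, d_v) array of visual features
    labels: (batch, n_classes) one-hot or multi-hot targets
    alpha:  Beta-distribution parameter controlling mix strength (assumed value)
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # mixing coefficient in (0, 1)
    perm = rng.permutation(len(audio))  # partner index for each example

    # Mix both modalities and the targets with the same coefficient,
    # so the audio-visual pairing stays consistent after mixing.
    mixed_audio = lam * audio + (1.0 - lam) * audio[perm]
    mixed_video = lam * video + (1.0 - lam) * video[perm]
    mixed_labels = lam * labels + (1.0 - lam) * labels[perm]
    return mixed_audio, mixed_video, mixed_labels
```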
