Audio-Visual Event Recognition Through The Lens Of Adversary

Juncheng Li, Kaixin Ma, Shuhui Qu, Po-Yao Huang, Florian Metze

Length: 00:14:42
10 Jun 2021

As audio-visual classification models are widely deployed for sensitive tasks like content filtering at scale, it is critical to understand their robustness. This work studies several key questions in multimodal learning through the lens of adversarial noise: 1) the trade-off between early and late fusion in terms of robustness; 2) how different frequency/time-domain features contribute to robustness; 3) how different neural modules hold up against adversarial noise. In our experiments, we construct adversarial examples to attack state-of-the-art neural models trained on Google AudioSet and analyze how much attack potency, in terms of $\epsilon$ under different $L_p$ norms, is needed to "deactivate" the victim model. Using adversarial noise to dissect multimodal models, we provide insight into which fusion strategy best balances the trade-off between model parameters/accuracy and robustness, and we distinguish the robust features from the non-robust features that various neural networks tend to learn.
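The page does not include the attack code itself; as a rough illustration of the kind of $L_p$-bounded attack the abstract describes, below is a minimal sketch of a projected-gradient-style attack under an $L_\infty$ budget $\epsilon$, written in PyTorch. The function name `pgd_attack`, the `victim` model interface, and all hyperparameter values are assumptions for illustration, not the authors' implementation.

```python
# Minimal PGD sketch (assumed, not the paper's code): find a perturbation
# delta with ||delta||_inf <= eps that maximizes the victim's loss.
import torch
import torch.nn.functional as F

def pgd_attack(victim, x, y, eps=0.01, alpha=0.002, steps=10):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(victim(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient-ascent step on the loss
            delta.clamp_(-eps, eps)             # project back into the eps-ball
        delta.grad.zero_()
    return (x + delta).detach()
```

Sweeping $\epsilon$ upward until the victim's predictions collapse gives one way to read the "attack potency" the abstract measures; swapping the sign-step and clamp for, e.g., $L_2$ renormalization would yield the other $L_p$ budgets.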

Chair:
Konstantinos Drossos
