  • SPS Members: Free
  • IEEE Members: $11.00
  • Non-members: $15.00
  • Length: 12:30
04 May 2020

The presence of auditory and visual senses enables humans to obtain a profound understanding of real-world scenes. While audio and visual signals can each provide scene knowledge individually, combining the two offers better insight into the underlying event. In this paper, we address the problem of audio-visual event localization, where the goal is to identify the presence of an event that is both audible and visible in a video, using fully or weakly supervised learning. To this end, we propose a novel Audio-Visual Interacting Network (AVIN) that enables inter- as well as intra-modality interactions by exploiting the local and global information of the two modalities. Our empirical evaluations confirm the superiority of the proposed model over existing state-of-the-art methods in both the fully and the weakly supervised settings, asserting the efficacy of our joint modeling.
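The abstract does not detail AVIN's architecture, but the inter-modality interaction it mentions is commonly realized as cross-modal attention, where segments of one modality attend to segments of the other. Below is a minimal, hypothetical sketch of that idea in numpy; the function names, feature dimensions, and the concatenation-based fusion are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(queries, keys, values):
    # scaled dot-product attention: one modality attends to the other
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (Tq, Tk) similarity scores
    weights = softmax(scores, axis=-1)       # each query's weights over the other modality
    return weights @ values                  # (Tq, d) attended features

# toy example: 10 one-second video segments, 128-d features per modality
# (segment count and dimensionality are arbitrary choices for illustration)
rng = np.random.default_rng(0)
audio = rng.standard_normal((10, 128))
visual = rng.standard_normal((10, 128))

# inter-modality interaction: each modality attends to the other
visual_attended = cross_modal_attention(visual, audio, audio)
audio_attended = cross_modal_attention(audio, visual, visual)

# placeholder fusion of the two attended streams per segment
fused = np.concatenate([visual_attended, audio_attended], axis=-1)  # (10, 256)
print(fused.shape)
```

A per-segment event classifier would then consume `fused` to decide, for each segment, whether an audio-visual event is present; the weakly supervised variant would instead pool segment scores into a single video-level prediction.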
