12 May 2022

Audio and visual signals jointly stimulate the audio-visual sensory neurons that form audio-visual percepts, helping humans perceive the world. Most existing audio-visual event localization approaches generate audio-visual features by fusing the audio and visual modalities and use them directly for the final predictions. However, a complicated multi-modal perception system also contains an audio-visual adjustment mechanism. Inspired by this observation, we propose a novel bi-directional modality fusion network (BMFN), which not only fuses audio and visual features but also adjusts the fused features, with the help of the original audio and visual content, to make them more representative. The high-level audio-visual features obtained from the two directions by two forward-backward fusion modules are averaged for the final event localization. Experimental results demonstrate that our method outperforms state-of-the-art works in both fully- and weakly-supervised learning settings. The code is available at https://github.com/weizequan/BMFN.git.
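The abstract describes the data flow at a high level: two directional forward-backward fusion modules, each fusing one modality with the other and then adjusting the fused features against the original content, with the two outputs averaged before per-segment event classification. The PyTorch sketch below illustrates that flow under stated assumptions; the module names (ForwardBackwardFusion, BMFNSketch), layer choices, feature dimension, and the 29-way output (a guess based on the AVE benchmark's 28 event classes plus background) are illustrative, not the authors' released implementation at https://github.com/weizequan/BMFN.git.

```python
import torch
import torch.nn as nn


class ForwardBackwardFusion(nn.Module):
    """One fusion direction: fuse the two modalities (forward step),
    then adjust the fused features using the original guiding modality
    (backward step). Layer choices are illustrative assumptions."""

    def __init__(self, dim):
        super().__init__()
        self.fuse = nn.Linear(2 * dim, dim)    # forward: joint fusion
        self.adjust = nn.Linear(2 * dim, dim)  # backward: guided adjustment

    def forward(self, primary, guide):
        # primary, guide: (batch, segments, dim) segment-level features
        fused = torch.relu(self.fuse(torch.cat([primary, guide], dim=-1)))
        # Re-inject the original guide features to adjust the fused ones.
        return torch.relu(self.adjust(torch.cat([fused, guide], dim=-1)))


class BMFNSketch(nn.Module):
    """Bi-directional fusion: an audio-to-visual and a visual-to-audio
    direction, averaged before per-segment event classification."""

    def __init__(self, dim=256, num_classes=29):
        super().__init__()
        self.a2v = ForwardBackwardFusion(dim)
        self.v2a = ForwardBackwardFusion(dim)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, audio, visual):
        f_av = self.a2v(audio, visual)
        f_va = self.v2a(visual, audio)
        fused = (f_av + f_va) / 2      # mean over the two directions
        return self.classifier(fused)  # per-segment event logits


# Usage with dummy segment-level features (e.g., ten one-second segments):
model = BMFNSketch()
audio = torch.randn(2, 10, 256)
visual = torch.randn(2, 10, 256)
logits = model(audio, visual)  # shape: (2, 10, 29)
```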
