Single-branch Network for Multimodal Training
Muhammad Saad Saeed (University of Engineering and Technology Taxila); Shah Nawaz (Deutsches Elektronen-Synchrotron DESY); Muhammad Haris Khan (Mohamed bin Zayed University of Artificial Intelligence); Muhammad Zaigham Zaheer (Mohamed bin Zayed University of Artificial Intelligence); Karthik Nandakumar (Mohamed bin Zayed University of Artificial Intelligence); Mohammad Haroon Yousaf (University of Engineering and Technology Taxila); Arif Mahmood (Information Technology University)
With the rapid growth of social media platforms, users are sharing billions of multimedia posts containing audio, images, and text. Researchers have focused on building autonomous systems capable of processing such multimedia data to solve challenging multimodal tasks, including cross-modal retrieval, matching, and verification. Existing works use a separate network per modality to extract embeddings and bridge the gap between modalities. This modular, branched structure has been fundamental in creating numerous multimodal applications and has become the de facto standard for handling multiple modalities. In contrast, we propose a novel single-branch network capable of learning discriminative representations for both unimodal and multimodal tasks without any change to the network. An important feature of our single-branch network is that it can be trained with either a single modality or multiple modalities without sacrificing performance. We evaluate the proposed network on the challenging multimodal problem of face-voice association, covering cross-modal verification and matching tasks under various loss formulations.
Experimental results demonstrate the superiority of our proposed single-branch network over existing methods across a wide range of experiments. Code: https://github.com/msaadsaeed/SBNet
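To make the core idea concrete, the sketch below shows one way a single-branch network can process either modality through the same weights; it is a minimal illustration, not the authors' exact architecture. It assumes pre-extracted face and voice embeddings of a common dimensionality, and the module name, layer sizes, and cosine-similarity scoring are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class SingleBranchNet(nn.Module):
    """A single shared branch that maps embeddings from either modality
    (face or voice) into one joint discriminative space."""
    def __init__(self, in_dim=512, hidden_dim=1024, out_dim=256):
        super().__init__()
        # The same weights serve both modalities: there are no
        # modality-specific sub-networks, unlike two-branch designs.
        self.branch = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        # x: (batch, in_dim) embeddings from a face OR voice extractor
        z = self.branch(x)
        # Unit-normalize so similarity reduces to a dot product
        return nn.functional.normalize(z, dim=-1)

model = SingleBranchNet()
face_emb = torch.randn(8, 512)   # hypothetical pre-extracted face features
voice_emb = torch.randn(8, 512)  # hypothetical pre-extracted voice features
zf, zv = model(face_emb), model(voice_emb)
# Cross-modal verification: cosine similarity of paired embeddings
scores = (zf * zv).sum(dim=-1)
```

Because the identical module embeds both streams, such a network can in principle be trained unimodally (feeding only one modality) or multimodally (feeding both), which is the flexibility the abstract highlights.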