OPTIMIZED QUALITY FEATURE LEARNING FOR VIDEO QUALITY ASSESSMENT
Ngai-Wing Kwong (The Hong Kong Polytechnic University); Yui-Lam Chan (The Hong Kong Polytechnic University); Sik-Ho Tsang (Centre for Advances in Reliability and Safety (CAiRS)); Daniel P.K. Lun (The Hong Kong Polytechnic University)
Recently, transfer learning-based methods have been adopted in video quality assessment (VQA) to compensate for the lack of large-scale training samples and human annotation labels. However, these methods introduce a domain gap between the source and target domains, resulting in sub-optimal feature representations that deteriorate accuracy. This paper proposes optimized quality feature learning via a multi-channel convolutional neural network (CNN) with a gated recurrent unit (GRU) for no-reference (NR) VQA. First, inspired by self-supervised learning, the multi-channel CNN is pre-trained on the image quality assessment (IQA) domain without human annotation labels. Then, semi-supervised learning is used to fine-tune the CNN and transfer knowledge from IQA to VQA, while motion-aware information is incorporated for better quality feature learning. Finally, the quality features of all frames are extracted and fed into the GRU to obtain the video quality score. Experimental results demonstrate that our model outperforms state-of-the-art VQA approaches.
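The overall pipeline described above (a pre-trained CNN extracting a quality feature per frame, followed by a GRU that aggregates the frame features into a video-level score) can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the backbone layers, feature dimensions, and the `QualityGRU` name are assumptions, since the abstract does not specify the architecture details.

```python
import torch
import torch.nn as nn

class QualityGRU(nn.Module):
    """Hypothetical sketch of the CNN + GRU pipeline from the abstract:
    a CNN backbone produces one quality feature vector per frame, and a
    GRU aggregates the frame features into a single video quality score."""

    def __init__(self, feat_dim=128, hidden_dim=64):
        super().__init__()
        # Stand-in for the pre-trained multi-channel CNN (details are
        # not given in the abstract; any frame-level feature extractor fits).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # regress a scalar quality score

    def forward(self, video):
        # video: (batch, frames, 3, H, W)
        b, t = video.shape[:2]
        # Extract a quality feature for every frame independently.
        feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)
        # The GRU's final hidden state summarizes the frame sequence.
        _, h = self.gru(feats)
        return self.head(h[-1]).squeeze(-1)  # (batch,) predicted quality

# Example: score a batch of 2 videos with 8 frames of 64x64 RGB each.
scores = QualityGRU()(torch.randn(2, 8, 3, 64, 64))
```

In this sketch the GRU replaces simple temporal pooling so that the model can weight frames by their temporal context, which mirrors the motion-aware motivation stated in the abstract.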