
Attention-Based Network for No-Reference UGC Video Quality Assessment

Fuwang Yi, Mianyi Chen, Wei Sun, Xiongkuo Min, Yuan Tian, Guangtao Zhai

  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:05:30
21 Sep 2021

The quality assessment of user-generated content (UGC) videos is a challenging problem due to the absence of reference videos and the complexity of their distortions. Traditional no-reference video quality assessment (NR-VQA) algorithms mainly target specific synthetic distortions; far less attention has been paid to the authentic distortions in UGC videos, which are unevenly distributed in both the spatial and temporal domains. In this paper, we propose an end-to-end neural network model for UGC videos based on the attention mechanism. The key step in our approach is to embed attention modules in the feature extraction network, which effectively captures local distortion information. In addition, to exploit the temporal perception mechanism of the human visual system (HVS), a gated recurrent unit (GRU) and a temporal pooling layer are integrated into the proposed model. We validate the proposed model on three public in-the-wild VQA databases: KoNViD-1k, CVD2014, and LIVE-Qualcomm. Experimental results demonstrate that the proposed method outperforms state-of-the-art NR-VQA models.
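The pipeline the abstract describes (attention-weighted spatial feature pooling per frame, followed by temporal pooling over the frame-level scores) can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the softmax attention weighting, the causal min-pooling window `tau`, and the 0.5/0.5 blend of the memory term with the current score are all assumptions made here for the sketch; the paper's actual model uses learned attention modules and a GRU, which are omitted.

```python
import numpy as np

def spatial_attention_pool(feat_maps, att_logits):
    """Collapse per-location frame features into one vector using
    softmax attention weights.
    feat_maps: (H*W, C) features for one frame.
    att_logits: (H*W,) unnormalized attention scores (assumed to come
    from a learned attention module in the full model)."""
    w = np.exp(att_logits - att_logits.max())  # stable softmax
    w /= w.sum()
    return (feat_maps * w[:, None]).sum(axis=0)  # (C,) pooled feature

def temporal_pool(frame_scores, tau=3):
    """Illustrative temporal pooling with a simple memory effect:
    each frame's score is blended with the worst score in a short
    causal window (viewers remember recent quality drops), then the
    sequence is averaged into one video-level score."""
    frame_scores = np.asarray(frame_scores, dtype=float)
    T = len(frame_scores)
    pooled = np.empty(T)
    for t in range(T):
        memory = frame_scores[max(0, t - tau):t + 1].min()
        pooled[t] = 0.5 * (memory + frame_scores[t])
    return pooled.mean()
```

With uniform attention logits the spatial pool reduces to a plain average, and a constant-quality sequence is left unchanged by the temporal pool, which is a quick sanity check on both functions.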
