SPS
22 Sep 2021

We consider the problem of robust no-reference (NR) video quality assessment (VQA), where algorithms must generalize well when trained and tested on different datasets. We specifically address this question in the context of predicting video quality for compression and transmission applications. Motivated by the success of the spatio-temporal entropic differences video quality predictor in this context, we design a framework that uses convolutional neural networks to predict spatial and temporal entropic differences without the need for a reference video or human opinion scores. This approach enables our model to capture both spatial and temporal distortions effectively and allows for robust generalization. We evaluate our algorithms on a variety of datasets and show superior cross-database performance compared to state-of-the-art NR VQA algorithms.
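The spatio-temporal entropic differences the model learns to predict are, in the original reference-based formulation, local entropies of band-pass frame and frame-difference coefficients under a Gaussian model. A minimal sketch of how such spatial and temporal entropic feature maps can be computed for a single frame pair is given below; the function name, filter scale, and neural-noise parameter `sigma_n` are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def entropic_features(frame_t, frame_prev, sigma_n=0.1):
    """Sketch of spatial/temporal entropic feature maps for one frame pair.

    frame_t, frame_prev: 2-D float arrays (grayscale frames in [0, 1]).
    sigma_n: assumed neural-noise stabilizer, keeps entropies finite.
    Returns (spatial_entropy_map, temporal_entropy_map).
    """
    def bandpass(x):
        # Simple band-pass: remove the local mean (illustrative choice).
        return x - gaussian_filter(x, sigma=1.5)

    def local_entropy(coeffs):
        # Local variance of band-pass coefficients via Gaussian windows.
        mu = gaussian_filter(coeffs, sigma=1.5)
        var = np.maximum(gaussian_filter(coeffs * coeffs, sigma=1.5) - mu * mu, 0.0)
        # Differential entropy of a Gaussian with variance (var + sigma_n^2).
        return 0.5 * np.log(2.0 * np.pi * np.e * (var + sigma_n ** 2))

    spatial = local_entropy(bandpass(frame_t))
    temporal = local_entropy(bandpass(frame_t - frame_prev))
    return spatial, temporal
```

In the NR setting described in the abstract, a CNN is trained to regress such entropy maps directly from the distorted video, removing the need for the pristine reference that the original predictor requires.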