DEMI: Deep Video Quality Estimation Model using Perceptual Video Quality Dimensions
Saman Zadtootaghaj, Nabajeet Barman, Rakesh Rao Ramachandra Rao, Steve Göring, Maria Martini, Alexander Raake, Sebastian Möller
SPS
With the advent and integration of gaming video streaming on traditional platforms such as YouTube Gaming and Facebook Gaming, it is imperative that proposed quality estimation metrics work for both gaming and non-gaming content. Existing works in the field of quality assessment focus separately on gaming and non-gaming content. Alongside traditional modeling approaches, deep learning based approaches have been used to develop quality models due to their high prediction accuracy. Hence, in this paper we present a deep learning based quality estimation model that considers both gaming and non-gaming videos. The model is developed in three phases. First, a convolutional neural network (CNN) is trained on an objective metric, which allows the CNN to learn video artifacts such as blurriness and blockiness. Next, the model is fine-tuned on a small image quality dataset using blockiness and blurriness ratings. Finally, a random forest is used to pool the frame-level predictions together with temporal information of the videos in order to predict the overall video quality.
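The final phase, pooling frame-level predictions into a single video-level score with a random forest, can be sketched as follows. This is a minimal illustration with synthetic data and hypothetical temporal features (mean, spread, percentiles, frame-to-frame change); it is not the authors' implementation and omits the CNN stages entirely:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def temporal_features(frame_scores):
    """Pool per-frame quality predictions into a fixed-length feature vector.

    Feature choices here are illustrative stand-ins, not the paper's set.
    """
    s = np.asarray(frame_scores, dtype=float)
    return np.array([
        s.mean(), s.std(), s.min(), s.max(),
        np.percentile(s, 25), np.percentile(s, 75),
        np.abs(np.diff(s)).mean(),  # mean frame-to-frame change (temporal info)
    ])

# Synthetic stand-in data: 50 videos, 120 frame-level scores each (1-5 scale),
# with a noisy overall quality label per video.
videos = [rng.uniform(1, 5, size=120) for _ in range(50)]
X = np.stack([temporal_features(v) for v in videos])
y = np.array([v.mean() + rng.normal(0, 0.1) for v in videos])

# Random forest maps the pooled temporal features to an overall quality score.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
prediction = model.predict(X[:1])
```

In this sketch, each video is reduced to a fixed-length feature vector so that videos of different lengths can be handled by the same regressor; the frame-difference feature is one simple way to expose temporal variation to the pooling model.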