Boosting No-Reference Super-Resolution Image Quality Assessment with Knowledge Distillation and Extension
Haiyu Zhang (Northwestern Polytechnical University); Shaolin Su (Northwestern Polytechnical University); Yu Zhu (Northwestern Polytechnical University); Jinqiu Sun (Northwestern Polytechnical University); Yanning Zhang (Northwestern Polytechnical University)
Deep learning (DL) based image super-resolution (SR) techniques have been extensively investigated in recent years. However, studies dedicated to SR image quality assessment (SR-IQA) remain underdeveloped, and the task becomes even more difficult when pristine high-resolution (HR) images are unavailable as references. Due to this challenge, existing widely used no-reference (NR) SR-IQA metrics (e.g., PI, NIQE, and Ma) still fall short of the practical requirement of providing accurate estimations that align well with human mean opinion scores (MOS). To this end, we propose a novel Knowledge Extension Super-Resolution Image Quality Assessment (KE-SR-IQA) framework that predicts SR image quality by leveraging a semi-supervised knowledge distillation (KD) strategy. Concretely, we first employ a well-trained full-reference (FR) SR-IQA model as the teacher; we then perform knowledge extension (KE) with additional pseudo-labeled data to distill an NR student and thereby improve prediction accuracy. Extensive experiments on several benchmarks validate the effectiveness of our approach.
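The sketch below illustrates the general semi-supervised KD idea described in the abstract: a frozen FR teacher scores unlabeled SR/HR pairs to produce pseudo-labels, and an NR student is trained on MOS-labeled data plus those pseudo-labels. The toy architectures, loss weighting (`alpha`), tensor shapes, and names (`FRTeacher`, `NRStudent`) are illustrative assumptions, not the authors' KE-SR-IQA implementation.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the FR teacher (takes SR + HR) and NR student (SR only).
class FRTeacher(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
    def forward(self, sr, hr):
        return self.net(torch.cat([sr, hr], dim=1))

class NRStudent(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
    def forward(self, sr):
        return self.net(sr)

teacher, student = FRTeacher().eval(), NRStudent()  # teacher assumed pre-trained and frozen
opt = torch.optim.Adam(student.parameters(), lr=1e-4)
l1 = nn.L1Loss()
alpha = 0.5  # assumed weight balancing MOS supervision vs. teacher pseudo-labels

# Dummy batches: labeled (SR, HR, MOS) and extra unlabeled (SR, HR) pairs.
sr_l, hr_l, mos = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64), torch.rand(4, 1)
sr_u, hr_u = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)

with torch.no_grad():
    pseudo = teacher(sr_u, hr_u)  # FR teacher pseudo-labels the extra data

# Student is trained only on SR images (no-reference), supervised by MOS and pseudo-labels.
loss = l1(student(sr_l), mos) + alpha * l1(student(sr_u), pseudo)
opt.zero_grad()
loss.backward()
opt.step()
```

In practice the pseudo-labeled pool would come from additional SR datasets and the loop would iterate over full data loaders; the single-step example above only shows how the two supervision signals might be combined.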