Deep Opinion-Unaware Blind Image Quality Assessment by Learning and Adapting from Multiple Annotators

Zhihua Wang, Xuelin Liu, Jiebin Yan, Jie Wen, Wei Wang, Chao Huang

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 2036-2044. https://doi.org/10.24963/ijcai.2025/227

Existing deep neural network (DNN)-based blind image quality assessment (BIQA) methods rely primarily on human-rated datasets for training. However, collecting human labels is extremely time-consuming and labor-intensive, posing a significant bottleneck for practical applications. To address this challenge, we propose a Deep opinion-Unaware BIQA model that learns and adapts from Multiple Annotators, termed DUBMA, thereby eliminating the need for human annotations. Specifically, we first generate a large-scale set of distorted image pairs and then assign relative quality rankings using existing full-reference IQA (FR-IQA) models as annotators. The resulting dataset is then used to train DUBMA. Because synthetic and real-world distortions differ inherently, a domain shift arises when the model is applied to real images. To mitigate this, we propose an outlier-robust unsupervised domain adaptation approach based on optimal transport, which effectively narrows the gap between the synthetic and real-world distortion domains, thereby improving the model's adaptability and overall performance. Extensive experiments show that DUBMA outperforms existing opinion-unaware BIQA methods in prediction accuracy across multiple datasets.
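To make the annotation step concrete, the following is a minimal sketch (not the authors' code) of how multiple FR-IQA models could act as "annotators" that pseudo-label a distorted image pair with a relative quality ranking. The function names and the toy negative-MSE scorer are hypothetical placeholders; in practice one would plug in real FR-IQA models.

from typing import Callable, List
import numpy as np

# An FR-IQA "annotator" maps (reference, distorted) to a quality score,
# where higher means better quality. This type alias is illustrative.
Annotator = Callable[[np.ndarray, np.ndarray], float]

def pseudo_label_pair(
    reference: np.ndarray,
    distorted_a: np.ndarray,
    distorted_b: np.ndarray,
    annotators: List[Annotator],
) -> int:
    """Return 1 if the annotators collectively rank A above B, else 0.

    Each FR-IQA model votes on which distorted image is closer in quality
    to the shared reference; the majority vote becomes the binary ranking
    label used to train the quality predictor without human opinions.
    """
    votes = sum(
        int(f(reference, distorted_a) > f(reference, distorted_b))
        for f in annotators
    )
    return int(votes > len(annotators) / 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((64, 64))
    # Two synthetic distortions of the same reference (toy example).
    img_a = ref + 0.05 * rng.standard_normal(ref.shape)  # mild noise
    img_b = ref + 0.30 * rng.standard_normal(ref.shape)  # heavy noise
    # A toy annotator: negative MSE, so higher means better quality.
    neg_mse: Annotator = lambda r, x: -float(np.mean((r - x) ** 2))
    print(pseudo_label_pair(ref, img_a, img_b, [neg_mse]))  # -> 1

A learning-to-rank objective (e.g., a pairwise hinge or cross-entropy loss) can then be trained directly on these binary labels, which is what makes the pipeline opinion-unaware.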
Keywords:
Computer Vision: CV: Low-level Vision
Computer Vision: CV: Computational photography
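The abstract also mentions an outlier-robust unsupervised domain adaptation step based on optimal transport. Below is a minimal sketch of the underlying idea: measuring the gap between synthetic-distortion and real-distortion feature batches with an entropy-regularized optimal transport (Sinkhorn) cost, which can serve as a domain-alignment objective. This vanilla formulation is an assumption for illustration only; the paper describes an outlier-robust variant, and all names here are hypothetical.

import numpy as np

def sinkhorn_ot_cost(
    source: np.ndarray,    # (n, d) features from synthetic distortions
    target: np.ndarray,    # (m, d) features from real-world distortions
    epsilon: float = 0.1,  # entropic regularization strength
    n_iters: int = 200,
) -> float:
    """Entropy-regularized OT cost between two empirical feature clouds."""
    n, m = source.shape[0], target.shape[0]
    # Pairwise squared Euclidean cost matrix, normalized for stability.
    cost = ((source[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    cost = cost / cost.max()
    kernel = np.exp(-cost / epsilon)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):  # Sinkhorn fixed-point updates
        u = a / (kernel @ v)
        v = b / (kernel.T @ u)
    plan = u[:, None] * kernel * v[None, :]  # transport plan
    return float((plan * cost).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    synth = rng.standard_normal((32, 16))
    real = rng.standard_normal((32, 16)) + 0.5  # shifted domain
    print(f"OT alignment cost: {sinkhorn_ot_cost(synth, real):.4f}")

In a training loop, a cost of this kind would be minimized jointly with the ranking loss so that features of synthetic and real distortions become indistinguishable; robustness to outliers would additionally require relaxing the marginal constraints (e.g., unbalanced or partial OT), which this sketch omits.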