In this work, we address a major challenge: most current image quality metrics (IQMs) do not generalize across different image contents, in particular to both natural scene (NS) images and screen content (SC) images. In contrast to existing works, this paper exploits on-line and off-line data to propose a unified no-reference (NR) IQM that applies not only to different distortion types and intensities but also to various image contents, including classical NS images and increasingly prevalent SC images. The proposed NR IQM comprises two data-driven learning processes following feature extraction, which is based on scene statistic models, the free-energy brain principle, and characteristics of the human visual system (HVS). In the first process, scene statistic models are combined with an image retrieval technique, using on-line and off-line training instances, to derive a novel loose classifier that retrieves clean images and helps infer the image content. In the second process, features extracted by incorporating the inferred image content, free energy, and low-level perceptual characteristics of the HVS are learned from off-line training samples to analyze distortion types and intensities and thereby predict the image quality. Both processes rely on a large quantity of training data, far exceeding the number of images used for performance validation, which makes our model's performance more reliable. Extensive experiments validate that the proposed blind IQM can simultaneously infer the quality of NS and SC images, attaining superior performance compared with popular and state-of-the-art IQMs on subjective NS and SC image quality databases. The source code of our model will be released with the publication of the paper at https://kegu.netlify.com.
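To make the two-process architecture concrete, the following is a minimal Python sketch of such a pipeline: a first-stage content classifier (NS vs. SC) followed by a content-aware quality regressor. The feature choice (moments of mean-subtracted contrast-normalized coefficients), the SVC/SVR models, and the synthetic training data are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical two-stage NR-IQA sketch, assuming NSS-style features and
# SVC/SVR learners; the paper's actual features and models may differ.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import skew, kurtosis
from sklearn.svm import SVC, SVR


def nss_features(img: np.ndarray) -> np.ndarray:
    """Simple scene-statistics features: moments of MSCN coefficients."""
    mu = gaussian_filter(img, sigma=7 / 6)
    var = np.abs(gaussian_filter(img * img, sigma=7 / 6) - mu * mu)
    mscn = (img - mu) / (np.sqrt(var) + 1.0)  # mean-subtracted, contrast-normalized
    flat = mscn.ravel()
    return np.array([flat.mean(), flat.var(), skew(flat), kurtosis(flat)])


# --- Process 1: loose content classifier (NS vs. SC) ---------------------
# Trained on labelled instances; at test time it infers the image content.
rng = np.random.default_rng(0)
train_imgs = [rng.random((64, 64)) for _ in range(40)]   # placeholder images
content_labels = rng.integers(0, 2, size=40)             # 0 = NS, 1 = SC
X = np.stack([nss_features(im) for im in train_imgs])
content_clf = SVC().fit(X, content_labels)

# --- Process 2: content-aware quality regressor --------------------------
# The inferred content label is appended to the perceptual features, and
# a regressor maps them to a subjective quality score (MOS).
mos = rng.random(40) * 100                                # placeholder scores
X_q = np.hstack([X, content_labels.reshape(-1, 1)])
quality_reg = SVR().fit(X_q, mos)

# --- Inference on a new image ---------------------------------------------
test_img = rng.random((64, 64))
f = nss_features(test_img)
content = content_clf.predict(f.reshape(1, -1))                      # stage 1
score = quality_reg.predict(np.hstack([f, content]).reshape(1, -1))  # stage 2
print(f"inferred content={int(content[0])}, predicted quality={score[0]:.1f}")
```

In this sketch, feeding the stage-1 content decision into the stage-2 regressor mirrors the paper's key idea that quality prediction should be conditioned on the inferred image content.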