Objective image quality assessment (IQA) plays a crucial role in various multimedia applications and is generally categorized into three distinct types: Full-Reference IQA (FR-IQA), Reduced-Reference IQA (RR-IQA), and No-Reference IQA (NR-IQA). FR-IQA evaluates quality by directly comparing a distorted image with its original reference image. RR-IQA, in contrast, relies on partial information from reference images to assess the quality of target images. NR-IQA, also known as blind image quality assessment (BIQA), estimates perceptual quality without any reference and is therefore indispensable in situations where reference images are unavailable, such as at the receiver's end or for user-generated content on social media platforms and mobile applications. The increasing prevalence of such content has driven a significant rise in the demand for BIQA.

The challenge of BIQA lies in handling a wide variety of image content and multiple types of distortions. Although many BIQA methods leverage deep neural networks (DNNs) and incorporate saliency detectors to improve performance, their large model sizes pose significant limitations for deployment on resource-constrained devices.

To overcome these challenges, we propose a novel non-deep-learning BIQA method, termed Green Saliency-guided Blind Image Quality Assessment (GSBIQA). GSBIQA is distinguished by its compact model size, low computational requirements, and strong performance. The method integrates a lightweight saliency detection module that guides the data cropping and decision ensemble processes and generates features that mimic the human visual attention mechanism. The GSBIQA framework is composed of five key processes: 1) green saliency detection, 2) saliency-guided data cropping, 3) GreenBIQA feature extraction, 4) local patch prediction, and 5) saliency-guided decision ensemble. Experimental results demonstrate that GSBIQA achieves performance comparable to state-of-the-art deep-learning-based methods while requiring significantly fewer computational resources, making it an efficient and effective solution for BIQA tasks in resource-constrained environments.
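To make the five-stage flow concrete, the following Python sketch chains the stages end to end. It is only an illustration of the data flow under simplifying assumptions: every function body is a hypothetical stand-in (luminance contrast for saliency, channel statistics for features, a placeholder linear regressor for patch prediction), not the actual GSBIQA modules.

```python
import numpy as np


def green_saliency_detection(image):
    """Stage 1 (stand-in): coarse saliency map from luminance contrast
    against the global mean; the real module is a lightweight green
    saliency detector."""
    gray = image.mean(axis=2)
    saliency = np.abs(gray - gray.mean())
    return saliency / (saliency.max() + 1e-8)


def saliency_guided_cropping(image, saliency, num_crops=4, crop_size=64):
    """Stage 2 (stand-in): crop patches centered on the most salient pixels
    and keep their saliency values as weights for the later ensemble."""
    h, w = saliency.shape
    half = crop_size // 2
    top_idx = np.argsort(saliency.ravel())[::-1][:num_crops]
    crops, weights = [], []
    for idx in top_idx:
        y, x = divmod(idx, w)
        y = int(np.clip(y, half, h - half))
        x = int(np.clip(x, half, w - half))
        crops.append(image[y - half:y + half, x - half:x + half])
        weights.append(saliency[y, x])
    return crops, np.asarray(weights, dtype=float)


def extract_features(patch):
    """Stage 3 (stand-in): simple channel statistics in place of the
    GreenBIQA feature extractor."""
    return np.array([patch.mean(), patch.std(),
                     *patch.reshape(-1, 3).mean(axis=0)])


def predict_patch_quality(features):
    """Stage 4 (stand-in): fixed linear projection as a placeholder for the
    local quality regressor."""
    rng = np.random.default_rng(0)          # placeholder weights
    w = rng.normal(size=features.shape)
    return float(features @ w)


def saliency_guided_ensemble(scores, weights):
    """Stage 5 (stand-in): fuse local predictions, weighting more salient
    patches more heavily."""
    weights = weights / (weights.sum() + 1e-8)
    return float(np.dot(scores, weights))


def gsbiqa_pipeline(image):
    """Chain the five stages into a single image-level quality score."""
    saliency = green_saliency_detection(image)
    crops, weights = saliency_guided_cropping(image, saliency)
    scores = np.array([predict_patch_quality(extract_features(c)) for c in crops])
    return saliency_guided_ensemble(scores, weights)


if __name__ == "__main__":
    demo = np.random.rand(256, 256, 3)      # synthetic RGB image
    print("predicted quality score:", gsbiqa_pipeline(demo))
```

The sketch highlights the role of saliency at both ends of the pipeline: it selects which local patches are scored and then weights their predictions during fusion, which is the mechanism GSBIQA uses to approximate human attention without a deep network.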