Despite prolific work on evaluating generative models, little research has been done on the quality evaluation of an individual generated sample. To address this problem, a lightweight generated sample quality evaluation (LGSQE) method is proposed in this work. In the training stage of LGSQE, a binary classifier is trained on real and synthetic samples, where real and synthetic data are labeled 0 and 1, respectively. In the inference stage, the classifier assigns a soft label (ranging from 0 to 1) to each generated sample. The value of the soft label indicates the quality level; namely, the quality is better if the soft label is closer to 0. LGSQE can serve as a post-processing module for quality control. Furthermore, LGSQE can be used to evaluate the performance of generative models with metrics such as accuracy, AUC, precision, and recall by aggregating sample-level quality scores. Experiments are conducted on CIFAR-10 and MNIST to demonstrate that LGSQE preserves the same performance rank order as that predicted by the Fréchet Inception Distance (FID), but with significantly lower complexity.

Fig. 1 shows the pipeline of the proposed method. The LGSQE method consists of three cascaded modules:

Module 1: Representation Learning. Effective local and global representations of images are learned based on the PixelHop framework.

Module 2: Discriminant Feature Test (DFT). DFT selects the most discriminant features for the target task from the large number of representations produced by Module 1 (see the sketch after Module 3).

Module 3: Binary Classification for Evaluation. The real and generated data are partitioned into training and testing sets. A binary classifier is trained on the union of real and generated training samples, which are labeled with “0” and “1”, respectively. The classifier then assigns a soft score to each testing sample as its quality index.
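To make Modules 2 and 3 concrete, the sketch below shows one way the DFT-style feature selection and the binary quality classifier could be wired together. It assumes Module 1 has already produced flattened PixelHop feature vectors (`X_real`, `X_gen`, `X_test` are hypothetical names); the `dft_loss` scoring is a simplified reading of the general discriminant-feature-test idea rather than the authors' exact formulation, and the gradient-boosting classifier is only a stand-in for whatever classifier LGSQE actually uses.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier


def dft_loss(feature, labels, n_bins=16):
    """Approximate DFT score of one feature dimension: the lowest weighted
    binary cross-entropy over candidate split thresholds. Lower loss means
    a more discriminant feature (simplified assumption, not the paper's
    exact definition)."""
    thresholds = np.quantile(feature, np.linspace(0.05, 0.95, n_bins))
    best = np.inf
    for t in thresholds:
        left, right = labels[feature <= t], labels[feature > t]
        if len(left) == 0 or len(right) == 0:
            continue
        loss = 0.0
        for part in (left, right):
            p = np.clip(part.mean(), 1e-6, 1 - 1e-6)  # P(label = 1) in this partition
            ce = -(p * np.log(p) + (1 - p) * np.log(1 - p))
            loss += len(part) / len(labels) * ce
        best = min(best, loss)
    return best


def lgsqe_scores(X_real, X_gen, X_test, n_keep=1000):
    """Train on real (label 0) vs. generated (label 1) feature vectors and
    return soft scores in [0, 1] for the test samples."""
    X = np.vstack([X_real, X_gen])
    y = np.concatenate([np.zeros(len(X_real)), np.ones(len(X_gen))])

    # Module 2: keep the most discriminant feature dimensions.
    losses = np.array([dft_loss(X[:, j], y) for j in range(X.shape[1])])
    keep = np.argsort(losses)[:min(n_keep, X.shape[1])]

    # Module 3: binary classifier; the soft score is P(class "1" | sample).
    clf = GradientBoostingClassifier().fit(X[:, keep], y)
    return clf.predict_proba(X_test[:, keep])[:, 1]
```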

Fig. 2 shows the evaluation of generated samples. LGSQE assigns each generated sample a soft score (the probability of the sample belonging to class “1”) as its quality index. The soft-score histograms for the generator models Diffusion StyleGAN2 and Styleformer on CIFAR-10 are shown in Fig. 2 (a) and (b), respectively.
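As an illustration of how the sample-level soft scores could be aggregated into model-level metrics and visualized in Fig. 2-style histograms, here is a short sketch; `scores_real` and `scores_gen` are assumed to be the outputs of the previous sketch for held-out real and generated samples, and the helper name is hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score, accuracy_score


def summarize_model(scores_real, scores_gen):
    """Aggregate LGSQE sample-level soft scores into model-level metrics
    and plot a histogram of generated-sample scores (hypothetical helper)."""
    scores = np.concatenate([scores_real, scores_gen])
    labels = np.concatenate([np.zeros(len(scores_real)),
                             np.ones(len(scores_gen))])

    # Model-level metrics obtained by aggregating per-sample scores.
    auc = roc_auc_score(labels, scores)
    acc = accuracy_score(labels, scores > 0.5)
    print(f"AUC = {auc:.3f}, accuracy = {acc:.3f}")

    # Histogram of generated-sample scores: mass near 0 means the
    # classifier finds those generated samples close to real data.
    plt.hist(scores_gen, bins=50, range=(0, 1))
    plt.xlabel("LGSQE soft score")
    plt.ylabel("Number of generated samples")
    plt.show()
```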


— By Ganning Zhao