MCL Research on Active Learning
Deep learning has shown its effectiveness in various computer vision tasks. However, deep learning approaches usually need a large amount of labeled data. Active learning helps reduce the labeling effort by choosing the most informative samples to label, achieving comparable performance with less labeled data.
There are two major types of active learning strategies: uncertainty-based and diversity-based.
The core idea of uncertainty-based methods is to label the samples on which the existing model, trained on the current labeled set, is most uncertain. For example, an image predicted to be a cat with 50 percent confidence is empirically considered more valuable than one predicted with 99 percent confidence, since the former carries greater uncertainty. Besides uncertainty metrics from information theory such as entropy, Beluch et al. [1] propose using an ensemble to estimate the uncertainty of unlabeled images and achieve good results on the ImageNet dataset.
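The entropy-based selection mentioned above can be sketched in a few lines. The function below is a minimal illustration (the array of softmax outputs is a hypothetical input, not part of any cited method): it scores each unlabeled sample by predictive entropy and returns the indices of the most uncertain ones.

```python
import numpy as np

def entropy_selection(probs, k):
    """Select the k most uncertain samples by predictive entropy.

    probs: (n_samples, n_classes) softmax outputs of the current model.
    """
    eps = 1e-12  # avoid log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    # Indices of the k samples with the highest entropy (most uncertain).
    return np.argsort(-entropy)[:k]

# Toy example: a 50/50 prediction is more uncertain than a 99/1 one.
probs = np.array([[0.50, 0.50],
                  [0.99, 0.01],
                  [0.70, 0.30]])
print(entropy_selection(probs, 2))  # -> [0 2]
```

The selected samples would then be sent to an annotator, added to the labeled set, and the model retrained before the next selection round.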
In contrast, diversity-based methods rely on the assumption that a more diverse set of images chosen as the training set leads to better performance. Sener et al. [2] formalize active learning as a core-set problem and achieve competitive performance on the CIFAR-10 dataset; mixed-integer programming is used to solve their objective function.
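A common building block for core-set style selection is the greedy k-center rule: repeatedly pick the unlabeled point farthest from all points chosen so far. The sketch below assumes feature vectors extracted from the trained model; it is a simplified greedy approximation, not the exact mixed-integer formulation of [2].

```python
import numpy as np

def greedy_k_center(features, labeled_idx, k):
    """Greedy k-center selection over model feature vectors.

    features: (n, d) array of per-sample features (assumed given).
    labeled_idx: indices of already-labeled samples (the initial centers).
    k: number of new samples to select for labeling.
    """
    # Distance of every point to its nearest labeled center.
    diffs = features[:, None, :] - features[None, labeled_idx, :]
    min_dist = np.min(np.linalg.norm(diffs, axis=2), axis=1)
    selected = []
    for _ in range(k):
        # Pick the point farthest from all current centers (most "novel").
        idx = int(np.argmax(min_dist))
        selected.append(idx)
        # Update nearest-center distances with the new center.
        dist_new = np.linalg.norm(features - features[idx], axis=1)
        min_dist = np.minimum(min_dist, dist_new)
    return selected
```

Because each new pick maximizes the distance to the current centers, the selected batch spreads out over the feature space rather than clustering near the decision boundary, which is exactly the diversity intuition described above.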
Our current research focuses on balancing the two factors (uncertainty and diversity) in an explainable way.
References:
[1] Beluch, William H., et al. “The power of ensembles for active learning in image classification.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
[2] Sener, Ozan, and Silvio Savarese. “Active learning for convolutional neural networks: A core-set approach.” International Conference on Learning Representations. 2018.
Author: Yeji Shen