CNNs have demonstrated effectiveness in many applications, yet few efforts have been made to understand them. To better explain the behaviors of convolutional neural networks (CNNs), we adopt an experimental methodology with simple datasets and networks in this research. Our study proceeds in three steps: 1) design a sequence of experiments; 2) observe and analyze network behaviors; and 3) present conjectures as lessons learned from the study. In particular, we wish to examine behaviors under limited resources, namely a limited amount of labeled data and a limited network size. First, we examine the effect of limited labeled data. Semi-supervised learning addresses the case where limited labeled data and abundant unlabeled data are available; co-training is one such technique, and we focus on how CNNs behave under co-training. Second, to facilitate analysis of the roles of individual layers, we adopt a very simple LeNet-5-like network in our experiments, adjust the number of filters in each layer, and analyze the effect. In particular, we wish to show how differently networks with limited resources (i.e., a very small number of filters) and networks with rich resources behave in the following four aspects of CNNs:

  1. Scalability: How does the network respond to datasets of different sizes?
  2. Non-convexity: Is the performance of a network stable against different initializations of the network parameters?
  3. Overfitting: Is there a large gap between training and test accuracies?
  4. Robustness: Is the classification result sensitive to small perturbations of the input?
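To make the setup concrete, the following is a minimal sketch (in PyTorch) of a LeNet-5-like network in which the number of filters in each layer is exposed as a constructor argument. The specific widths (`c1`, `c2`, `fc`) and the choice of ReLU/max-pooling are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class LeNet5Like(nn.Module):
    """A LeNet-5-like CNN with adjustable filter counts per layer
    (hypothetical sketch; the exact architecture studied may differ)."""
    def __init__(self, c1=6, c2=16, fc=120, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, c1, kernel_size=5),   # 32x32 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                   # 28x28 -> 14x14
            nn.Conv2d(c1, c2, kernel_size=5),  # 14x14 -> 10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                   # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(c2 * 5 * 5, fc),
            nn.ReLU(),
            nn.Linear(fc, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Resource-scarce vs. resource-rich variants of the same architecture
scarce = LeNet5Like(c1=2, c2=4, fc=16)
rich = LeNet5Like(c1=32, c2=64, fc=256)
x = torch.randn(1, 1, 32, 32)  # one grayscale 32x32 input
print(scarce(x).shape, rich(x).shape)
```

Because only the channel widths change, the two variants can be trained and compared under an identical protocol, which is what makes the four aspects above directly comparable across resource regimes.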


An important contribution of our work is the investigation into resource-scarce networks. Most prior works on CNNs adopt networks with very rich resources; in our work, we also examine how networks behave under very limited resources. We hope our observations on resource-scarce networks can inspire research on reducing the size of convolutional neural networks, which in turn reduces computational cost.
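As a rough illustration of why shrinking filter counts reduces computational cost, a convolutional layer's parameter count scales with the product of its input and output channel counts. The widths below are illustrative (LeNet-5's original 6/16 filters versus a hypothetical scarce 2/4 variant):

```python
def conv_params(c_in, c_out, k=5):
    """Parameter count (weights + biases) of a k x k convolutional layer."""
    return c_out * (c_in * k * k + 1)

# Two conv layers at LeNet-5-like widths vs. a resource-scarce variant
rich = conv_params(1, 6) + conv_params(6, 16)    # 156 + 2416 = 2572
scarce = conv_params(1, 2) + conv_params(2, 4)   # 52 + 204 = 256
print(rich, scarce)  # prints 2572 256
```

Cutting each layer's width to roughly a third thus shrinks the convolutional parameter count by about an order of magnitude, since both factors of the product shrink together.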


Image source:
LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
Kuo, C. C. J., Zhang, M., Li, S., Duan, J., & Chen, Y. (2019). Interpretable convolutional neural networks via feedforward design. Journal of Visual Communication and Image Representation, 60, 346-359.