One fundamental problem of the convolutional neural network (CNN) is catastrophic forgetting, which occurs when new object classes and data are added while the original dataset is no longer available. Training the network using only the new dataset degrades its performance on the old dataset. To overcome this problem, we propose an expanded network architecture, called ExpandNet, to enhance the incremental learning capability of CNNs. Our solution keeps the filters of the original network fixed while adding new trainable filters to the convolutional layers as well as the fully connected layers.
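
A minimal PyTorch sketch of this idea, under our own assumptions, is shown below. The class name `ExpandedConv`, the parameter `n_new_filters`, and the layer sizes are illustrative choices, not taken from the paper; the sketch only conveys the principle that the original filters are frozen while the newly added filters remain trainable on the new dataset.

```python
import torch
import torch.nn as nn

class ExpandedConv(nn.Module):
    """A convolutional layer expanded with extra trainable filters.

    The original filters are frozen so that knowledge of the old classes
    is preserved; only the newly added filters are updated when training
    on the new dataset. (Illustrative sketch; sizes are hypothetical.)
    """
    def __init__(self, original_conv: nn.Conv2d, n_new_filters: int):
        super().__init__()
        self.original = original_conv
        for p in self.original.parameters():
            p.requires_grad = False  # keep the old filters fixed
        # New trainable filters operating on the same input.
        self.expansion = nn.Conv2d(
            original_conv.in_channels,
            n_new_filters,
            kernel_size=original_conv.kernel_size,
            padding=original_conv.padding,
        )

    def forward(self, x):
        # Concatenate old and new feature maps along the channel axis.
        return torch.cat([self.original(x), self.expansion(x)], dim=1)


# Usage: expand a pretrained 3x3 convolutional layer with 16 extra filters.
old_conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)  # pretrained on old classes
layer = ExpandedConv(old_conv, n_new_filters=16)
out = layer(torch.randn(1, 3, 32, 32))                 # shape: (1, 80, 32, 32)
```

The fully connected layers can be expanded in the same spirit, with additional output units for the new classes trained on the new data only.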

The proposed architecture does not require any information from the original dataset and is trained using the new dataset only. Extensive evaluations on the CIFAR-10 and CIFAR-100 datasets show that the proposed method exhibits a slower forgetting rate than several existing incremental learning networks.

As a further extension, techniques such as pruning can be applied to reduce the size of the proposed ExpandNet. In addition, since the Saak transform was recently proposed in [1], it is worthwhile to compare the Saak-transform-based approach with the ExpandNet approach on the new dataset.

Reference:

[1] C.-C. Jay Kuo and Yueru Chen, "On data-driven Saak transform," arXiv preprint arXiv:1710.04176, 2017.

 

Image credits:

1. Image showing an illustration of the incremental learning problem.

2. Image showing the network architecture of the proposed ExpandNet, where the new trainable filters added to the convolutional layers and FC layers are shown in orange, while the original filters are shown in blue.

 

By Shanshan Cai, an alumna of MCL