Congratulations to Yueru Chen for passing her Qualifying Exam
Congratulations to Yueru Chen for passing her Qualifying Exam on Jan. 23, 2019! The title of her Ph.D. thesis proposal is “OBJECT CLASSIFICATION BASED ON NEURAL-NETWORK-INSPIRED IMAGE TRANSFORMS”. Her Qualifying Exam Committee members are Jay Kuo (Chair), Antonio Ortega, Shri Narayanan, Keith Chugg, and Ulrich Neumann (Outside Member).
Abstract of thesis proposal:
Convolutional neural networks (CNNs) have recently demonstrated impressive performance in image classification and have changed the way feature extractors are built: from careful handcrafted design to automatic deep learning from a large labeled dataset. However, the great majority of the current CNN literature is application-oriented, and there is no clear understanding or theoretical foundation that explains the outstanding performance or indicates how to improve it. In this thesis proposal, we focus on solving the image classification problem with two neural-network-inspired transforms: the Saak (subspace approximation with augmented kernels) transform and the Saab (subspace approximation with adjusted bias) transform.
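To make the idea concrete, below is a minimal single-stage sketch of a Saab-style transform: a constant DC kernel, PCA-derived AC kernels, and a bias chosen large enough to keep every response nonnegative. The function names and the simple max-norm bias rule are illustrative assumptions, not the proposal's actual implementation.

```python
import numpy as np

def saab_fit(patches, num_kernels):
    """Fit one Saab-style stage from flattened patches of shape (n, d)."""
    d = patches.shape[1]
    # DC kernel: unit-norm constant vector capturing the patch mean.
    dc = np.ones((1, d)) / np.sqrt(d)
    dc_resp = patches @ dc.T
    residual = patches - dc_resp @ dc
    # AC kernels: top principal components of the mean-removed residual.
    residual = residual - residual.mean(axis=0)
    _, _, vt = np.linalg.svd(residual, full_matrices=False)
    ac = vt[:num_kernels - 1]
    kernels = np.vstack([dc, ac])
    # Since all kernels are unit-norm, a bias equal to the largest patch
    # norm guarantees every response below is nonnegative.
    bias = np.max(np.linalg.norm(patches, axis=1))
    return kernels, bias

def saab_transform(patches, kernels, bias):
    """Project patches onto the kernels and shift by the shared bias."""
    return patches @ kernels.T + bias
```

The nonnegativity bias is what lets the transform skip the ReLU used in conventional CNN layers; stacking several such stages yields a feedforward, training-free feature extractor.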
Based on the lossy Saak transform, we first propose an efficient, scalable, and robust approach to the handwritten digit recognition problem. We conduct a comparative study of the LeNet-5 and the Saak-transform-based solutions in terms of scalability and robustness, as well as the efficiency of the lossless and lossy Saak transforms at a comparable accuracy level. We also develop an ensemble method that fuses the output decision vectors of Saab-transform-based decision systems (i.e., the FF-CNN model) to solve the image classification problem. To enhance the performance of the ensemble system, it is critical to increase the diversity of the FF-CNN models. To achieve this objective, we introduce diversity through three strategies: 1) different parameter settings in the convolutional layers, 2) flexible feature subsets fed into the fully-connected (FC) layers, and 3) multiple image embeddings of the same input source. Furthermore, we partition [...]
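The fusion step above can be sketched in a few lines: each diverse FF-CNN variant emits a soft decision vector per sample, and a simple average followed by an argmax yields the ensemble label. The averaging rule and the function name are illustrative assumptions; the proposal may fuse decisions differently.

```python
import numpy as np

def fuse_decisions(decision_vectors):
    """Fuse soft outputs from an ensemble of classifiers.

    decision_vectors: list of arrays of shape (n_samples, n_classes),
    one per FF-CNN variant (different conv settings, feature subsets,
    or input embeddings).
    """
    stacked = np.stack(decision_vectors)   # (n_models, n_samples, n_classes)
    fused = stacked.mean(axis=0)           # simple average fusion
    return fused.argmax(axis=1)            # final predicted class labels
```

Average fusion benefits directly from model diversity: errors that are uncorrelated across the variants tend to cancel, which is why the three diversity strategies matter.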