Professor C.-C. Jay Kuo gave a keynote speech at the IEEE Conference on Intelligent Signal Processing and Communication Systems, held in Xiamen, China, on November 7th. The title of his talk was "Why Deep Learning Networks Work So Well?" The abstract of his talk is given below.
"Deep learning networks, including convolutional and recurrent neural networks (CNNs and RNNs), provide a powerful tool for image, video, and speech processing and understanding today. However, their superior performance has not been well understood. In this talk, I will demystify CNNs. To begin with, I will describe the architectural evolution of networks across three generations: first, the McCulloch and Pitts (M-P) neuron model and simple networks (1940-1980); second, the artificial neural network (ANN) (1980-2000); and third, the modern CNN (2000-present). The differences among these three generations will be clearly explained. Next, the theoretical foundations of CNNs have been studied from the approximation, optimization, and signal representation viewpoints, and I will present the main results from the signal processing viewpoint. A good theoretical understanding of deep learning networks provides valuable insights into the past, present, and future of their research and applications."
Understanding CNNs has been one of the main research activities at MCL over the last three to four years. Several PhD students and postdocs have contributed to this topic, including Hao Xu and Yueru Chen.