
Welcome MCL New Member Ganning Zhao

Could you briefly introduce yourself and your research interests?

My name is Ganning Zhao, and I was born in Weifang, China. This is my first semester at USC, where I am pursuing a master’s degree in Electrical Engineering. I graduated from Guangdong University of Technology with a major in automation. During my undergraduate studies, I worked with friends on a project on audio signal processing using convolutional neural networks, and I have been interested in signal processing and machine learning ever since. My current research interests are image processing and computer vision.

What is your impression about MCL and USC?

I like the USC campus because the buildings are beautiful. In particular, USC has many libraries, and many of them are exquisite. People in MCL are intelligent and work very hard. I am also impressed that they are very kind and always willing to help each other. Professor Kuo is also very nice and cares about everyone in the lab. I like the atmosphere here in MCL.

What is your future expectation and plan in MCL?

My research topic this summer is texture synthesis, and I hope to do a good job on it. More importantly, because everyone in MCL is intelligent, I hope to learn a lot from them through our discussions and improve my research ability. I also want to make more friends here in MCL.

By |June 28th, 2020|News|

Welcome MCL New Member Jiamian Wang

Could you briefly introduce yourself and your research interests?

I’m Jiamian Wang, a graduate student in the USC Ming Hsieh Department of Electrical and Computer Engineering. I received my bachelor’s degree from Tianjin University, China. My research interests are computer vision and image processing. I have participated in projects on image semantic segmentation, outlier detection using CNNs, and image generation models. I’m interested in exploring the mathematical logic behind different algorithms.

What is your impression about MCL and USC?

USC offers many high-quality courses, from which I have gained a solid foundation and rich experience. The community also provides many opportunities and platforms through which we share information and communicate with each other effectively. That is how I came to know MCL, where a new theory of computer vision has been proposed and explored. I’m honored to join MCL and work with the excellent students and professors here.

What is your future expectation and plan in MCL?

Computer vision and image processing have been my long-time interests, and MCL is an excellent lab specializing in these areas. For this reason, I want to take this chance and devote as much time and effort to my work as I can. Hopefully, this summer internship will be a good start; in the future, I hope to become a full member of MCL as a PhD student.

By |June 21st, 2020|News|

Welcome MCL New Member Zheng Wen

Could you briefly introduce yourself and your research interests?

My name is Zheng Wen, and I’m currently pursuing my Master’s degree in Electrical Engineering at USC. I received my Bachelor’s degree from Beihang University, Beijing, China. I’m very curious about math and programming, and I find work related to matrices and machine learning interesting, so I’m very glad that Professor Kuo gave me the opportunity to join MCL and dive deeper into image processing, computer vision, and machine learning.

What is your impression about MCL and USC?

This is my first year at USC. I find the campus very beautiful, even though it is somewhat small. Leavey Library is a good place to study when we are struggling with final exams at the end of a semester. I really enjoy the American Chinese cuisine offered by the canteen. MCL is a big family. I got to know the members of MCL through EE569. Professor Kuo is an expert in multimedia, and his understanding of deep learning impressed me a lot. The TAs are very experienced in image processing and willing to help us.

What is your future expectation and plan in MCL?

In MCL, I want to work on a project in image forensics with Yao’s help, learn more about image processing and machine learning, sharpen my programming skills, and contribute to the field. I will try my best to get involved in MCL and make friends with everyone.

By |June 14th, 2020|News|

Welcome MCL New Member Zhiwei Deng

Could you briefly introduce yourself and your research interests?

My name is Zhiwei Deng, and I am a first-year MSEE student at USC. I come from Anhui, China, and I received my bachelor’s degree from Shanghai University in 2019. I gained some experience in computer vision and machine learning during my undergraduate years. This summer, I will continue to work in these areas, specifically on 3D point cloud classification and segmentation.

What is your impression about MCL and USC?

USC has a beautiful campus and a rigorous research environment. I really enjoy both the natural views of the campus and the historical red-brick buildings. I think MCL is a lab with creativity and courage: old things are challenged, and new things are created here. Mathematics and logic are quite important for MCL members. I also feel very honored and motivated to work out new things with these excellent people.

What is your future expectation and plan in MCL?

The most important goal of this summer for me is to gain a clearer view of the field and a deeper understanding of certain topics in image processing. I also hope to contribute to some projects during the internship. As for my future career plan, I want to dig deeper into these areas and pursue a higher academic degree.

By |June 7th, 2020|News|

Congratulations to Heming Zhang for passing her defense!

Let us hear what she wants to say about her defense and an abstract of her thesis.

Deep learning techniques use networks with multiple cascaded layers to map inputs to desired outputs. To map the entire input to the desired output, useful information must be extracted through the layers. During the mapping, feature extraction and prediction are performed jointly, so we have no direct control over feature extraction. Consequently, some useful information, especially local information, is discarded in the process.

In this thesis, we specifically study local-aware deep learning techniques from four different aspects: 1) local-aware network architecture; 2) local-aware proposal generation; 3) local-aware region analysis; and 4) local-aware supervision.

Specifically, we design a multi-modal attention mechanism for a generative visual dialogue system, which simultaneously attends to multi-modal inputs and uses the extracted local information to generate dialogue responses. We propose a proposal network for a fast face detection system on mobile devices, which detects salient facial parts and uses them as local cues for detecting entire faces. We extract representative fashion features by analyzing local regions, which contain the local fashion details of human interest. We develop a fashion outfit compatibility learning method, which models each outfit as a graph and learns outfit compatibility using both global and local supervision on the graphs.

I would like to thank Prof. Kuo and all the lab members for their help. I have learned a lot through my PhD journey and I want to share some feelings and experiences. One essential part of the doctoral training is mental training, from which I have become more persistent, self-disciplined and motivated. As this journey may take several years, maintaining a balanced life is very important. I wish the best to all the lab members and [...]

By |January 26th, 2020|News|

MCL Research on Behavior Analysis of Stressed CNNs

CNNs have demonstrated effectiveness in many applications, yet few efforts have been made to understand them. To better explain the behavior of convolutional neural networks (CNNs), we adopt an experimental methodology with simple datasets and networks in this research. Our study includes three steps: 1) design a sequence of experiments; 2) observe and analyze network behaviors; and 3) present conjectures as lessons learned from the study. In particular, we wish to examine network behavior under limited resources, namely a limited amount of labeled data and a limited network size. First, we examine the effect of limited labeled data. Semi-supervised learning deals with the case where limited labeled data and abundant unlabeled data are available, and co-training is one such technique; in this part, we focus on how CNNs behave under co-training. Second, to facilitate easier analysis of the roles of individual layers, we adopt a very simple LeNet-5-like network in our experiments. We adjust the number of filters in each layer and analyze the effect. In particular, we wish to show how networks with limited resources (i.e., with very few filters) and networks with rich resources behave differently in the following four aspects of CNNs:

Scalability: How does the network respond to datasets of different sizes?
Non-convexity: Is the performance of a network stable against different initializations of its parameters?
Overfitting: Is there a large gap between training and test accuracies?
Robustness: Is the classification result sensitive to small perturbations of the input?
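Two of these diagnostics, the overfitting gap and robustness to input perturbation, can be sketched in a few lines. The snippet below is only an illustration of the measurements, not the study's actual setup: it substitutes a toy least-squares classifier for the LeNet-5-like CNN, and the data generator, dimensions, and noise levels are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, d=20):
    # Synthetic two-class data: class means separated along the first axis.
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, d))
    X[:, 0] += 2.0 * y
    return X, y

def fit(X, y):
    # Least-squares fit to one-hot labels (a stand-in for network training).
    Y = np.eye(2)[y]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def acc(W, X, y):
    return float(((X @ W).argmax(axis=1) == y).mean())

Xtr, ytr = make_data(200)    # "limited labeled data"
Xte, yte = make_data(2000)
W = fit(Xtr, ytr)

# Overfitting: gap between training and test accuracy.
gap = acc(W, Xtr, ytr) - acc(W, Xte, yte)

# Robustness: accuracy drop under a small Gaussian input perturbation.
Xp = Xte + 0.05 * rng.normal(size=Xte.shape)
drop = acc(W, Xte, yte) - acc(W, Xp, yte)
```

Scalability and non-convexity follow the same pattern: rerun the fit over training sets of different sizes, or over different random initializations, and compare the resulting accuracies.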

 

An important contribution of our work is the investigation of resource-sparse networks. Most prior work on CNNs has adopted networks with very rich resources. In our work, we also look into how networks behave under [...]

By |January 12th, 2020|News|

MCL Research on Fashion Compatibility Recommendation (Jiali Duan)

In the task of fashion compatibility prediction, the goal is to pick the item from a candidate list that complements a partial outfit in the most appealing manner. Existing fashion compatibility recommendation work comprehends clothing images in a single metric space and lacks a detailed understanding of users’ preferences in different contexts. To address this problem, we propose a novel Metric-Aware Explainable Graph Network (MAEG). In MAEG, we leverage a Latent Semantic Extraction Network (LSEN) to obtain representations of items in the metric-aware latent semantic space. We then develop a graph filtering network and a Pairwise Preference Attention (PPA) module to model the interactions between users’ preferences and contextual information. With MAEG, we can provide recommendations to users as well as explain how each item and factor contributes to the final prediction. Extensive experiments on two large-scale real-world datasets reveal that MAEG not only outperforms state-of-the-art methods but also provides interpretable insights by highlighting the role of semantic attributes and contextual relationships among items.

By |January 5th, 2020|News|

Merry Christmas and Happy New Year

2019 has been a fruitful year for MCL. Some members graduated with impressive research work and began a new chapter of their lives. New students joined the MCL family and discovered the joy of research. MCL members have worked hard on their research and published quality papers in top journals and conferences.

Wish all MCL members a happy new year!

 

Image credits:

Image 1: http://www.sohu.com, resized; Image 2: http://www.sohu.com, resized.

By |December 29th, 2019|News|

Professor Kuo Delivered Invited Lecture at Kyoto University

MCL Director, Professor Kuo, gave an invited speech at Kyoto University on December 19, 2019. The title of his speech was “From Feedforward-Designed Convolutional Neural Networks (FF-CNNs) to Successive Subspace Learning (SSL)”.  Professor Kuo’s visit to Kyoto University was hosted by Professor Tatsuya Kawahara. The lecture was also an event of IEEE SPS Kansai Chapter.

The abstract of his speech is given below. “Given a convolutional neural network (CNN) architecture, its network parameters are typically determined by backpropagation (BP). The underlying mechanism remains a black box despite a large amount of theoretical investigation. In this talk, I will first describe a new interpretable feedforward (FF) design with LeNet-5 as an example. The FF-designed CNN is a data-centric approach that derives network parameters from training data statistics, layer by layer, in a one-pass feedforward manner. To build the convolutional layers, we develop a new signal transform, called the Saab (Subspace approximation with adjusted bias) transform. The bias in the filter weights is chosen to annihilate the nonlinearity of the activation function. To build the fully-connected (FC) layers, we adopt a label-guided linear least-squares regression (LSR) method. To generalize the FF design idea further, we present the notion of “successive subspace learning (SSL)” and a couple of concrete SSL methods for image and point cloud classification. Experimental results are given to demonstrate the competitive performance of the SSL-based systems. Similarities and differences between SSL and deep learning (DL) are compared.”
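A rough illustration of the Saab idea, and only an illustration rather than the actual implementation used in the lab's work: fit a constant DC kernel plus PCA-derived AC kernels to flattened patches, then choose a bias large enough that every filter response is nonnegative, so a subsequent ReLU acts as the identity. The patch size and kernel count below are arbitrary assumptions.

```python
import numpy as np

def saab_fit(patches, n_kernels):
    """Fit Saab-style kernels to flattened patches of shape (num, dim)."""
    num, dim = patches.shape
    dc = np.ones(dim) / np.sqrt(dim)                 # DC (mean) kernel
    coeff = patches @ dc
    ac = patches - np.outer(coeff, dc)               # remove the DC component
    ac = ac - ac.mean(axis=0)                        # center before PCA
    _, _, Vt = np.linalg.svd(ac, full_matrices=False)
    kernels = np.vstack([dc, Vt[: n_kernels - 1]])   # DC + leading AC kernels
    resp = patches @ kernels.T
    bias = max(0.0, -float(resp.min()))              # shift so responses are >= 0
    return kernels, bias

rng = np.random.default_rng(0)
patches = rng.normal(size=(500, 9))                  # e.g. flattened 3x3 windows
kernels, bias = saab_fit(patches, 4)
resp = patches @ kernels.T + bias                    # ReLU(resp) == resp by design
```

Because the shifted responses are all nonnegative, the activation function never discards information, which is what makes the one-pass feedforward design analyzable.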

By |December 22nd, 2019|News|

MCL Research on Successive Subspace Learning

Subspace methods have been widely used in signal/image processing, pattern recognition, computer vision, and related fields. One may use a subspace to denote the feature space of a certain object class (e.g., the subspace of the dog object class) or the dominant feature space obtained by dropping less important features (e.g., the subspace obtained via principal component analysis, or PCA). The subspace representation offers a powerful tool for signal analysis, modeling, and processing. Subspace learning aims to find subspace models for concise data representation and accurate decision making based on training samples.

Most existing subspace methods are conducted in a single stage. We may ask whether there is an advantage in performing subspace learning in multiple stages. Research on generalizing one-stage subspace learning to multi-stage subspace learning is rare; two PCA stages are cascaded in PCAnet, which provides an empirical solution to multi-stage subspace learning. The scarcity of research on this topic may be attributed to the fact that a straightforward cascade of linear multi-stage subspace methods, which can be expressed as the product of a sequence of matrices, is equivalent to a linear one-stage subspace method. From this viewpoint, the advantage of linear multi-stage subspace methods may not be obvious.

Yet, multi-stage subspace learning may be worthwhile under the following two conditions. First, the input subspace is not fixed but grows from one stage to the next. For example, we can take the union of a pixel and its eight nearest neighbors to form the input space of the first stage. Afterward, we enlarge the neighborhood of the center pixel from 3×3 to 5×5 in the second stage. Clearly, the first input space is a proper subset of the second. By generalizing this to multiple stages, [...]
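The growing-neighborhood idea can be sketched as two cascaded PCA stages: applying a 3×3 window to the stage-one outputs gives the second stage an effective 5×5 receptive field over the original image. This is a minimal sketch under stated assumptions (a random toy image, plain PCA stages, and arbitrary feature counts), not the lab's actual pipeline.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def pca_features(patches, k):
    # Project flattened patches onto their top-k principal directions.
    centered = patches - patches.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:k].T

img = np.random.default_rng(1).normal(size=(16, 16))  # toy image

# Stage 1: PCA over 3x3 neighborhoods of the image.
p1 = sliding_window_view(img, (3, 3)).reshape(-1, 9)      # (14*14, 9)
f1 = pca_features(p1, 4).reshape(14, 14, 4)               # per-pixel features

# Stage 2: PCA over 3x3 neighborhoods of the stage-1 features;
# each output now depends on a 5x5 region of the original image.
p2 = sliding_window_view(f1, (3, 3, 4)).reshape(-1, 36)   # (12*12, 36)
f2 = pca_features(p2, 8).reshape(12, 12, 8)
```

Because stage two operates on features rather than raw pixels, the cascade is not reducible to a single matrix product once any per-stage processing (e.g., dimension reduction) intervenes, which is the point of the growing input space.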

By |November 3rd, 2019|News|