
MCL Research on SSL-based Graph Learning

In this research, we proposed an effective and explainable graph vertex classification method called GraphHop. Unlike the graph convolutional network (GCN), which is based on end-to-end optimization of an objective function through backpropagation, GraphHop generates an effective feature set for each vertex in an unsupervised and feedforward manner. The method is mathematically transparent and can be explained using the recently developed “successive subspace learning (SSL)” framework [1, 2]. Since no backpropagation is required in the feature learning process, the training complexity of GraphHop is significantly lower. Following the traditional pattern recognition paradigm, GraphHop decouples feature extraction and classification into two separate modules, where the feature extraction module is completely unsupervised. In the feature extraction module, GraphHop determines the local-to-global attributes of each vertex through successive one-hop information exchange, carried out by the GraphHop unit. To control the rapid growth of the vertex attribute dimension, the Saab transform is adopted for dimension reduction inside the GraphHop unit. Multiple GraphHop units are cascaded to obtain higher-order proximity information for each vertex. In the classification module, vertex attributes from multiple GraphHop units are extracted and ensembled for the classification task. Many machine learning tools could be considered here; in our experiments, we choose the random forest classifier because of its good performance and low complexity. To demonstrate the effectiveness of the GraphHop method, we apply it to three real-world [...]
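
To give a concrete flavor of the pipeline, here is a minimal, illustrative sketch in Python. PCA stands in for the Saab transform, the graph and labels are random toy data, and helper names such as graphhop_unit are ours; this is not the released GraphHop implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

def graphhop_unit(adj, features, n_components=32):
    """One GraphHop-style unit (illustrative): exchange one-hop information by
    averaging neighbor attributes, concatenate them with each vertex's own
    attributes, then reduce the grown dimension (PCA stands in for the Saab
    transform)."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    neighbor_mean = adj @ features / deg            # one-hop information exchange
    grown = np.hstack([features, neighbor_mean])    # attribute dimension grows
    n_components = min(n_components, grown.shape[0], grown.shape[1])
    return PCA(n_components=n_components).fit_transform(grown)

# Toy graph: 6 vertices with random 8-dimensional attributes.
rng = np.random.default_rng(0)
adj = (rng.random((6, 6)) > 0.6).astype(float)
adj = np.maximum(adj, adj.T)
np.fill_diagonal(adj, 0)
x = rng.standard_normal((6, 8))

h1 = graphhop_unit(adj, x)    # hop-1 attributes
h2 = graphhop_unit(adj, h1)   # hop-2 (higher-order) attributes from a cascaded unit

# Classification module: ensemble attributes from multiple units, then train a
# random forest on the labeled vertices (all 6 here, for brevity).
features = np.hstack([x, h1, h2])
labels = rng.integers(0, 2, size=6)
clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(features, labels)
```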

By |March 17th, 2020|News, Research|Comments Off on MCL Research on SSL-based Graph Learning|

MCL Research Presented at WACV 2020

MCL member Junting Zhang presented her paper at the 2020 Winter Conference on Applications of Computer Vision (WACV ’20) in Snowmass Village, Colorado. The title of Junting’s paper is “Class-incremental Learning via Deep Model Consolidation”, with Jie Zhang, Shalini Ghosh, Dawei Li, Serafettin Tasci, Larry Heck, Heming Zhang, and C.-C. Jay Kuo as co-authors. Here is a brief summary of Junting’s paper:

“Deep neural networks (DNNs) often suffer from “catastrophic forgetting” during incremental learning (IL) — an abrupt degradation of performance on the original set of classes when the training objective is adapted to a newly added set of classes. Existing IL approaches tend to produce a model that is biased towards either the old classes or new classes, unless with the help of exemplars of the old data. To address this issue, we propose a class-incremental learning paradigm called Deep Model Consolidation (DMC), which works well even when the original training data is not available. The idea is to first train a separate model only for the new classes, and then combine the two individual models trained on data of two distinct sets of classes (old classes and new classes) via a novel double distillation training objective. The two existing models are consolidated by exploiting publicly available unlabeled auxiliary data. This overcomes the potential difficulties due to the unavailability of original training data. Compared to the state-of-the-art techniques, DMC demonstrates significantly better performance in image classification (CIFAR-100 and CUB-200) and object detection (PASCAL VOC 2007) in the single-headed IL setting.”
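
For readers curious about what a double-distillation-style objective can look like, here is a rough Python sketch under our reading of the summary above: the consolidated model's logits over old and new classes are regressed toward the mean-centered logits of the two specialist models on unlabeled auxiliary images. The function and variable names are ours, not from the paper, and the actual DMC objective may differ in its details.

```python
import numpy as np

def double_distillation_loss(logits_consolidated, logits_old_teacher, logits_new_teacher):
    """Illustrative double-distillation-style objective (names are ours):
    regress the consolidated model's logits toward the mean-centered logits
    of the two specialist teachers on unlabeled auxiliary images."""
    # Mean-center each teacher's logits so their scales are comparable.
    t_old = logits_old_teacher - logits_old_teacher.mean(axis=1, keepdims=True)
    t_new = logits_new_teacher - logits_new_teacher.mean(axis=1, keepdims=True)
    target = np.concatenate([t_old, t_new], axis=1)  # old classes then new classes
    return np.mean((logits_consolidated - target) ** 2)

# Toy shapes: batch of 4 auxiliary images, 10 old classes, 5 new classes.
rng = np.random.default_rng(0)
loss = double_distillation_loss(
    rng.standard_normal((4, 15)),  # consolidated model: 15 = 10 + 5 logits
    rng.standard_normal((4, 10)),  # teacher trained on the old classes
    rng.standard_normal((4, 5)),   # teacher trained on the new classes
)
```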

Junting was also invited to attend the WACV 2020 Doctoral Consortium (WACVDC) to present her research and progress to date. She also shared this experience with us:

“It was a great opportunity to interact with experienced researchers in [...]

By |March 8th, 2020|News|Comments Off on MCL Research Presented at WACV 2020|

Welcome Professor Junsong Yuan’s Visit to USC/MCL

Prof. Junsong Yuan visited USC/MCL on Feb. 25 and delivered a talk on “Beyond Deep Recognition: Discovering Visual Patterns in Big Visual Data”. Thanks to the success of deep learning, many computer vision tasks nowadays are formulated as regression problems, which, however, often rely on large amounts of annotated training data to make the high-dimensional regression successful. In this talk, Prof. Yuan discussed a complementary yet overlooked problem beyond deep visual recognition and regression. He addressed why and how to discover visual patterns in images and videos that are not annotated, e.g., unsupervised and weakly supervised visual learning and pattern discovery, and explored how to utilize them to better model, search, and interpret big visual data. Applications in visual search, object detection, action recognition, and video analytics were also explored.

Junsong Yuan is an Associate Professor and Director of the Visual Computing Lab in the CSE Department, State University of New York at Buffalo. Before that, he was an Associate Professor at Nanyang Technological University (NTU), Singapore. He received his PhD from Northwestern University and his M.Eng. from the National University of Singapore. He is currently an Associate Editor of IEEE Trans. on Image Processing (T-IP) and Machine Vision and Applications (MVA) and a Senior Area Editor of the Journal of Visual Communication and Image Representation (JVCI), and has served as program co-chair for ICME 2018 and area chair for CVPR, ACM MM, WACV, ACCV, ICIP, ICPR, etc. He received the Best Paper Award from IEEE Trans. on Multimedia, the Nanyang Assistant Professorship from NTU, and the Outstanding EECS Ph.D. Thesis Award from Northwestern University. He is a Fellow of the International Association for Pattern Recognition (IAPR).

By |March 2nd, 2020|News|Comments Off on Welcome Professor Junsong Yuan’s Visit to USC/MCL|

Welcome New MCL Member Hamza Ghani

We are so glad to welcome our new MCL member, Hamza Ghani! Here is a short interview with Hamza:

1. Could you briefly introduce yourself and your research interests?

My name is Hamza Ghani and I am from Austin, Texas. I am a graduate student here at USC pursuing a Master’s in Electrical Engineering. I went to UT Austin for my undergrad, which was in ECE with a focus on computer engineering. I also currently work full-time as a data scientist while pursuing my Master’s. My research interests include machine learning, graphs, and GANs.

2. What is your impression about MCL and USC?

All the members I’ve met in MCL are very knowledgeable in several topics. I am definitely learning a lot by interacting with everyone. Additionally, everyone I’ve worked with in MCL has been great to work with. I want to thank Professor Kuo for giving me a chance to join the MCL lab; I don’t think my USC experience would be the same without MCL. USC has been great so far: the campus is really nice, and it’s easy to make friends even outside of my major.

3. What is your future expectation and plan in MCL?

My current goal is to successfully complete the project my team is currently working on regarding model compression. Overall, I want to keep learning through research, publish papers, and make connections with my peers in the lab.

By |February 25th, 2020|News|Comments Off on Welcome New MCL Member Hamza Ghani|

Welcome New MCL Member Tian Xie

We are so glad to welcome our new MCL member, Tian Xie! Here is a short interview with Tian:

1. Could you briefly introduce yourself and your research interests?

My name is Tian Xie, and I am a third-year Ph.D. student in the MCL lab in the Department of Electrical Engineering at USC. Prior to joining MCL, I was a Ph.D. student at the InfoLab at USC. I received my Bachelor’s degree in mathematics from Fudan University in China. I am interested in representation learning and deep learning. Previously, I worked on research projects related to graph learning and adversarial learning.

2. What is your impression about MCL and USC?

USC has a small but beautiful campus. I really enjoy walking around the campus and having coffee at the school cafe. The MCL lab is a wonderful place with a caring and supportive advisor and a large group of young, talented students. I feel more motivated and enthusiastic about my research after joining MCL, and I really enjoy talking with Professor Kuo since he is a really wise person.

3. What is your future expectation and plan in MCL?

I want to make friends in MCL, do good research and write papers. Hopefully, my research can contribute to the progress of the related field.

By |February 17th, 2020|News|Comments Off on Welcome New MCL Member Tian Xie|

Welcome New MCL Member Yuhang Xu

We are so glad to welcome our new MCL member, Yuhang Xu! Here is a short interview with Yuhang:

1. Could you briefly introduce yourself and your research interests?

My name is Yuhang Xu. I am a graduate student at USC pursuing an MS degree in Electrical Engineering. My research interests include machine learning and image processing. Recently, I have been working on a neural network compression project under the supervision of Prof. Kuo. In my free time, I enjoy reading news from around the world, listening to country music, and cooking Chinese food.

2. What is your impression about MCL and USC?

MCL is a mature research group with more than 20 passionate and hardworking individuals. It is prolific and well organized under the supervision of Prof. Kuo, who is full of knowledge and an inspiration to his students. USC offers the perfect balance of academic and social opportunities. During my time at USC, I have made friends with people from different cultures.

3. What is your future expectation and plan in MCL?

My short-term goal is to complete the current project. It is an interesting one and it has special meaning for me since it is my first project in MCL. I also hope to create strong connections with people in the lab.

By |February 7th, 2020|News|Comments Off on Welcome New MCL Member Yuhang Xu|

Welcome New MCL Member Zohreh Azizi

We are so glad to welcome our new MCL member, Zohreh Azizi! Here is a short interview with Zohreh:

1. Could you briefly introduce yourself and your research interests?

My name is Zohreh Azizi. I am a PhD student in Electrical Engineering. Before joining USC, I did my bachelor’s at Sharif University of Technology in Iran. Previously, my research experience was focused on designing biomedical devices. While developing software for these devices, I became more familiar with AI, machine learning, and topics like computer vision, which I found really interesting. I appreciate Prof. Kuo for giving me the chance to join MCL and the opportunity to explore my interests.

2. What is your impression about MCL and USC?

I can’t believe how nice and helpful every single member of MCL is. They all work hard and behave in a professional manner. There is so much for me to learn from everyone in MCL, especially from Prof. Kuo, who is really caring, motivating, and hardworking. USC has a beautiful campus and a lively environment.

3. What is your future expectation and plan in MCL?

I have lots of things to learn. I am so excited to work hard, gain more skills, and explore new ideas. I would like to solve significant problems in computer vision and machine learning. I hope that I can contribute to MCL both through my research and by helping my fellow labmates.

By |February 2nd, 2020|News|Comments Off on Welcome New MCL Member Zohreh Azizi|

MCL Research on Graph Embedding

A graph is a data representation model in which each data point is treated as a node, and an edge (connection) exists between two nodes if they share common characteristics. The relationships among nodes are complex and have attracted extensive research in this domain. Several techniques have been developed, such as DeepWalk, Planetoid, Chebyshev networks, the Graph Convolutional Network, the Graph Attention Network, and large-scale graph convolutional networks, which focus on exploring the behavior of nodes based on their connectivity to other nodes. Graph models are often designed for tasks such as node classification and edge/link prediction, and have varied applications in social networks and citation networks.

Currently, we are developing a graph neural network model for the node classification task. A feedforward approach is adopted to learn the model parameters in a single forward pass using the GraphHop method. The main idea is to learn each node’s representation from the representations of its hops (neighboring nodes), so that local-to-global attributes are captured through information exchange between hops: the feature dimension first grows as hop information is gathered and is then reduced with the Saab transform.

Unlike existing methods, our model has very low computational complexity because no backpropagation is used to learn the model parameters; instead, the model learns in a single forward pass through its feedforward design. The GraphHop method also allows the model to be trained on very few training samples while still providing good accuracy on testing samples; thus, the model can be trained on very limited labeled data. Using only 5% of the training samples, we are able to achieve state-of-the-art performance [...]
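
As a rough illustration of this decoupling, the sketch below trains a classifier with only a small labeled fraction after an unsupervised, feedforward feature step. PCA stands in for the hop-aggregation and Saab-reduction stages, the data are synthetic, and the 5% split is used only as an example; this is not the actual GraphHop code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 64))                       # attributes for 1000 nodes
y = (X[:, 0] + 0.1 * rng.standard_normal(1000) > 0).astype(int)

# Unsupervised, feedforward feature extraction on ALL nodes (no labels, no
# backpropagation); PCA stands in for the hop-exchange and Saab stages.
features = PCA(n_components=16).fit_transform(X)

# Supervised classification with only a small labeled fraction (5% here).
labeled = rng.choice(len(y), size=50, replace=False)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features[labeled], y[labeled])
accuracy = clf.score(np.delete(features, labeled, axis=0), np.delete(y, labeled))
```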

By |September 8th, 2019|News|Comments Off on MCL Research on Graph Embedding|

MCL Research on Image-based Object Recognition

The subspace technique has been widely used in signal/image processing, pattern recognition, computer vision, etc. It may have different definitions in different contexts. A subspace may denote a dominant feature space where less relevant features are dropped; one example is principal component analysis (PCA). A subspace may also refer to a certain object class, such as the subspace of the digit “0” in the MNIST dataset. Generally speaking, subspace methods offer a powerful and popular tool for signal analysis, modeling, and processing. They exploit the statistical properties of a class of underlying signals to determine a smaller yet significant subspace for further processing.
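
As a small illustration of the class-subspace idea mentioned above, the sketch below learns one PCA subspace per class on synthetic signals and assigns a test sample to the class whose subspace reconstructs it best (a nearest-subspace rule). It is a generic example, not a specific MCL method, and all names and data are made up.

```python
import numpy as np
from sklearn.decomposition import PCA

def class_subspace(samples, dim=8):
    """Learn a low-dimensional subspace for one class with PCA."""
    return PCA(n_components=dim).fit(samples)

def reconstruction_error(subspace, x):
    """Distance of a sample to a class subspace: project, reconstruct, compare."""
    x = x.reshape(1, -1)
    x_hat = subspace.inverse_transform(subspace.transform(x))
    return float(np.linalg.norm(x - x_hat))

# Toy example: two "classes" of 64-d signals with different dominant directions.
rng = np.random.default_rng(0)
basis0, basis1 = rng.standard_normal((2, 8, 64))
class0 = rng.standard_normal((200, 8)) @ basis0 + 0.05 * rng.standard_normal((200, 64))
class1 = rng.standard_normal((200, 8)) @ basis1 + 0.05 * rng.standard_normal((200, 64))

subspaces = [class_subspace(class0), class_subspace(class1)]
test = rng.standard_normal((1, 8)) @ basis1            # a sample drawn from class 1
predicted = int(np.argmin([reconstruction_error(s, test) for s in subspaces]))
```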

However, existing subspace methods are conducted in a single stage. We may wonder whether there is any advantage in performing subspace methods in multiple stages; research on generalizing from one-stage subspace methods to multi-stage subspace methods is actually rare. Two PCA stages are cascaded in a straightforward manner in PCAnet [1]. Motivated by the multiple convolutional layers in convolutional neural networks (CNNs), Prof. Kuo proposed a new machine learning paradigm called successive subspace learning (SSL). It places multiple subspace modules in cascade, mimicking the feedforward CNN operations, and the parameters of the subspace transformations are learned from the training data. Although there is a strong similarity between the feedforward paths of CNNs and the SSL approach, they are fundamentally different in the machine learning model formulation, the training process, and the complexity.

To further illustrate the SSL approach, Yueru Chen and Prof. Kuo proposed a PixelHop method based on SSL for image-based object recognition. It consists of three steps: 1) local-to-global attributes of images are extracted through multi-hop information exchange; 2) subspace-based dimensionality reduction (SDR) is applied to the new image representation from each [...]
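
For intuition, here is a hedged sketch of what the first two steps can look like for a single unit: each pixel's 3x3 neighborhood is gathered as its hop-1 attributes, and PCA stands in for the subspace-based dimensionality reduction. Cascading several such units, as described above, gives the multi-stage design; the function names and toy data below are ours, not from the PixelHop implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

def hop1_attributes(images):
    """Step 1 (illustrative): collect each pixel's 3x3 neighborhood so local
    attributes grow from 1 value per pixel to 9 values per pixel."""
    padded = np.pad(images, ((0, 0), (1, 1), (1, 1)), mode="reflect")
    n, h, w = images.shape
    patches = np.stack(
        [padded[:, i:i + h, j:j + w] for i in range(3) for j in range(3)], axis=-1
    )
    return patches.reshape(n * h * w, 9)

# Toy batch of 8 grayscale 16x16 images.
rng = np.random.default_rng(0)
images = rng.standard_normal((8, 16, 16))

attrs = hop1_attributes(images)            # (8*16*16, 9) hop-1 attributes
# Step 2 (illustrative): subspace-based dimensionality reduction; PCA stands in
# here for the data-driven transform used in the actual pipeline.
reduced = PCA(n_components=4).fit_transform(attrs)
```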

By |September 2nd, 2019|News|Comments Off on MCL Research on Image-based Object Recognition|

MCL Research on Texture Analysis & Modeling

Texture is one of the most fundamental yet important characteristics of images, and texture analysis & modeling is an essential and challenging problem in computer vision and pattern recognition that has attracted extensive research attention over the last several decades.

As a powerful visual cue, texture plays an important role in human perception and provides useful information for identifying objects or regions in images, ranging from multispectral satellite data to microscopic images of tissue samples. Besides, understanding texture is also a key component of many other computer vision topics, including image denoising, image super-resolution, and image generation.

In the past few years, MCL has carried out original research on several important aspects of texture analysis & modeling, including texture representation, unsupervised texture segmentation, and dynamic texture synthesis.

Texture Representation [1]: A hierarchical spatial-spectral correlation (HSSC) method is proposed for texture analysis in this work. The HSSC method first applies a multi-stage spatial-spectral transform, known as the Saak transform, to input texture patches. Then, it conducts a correlation analysis on the Saak transform coefficients to obtain texture features of high discriminant power. During the correlation analysis, both auto-correlations and cross-correlations are computed and further used to obtain compact and representative texture features. Given that texture is the spatial organization of a set of basic patterns, we also provide a theoretical explanation of the proposed method: it attempts to capture the energy distribution of the orthogonal texture patterns derived from the Saak transform. This paper has been accepted by ICIP 2019.
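
To make the correlation-analysis step more concrete, here is a small, hedged sketch: PCA stands in for the Saak transform, two synthetic "textures" replace real data, and the per-texture second-moment matrix of the transform coefficients (auto-correlations on the diagonal, cross-correlations off it) is flattened into a compact feature vector. The helper names are ours, and the actual HSSC pipeline differs in its transform and details.

```python
import numpy as np
from sklearn.decomposition import PCA

def extract_patches(image, size=8, stride=4):
    """Crop overlapping square patches from one texture image."""
    h, w = image.shape
    return np.stack([image[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, stride)
                     for j in range(0, w - size + 1, stride)])

# Toy corpus: two synthetic "textures" with different spatial statistics.
rng = np.random.default_rng(0)
tex_a = np.sin(np.add.outer(np.arange(64), np.arange(64)) / 3.0)
tex_b = rng.standard_normal((64, 64))

# Fit one data-driven orthogonal transform (PCA stands in for the Saak
# transform) on patches pooled from the whole corpus.
pool = np.vstack([extract_patches(tex_a), extract_patches(tex_b)])
transform = PCA(n_components=8).fit(pool)

def hssc_style_features(image):
    """Illustrative HSSC-style descriptor: per-texture auto-correlations
    (coefficient energies) and cross-correlations of transform coefficients."""
    coeffs = transform.transform(extract_patches(image))
    second_moment = coeffs.T @ coeffs / len(coeffs)    # 8x8 correlation matrix
    return second_moment[np.triu_indices(8)]           # compact feature vector

feat_a, feat_b = hssc_style_features(tex_a), hssc_style_features(tex_b)
```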

Unsupervised Texture Segmentation [2]: We propose a data-centric approach to efficiently extract and represent textural information for the unsupervised texture segmentation problem. Based on the strong self-similarity and quasi-periodicity of texture images, the proposed method first constructs a representative texture [...]

By |August 25th, 2019|News|Comments Off on MCL Research on Texture Analysis & Modeling|