
Welcome Tsung-Shan Yang to Join MCL as a new PhD student

We are so happy to welcome a new graduate member of MCL, Tsung-Shan Yang. Here is an interview with Tsung-Shan:

 

Could you briefly introduce yourself and your research interests?

I am Tsung-Shan Yang. I am pursuing my Ph.D. degree in Electrical Engineering at USC now. I received my Bachelor’s and Master’s degrees in Electrical Engineering from National Taiwan University. During my graduate studies, I researched alleviating distortions and analyzing saliency maps in panoramic images. My research interests include 3D computer vision and machine learning.

What is your impression about MCL and USC?

USC is a prominent educational institution known around the world, and MCL stands out on campus. As one of the most prestigious research groups at USC, it proposes plenty of novel and sound approaches to challenging engineering problems. There are brilliant and outstanding people at MCL, and it is my honor to work with them.

What is your future expectation and plan in MCL?

The most crucial thing for me is learning how to define a question. As an engineer, I want to find technical problems in daily life and address them. Besides, I want to broaden my horizons by discussing with the talented members of MCL. After my training in MCL, I hope I will be able to solve real-world difficulties both theoretically and practically.

By | October 2nd, 2022 | News

Welcome Haiyi Li to Join MCL as a new PhD student

We are so happy to welcome a new graduate member of MCL, Haiyi Li. Here is an interview with Haiyi:

 

Could you briefly introduce yourself and your research interests?

My name is Haiyi Li. I am currently a Ph.D. student in Electrical Engineering at USC. I received my bachelor's degree in Automation Engineering from the University of Electronic Science and Technology of China in 2022. I enjoy swimming and outdoor sports in my spare time. My research interests include image processing and machine learning.

What is your impression about MCL and USC?

MCL is a fantastic place to exchange thoughts and obtain inspiration. Professor Kuo is a knowledgeable and passionate advisor who provides us with many new ideas. He is also a patient and responsible instructor, offering me research directions to dig into more deeply. The students in MCL are kind and intelligent as well; I feel inspired when discussing questions with them. USC is a vibrant campus, and I am impressed by the strong academic resources and diverse environment here.

What is your future expectation and plan in MCL?

I plan to have more inspiring discussions with Professor Kuo and the senior students of MCL and lay a sound foundation for my research. I will also focus on specific image processing research directions and conduct hands-on projects to make models more reasonable and better performing. I hope to equip myself with mature research ability and insightful ideas.

By | September 25th, 2022 | News

Welcome Aolin Feng to Join MCL as a new PhD student

We are so happy to welcome a new graduate member of MCL, Aolin Feng. Here is an interview with Aolin:

 

Could you briefly introduce yourself and your research interests?

My name is Aolin Feng. I received my bachelor's and master's degrees from the University of Science and Technology of China (USTC). I developed my research interest in video compression while pursuing my master's degree. I joined the USC MCL lab to pursue further research in image/video processing.

What is your impression about MCL and USC?

My impression of the MCL lab is that it is such a big family. The atmosphere here is serious yet lively: people are serious about academics but lively in life. Professor Kuo leads a lab full of creativity and passion. As for USC, I like the campus, which is beautiful and has its own style. The culture here is diverse, and the people I have met are all friendly. I look forward to studying and living at USC.

What is your future expectation and plan in MCL?

I expect to broaden my research horizons and explore more interesting and cutting-edge directions. I wish I could learn a lot from Professor Kuo and the students in the lab. Besides, I wish to strengthen my mathematical foundation from course study and research.

By | September 18th, 2022 | News

MCL PixelHop Paper Received the 2022 Best Paper Award from JVCI

Congratulations to MCL Alumnus, Dr. Yueru Chen, and Director, Professor Jay Kuo, for receiving the 2022 Best Paper Award from the Journal of Visual Communication and Image Representation for their work:

Yueru Chen and C.-C. Jay Kuo, “PixelHop: a successive subspace learning (SSL) method for object recognition,” Journal of Visual Communication and Image Representation, Vol. 70, July 2020, 102749.

The PixelHop paper proposed a successive subspace learning (SSL) framework for unsupervised feature representation. It lays a key foundation for green learning. Professor Kuo said, “Deep learning has been very dominating in the computer vision and image analysis field in the last 10 years. It was not easy for Yueru to pursue a totally different research direction in her PhD research. I am glad to see that her effort on developing an interpretable and modularized learning system has been gradually recognized by the community.”

MCL has received three best paper awards (2018, 2021, 2022) and two best paper award runner-ups (2019, 2020) from the Journal of Visual Communication and Image Representation in the last five years. The other four papers are listed below.

The 2021 Best Paper Award of the Journal of Visual Communication and Image Representation.

C.-C. Jay Kuo, Min Zhang, Siyang Li, Jiali Duan and Yueru Chen, "Interpretable convolutional neural networks via feedforward design," Journal of Visual Communication and Image Representation, Vol. 60, pp. 346-359, April 2019.

The 2020 Best Paper Award Runner-up of the Journal of Visual Communication and Image Representation.

C.-C. Jay Kuo and Yueru Chen, "On data-driven Saak transform," Journal of Visual Communication and Image Representation, Vol. 50, pp. 237-246, January 2018.

The 2019 Best Paper Award Runner-up of the Journal of Visual Communication and Image Representation.

Ronald Salloum, Yuzhou Ren [...]

By | September 11th, 2022 | News

MCL Research on Advanced Deepfake Video Detection

A robust fake satellite image detection method, called Geo-DefakeHop, is proposed in this work. Geo-DefakeHop is developed based on the parallel subspace learning (PSL) methodology. PSL maps the input image space into several feature subspaces using multiple filter banks. By exploring the response differences between real and fake images across the channels of each filter bank, Geo-DefakeHop learns the most discriminant channels and uses their soft decision scores as features. It then selects a few discriminant features from each filter bank using a validation dataset and ensembles them to make a final binary decision. Geo-DefakeHop offers a lightweight, high-performance solution to fake satellite image detection. Its model size is analyzed, ranging from 0.8 to 62K parameters. Furthermore, experimental results show that it achieves an F1-score higher than 95% under various common image manipulations such as resizing, compression, and noise corruption.
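
The channel-selection-and-ensemble idea above can be illustrated with a minimal sketch (the function names, the accuracy-based ranking, and the 0.5 thresholding are assumptions for illustration, not the published Geo-DefakeHop code):

```python
import numpy as np

def select_discriminant_channels(val_scores, val_labels, k=4):
    """Rank filter-bank channels by how well their soft decision scores
    separate real from fake on a validation set, and keep the top k.
    val_scores: (n_samples, n_channels) soft scores in [0, 1].
    val_labels: (n_samples,) binary labels (1 = fake)."""
    preds = (val_scores >= 0.5).astype(int)
    accs = (preds == val_labels[:, None]).mean(axis=0)  # per-channel accuracy
    return np.argsort(accs)[::-1][:k]

def ensemble_decision(scores, channels):
    """Average the selected channels' soft scores into one binary decision."""
    return (scores[:, channels].mean(axis=1) >= 0.5).astype(int)

# Toy usage: channel 0 is engineered to be discriminant, the rest are noise.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)
scores = rng.uniform(0, 1, (200, 5))
scores[:, 0] = labels * 0.9 + 0.05
best = select_discriminant_channels(scores, labels, k=1)
final = ensemble_decision(scores, best)
```

In the actual method this selection is done per filter bank and the surviving features feed a final classifier; the sketch only shows the validation-driven ranking and soft-score ensembling.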

— By Max Chen

By | March 27th, 2022 | News

MCL Research on Graph Learning

Graph-based semi-supervised learning has shown prominent performance on the node classification task by exploiting the underlying manifold structure of data. Recently, GraphHop, an enhancement of classical label propagation (LP), was proposed and has outperformed existing graph convolutional networks (GCNs) on various networks. Although GraphHop's superior performance can be explained as smoothing both node attribute and label signals, its mechanisms are still not fundamentally clear.

In this work, we develop deeper insights into the GraphHop model from the perspective of a regularization framework. We show that the GraphHop model can be cast as an iterative, approximate optimization of a particular regularization function on graphs. Based on this variational interpretation, we propose two approaches to address the limitations of the GraphHop model caused by the approximate optimization process: 1) additional aggregations in optimizing the label embeddings; and 2) adaptive selection of reliable unlabeled samples for classifier training. Experiments show that, equipped with these two improvements, our model, called GraphHop++, gains significantly better performance than the former GraphHop model as well as state-of-the-art methods on various benchmark networks with limited label rates.
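
The regularization view is easiest to see in the classical LP baseline that GraphHop enhances: iterating Z ← αSZ + (1−α)Y minimizes a graph smoothness term plus a fit-to-labels term, where S is the symmetrically normalized adjacency. Below is a textbook sketch of that iteration (not the GraphHop++ implementation):

```python
import numpy as np

def label_propagation(A, Y, alpha=0.9, iters=300):
    """Classical LP: iterate Z <- alpha * S @ Z + (1 - alpha) * Y, where
    S = D^{-1/2} A D^{-1/2}. This converges to the minimizer of a graph
    regularization objective (smoothness + label fidelity).
    A: (n, n) adjacency, Y: (n, c) one-hot labels (zero rows = unlabeled)."""
    d = A.sum(axis=1)
    S = A / np.sqrt(np.outer(d, d))  # symmetric normalization
    Z = Y.astype(float).copy()
    for _ in range(iters):
        Z = alpha * (S @ Z) + (1 - alpha) * Y
    return Z

# Toy usage: a 4-node path graph with one labeled node per class at the ends.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = np.zeros((4, 2))
Y[0, 0] = 1.0   # node 0 labeled class 0
Y[3, 1] = 1.0   # node 3 labeled class 1
Z = label_propagation(A, Y)
```

The fixed point has the closed form (1−α)(I−αS)⁻¹Y, which makes the "iterative approximate optimization" reading of GraphHop concrete.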

— By Tian Xie

By | March 20th, 2022 | News

MCL Research on Green Progressive Learning

Image classification has been studied for many years as a fundamental problem in computer vision. With the development of convolutional neural networks (CNNs) and the availability of larger-scale datasets, we have seen rapid success in classification using deep learning for both low- and high-resolution images. Although effective, one major challenge associated with deep learning is that its underlying mechanism is not transparent. Inspired by deep learning, the successive subspace learning (SSL) methodology was proposed by Kuo et al. in a sequence of papers. Different from deep learning, SSL-based methods learn feature representations in an unsupervised, feedforward manner using multi-stage principal component analysis (PCA). Joint spatial-spectral representations are obtained at different scales through multi-stage transforms.

Existing SSL-based classification models use all of the training data at once, i.e., a single round of training. Among the samples, there are easy samples, which usually make up a high ratio of the dataset, and a portion of hard samples. Easy samples can achieve quite high conditional accuracy, while hard samples need further attention because their distribution is masked by the easy samples. This motivates the design of Green Progressive Learning, which adds more rounds of training to progressively zoom in on smaller and smaller subspaces of hard samples. The selection of training samples in each round is critical to the performance gain: in each learning round, the hard training samples are re-selected to represent the subspace. Experiments on MNIST and Fashion-MNIST show the potential of progressive learning, which can help boost the performance on difficult cases.
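
The multi-round zoom-in described above can be sketched with a toy classifier (a nearest-centroid stand-in for the SSL module, with a margin-based confidence score; the names and the 0.2 threshold are illustrative assumptions, not the actual pipeline):

```python
import numpy as np

def confidence(X, centroids):
    """Margin-based confidence of a nearest-centroid classifier:
    relative gap between the two closest centroids."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    two = np.sort(d, axis=1)[:, :2]
    return (two[:, 1] - two[:, 0]) / (two[:, 1] + two[:, 0] + 1e-12)

def progressive_rounds(X, y, n_rounds=3, thresh=0.2):
    """Each round fits class centroids on the current sample set, then keeps
    only the low-confidence ("hard") samples for the next round, zooming
    into a shrinking subspace of difficult cases."""
    models, idx = [], np.arange(len(X))
    for _ in range(n_rounds):
        classes = np.unique(y[idx]) if len(idx) else np.array([])
        if len(classes) < 2:          # nothing left to separate
            break
        cents = np.array([X[idx][y[idx] == c].mean(axis=0) for c in classes])
        models.append(cents)
        conf = confidence(X[idx], cents)
        idx = idx[conf < thresh]      # re-select hard samples for next round
    return models

# Toy usage: two tight clusters plus two boundary ("hard") points.
X = np.vstack([np.zeros((10, 2)), np.full((10, 2), 5.0),
               [[2.4, 2.4]], [[2.6, 2.6]]])
y = np.array([0] * 10 + [1] * 10 + [0, 1])
models = progressive_rounds(X, y)
```

Round 1 resolves the easy cluster samples; only the two boundary points survive into round 2, mirroring how each round concentrates capacity on the samples the previous round left ambiguous.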

— By Yijing Yang

Reference:

Y. Chen and C.-C. J. Kuo, "PixelHop: a successive subspace learning (SSL) method for object recognition," Journal [...]

By | March 13th, 2022 | News

MCL Research on Subspace Learning Machine

Classification-oriented machine learning models have been well studied in the past decades, with the focus shifting to deep learning (DL) in recent years. Feature learning and classification are handled jointly in DL models. Although the best performance on classification tasks is often achieved by DL through backpropagation (BP), DL models suffer from a lack of interpretability, high computational cost, and high model complexity. In classical machine learning, feature extraction and classification are treated as separate modules. We focus on the classical learning paradigm and propose a new high-performance classifier that takes features as input. Examples of classical classifiers include the support vector machine (SVM), decision tree (DT), multilayer perceptron (MLP), feedforward multilayer perceptron (FF-MLP), and extreme learning machine (ELM). SVM, DT, and FF-MLP share one common idea, i.e., feature space partitioning. Inspired by the MLP, the DT, and the ELM, a new classification model, called the subspace learning machine (SLM), is proposed for general classification tasks.

The SLM attempts to efficiently partition the input feature space into multiple discriminant subspaces in a hierarchical manner. It works as follows. First, SLM identifies a discriminant subspace by examining the discriminant power of the input features. Then, it applies random projections to the input discriminant subspace features to yield p 1D subspaces and finds an optimal partition in each of them. This is equivalent to partitioning the input space with p hyperplanes, whose orientations and biases are determined by the random projections and partitions, respectively. Among the p projections, we develop a criterion to choose the best q partitions, which yield 2q partitioned subspaces. The subspace partitioning process is repeated at each child node. When the samples at a child node are sufficiently pure, the partitioning process stops and SLM makes final predictions. SLM offers [...]
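
The projection-and-partition search at a single node can be sketched as follows (a simplified, hypothetical rendering of the steps above: it uses a Gini criterion and a quantile grid for thresholds, and returns only the single best hyperplane rather than the top q):

```python
import numpy as np

def gini(y):
    """Gini impurity of a label vector (0 means the node is pure)."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_projection_split(X, y, p=20, seed=0):
    """Try p random 1D projections of the features; for each, scan a grid of
    thresholds and keep the (direction, threshold) pair with the lowest
    weighted Gini impurity. Each pair defines one candidate hyperplane."""
    rng = np.random.default_rng(seed)
    best = (None, None, np.inf)
    for _ in range(p):
        a = rng.normal(size=X.shape[1])
        a /= np.linalg.norm(a)                     # random unit direction
        z = X @ a                                  # project onto 1D subspace
        for t in np.quantile(z, np.linspace(0.1, 0.9, 9)):
            left, right = y[z <= t], y[z > t]
            if len(left) == 0 or len(right) == 0:
                continue
            cost = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if cost < best[2]:
                best = (a, t, cost)
    return best

# Toy usage: two well-separated Gaussian clusters in 3D.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.2, (20, 3)), rng.normal(10, 0.2, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
a, t, cost = best_projection_split(X, y)
```

In the full SLM, the q best such hyperplanes are kept at each node and the search recurses into each resulting child subspace until the children are pure.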

By | March 6th, 2022 | News

MCL Research on Unsupervised Nuclei Segmentation

Nuclei segmentation is a consequential task in biological image analysis, aiding the reading of histology images. Attributes such as shape, population, cluster formation, and density play a significant role in clinical practice for cancer diagnosis and the assessment of its aggressiveness. Given that the annotation of such data is carried out by expert pathologists, who reportedly [2] need to spend on average 120-150 hours to annotate 50 image patches (about 12M pixels), one can see that annotated data are scarce. That is a big impediment for supervised methods, particularly DL-based solutions, which need massive amounts of annotated data to learn generalizable representations. Moreover, the annotations have high inter-observer variation, which is subject to the experience of the annotator [1]. On top of that, nuclei color and texture variations across images from different laboratories and multiple organs further widen the gap between train and test domains.

Given the aforementioned limitations, a natural way to attack the problem is to pursue an unsupervised line of research. Given the limited amount of annotated data, our proposed method also decouples from the DL paradigm and utilizes conceptually simpler techniques that make the pipeline more transparent in terms of segmentation decision making. It is mainly based on prior knowledge about the nuclei segmentation problem. The CBM [3] pipeline starts with a data-driven Color (C) transform that highlights the nuclei cell regions over the background, followed by an adaptive Binarization (B) process built on a bi-modal assumption in each local region. That process is run in a patch-wise manner to leverage the local distribution assumptions about background and foreground. The final part of the pipeline uses Morphological (M) transformations that refine the segmented output based on certain [...]
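
A minimal sketch of the C, B, and M steps under the stated bi-modal assumption is given below (the PCA color transform, patch-wise Otsu thresholding, and 3x3 majority filter are plausible stand-ins for illustration, not the authors' CBM code):

```python
import numpy as np

def otsu_threshold(v, bins=64):
    """Otsu's threshold for a 1D intensity array (bi-modal assumption)."""
    hist, edges = np.histogram(v, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                    # class-0 probability up to each bin
    m = np.cumsum(p * centers)           # cumulative first moment
    mt = m[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        var_b = (mt * w0 - m) ** 2 / (w0 * (1 - w0))
    var_b[~np.isfinite(var_b)] = 0
    return centers[np.argmax(var_b)]     # maximize between-class variance

def cbm_segment(img, patch=32):
    """C: PCA color transform to one channel; B: patch-wise adaptive
    binarization; M: 3x3 majority vote as a minimal morphological cleanup."""
    h, w, _ = img.shape
    flat = img.reshape(-1, 3).astype(float)
    flat -= flat.mean(axis=0)
    _, _, Vt = np.linalg.svd(flat, full_matrices=False)
    gray = (flat @ Vt[0]).reshape(h, w)          # C: first principal component
    g_thr = otsu_threshold(gray.ravel())
    mask = np.zeros((h, w), dtype=bool)
    for i in range(0, h, patch):                 # B: local bi-modal threshold
        for j in range(0, w, patch):
            blk = gray[i:i + patch, j:j + patch]
            thr = otsu_threshold(blk.ravel()) if np.ptp(blk) > 1e-9 else g_thr
            mask[i:i + patch, j:j + patch] = blk > thr
    padded = np.pad(mask, 1)
    neigh = sum(padded[1 + di:h + 1 + di, 1 + dj:w + 1 + dj]  # M: 3x3 majority
                for di in (-1, 0, 1) for dj in (-1, 0, 1))
    return neigh >= 5

# Toy usage: a dark square "nucleus" on a light background.
img = np.full((64, 64, 3), (200.0, 180.0, 190.0))
img[20:40, 20:40] = (80.0, 40.0, 120.0)
mask = cbm_segment(img)
```

Note that the sign of the first principal component is arbitrary, so in practice the foreground polarity must be fixed by a prior (e.g., nuclei are the darker mode).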

By | February 27th, 2022 | News

MCL Research on Learning-based Image Coding

Traditional image coding has achieved great success over the past four decades. Image coding standards such as JPEG and JPEG 2000 have been developed and are widely used today. Furthermore, the intra coding schemes of modern video coding standards also provide very effective image coding solutions. Several powerful tools are used to de-correlate the pixel values:
1. Block transform coding, used in the majority of codecs, where images are partitioned into blocks of different sizes and the pixel values in each block are transformed from the spatial domain to the spectral domain for energy compaction before quantization and entropy coding.
2. Intra prediction, another powerful tool that reduces pixel correlation using pixel values from neighboring blocks at a low cost. Residuals after intra prediction are still coded by block transform coding.
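
Point 1 can be made concrete with a toy 8x8 block-DCT coder (a sketch, not a real codec: a single uniform quantization step `q` stands in for JPEG's frequency-dependent tables, and no entropy coding is performed):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis, the transform behind JPEG's 8x8 blocks."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def block_transform_code(img, q=16, n=8):
    """Partition into n x n blocks, DCT, uniform-quantize, inverse DCT.
    Energy compaction means most quantized AC coefficients of smooth
    blocks become zero, which is what entropy coding would exploit."""
    C = dct_matrix(n)
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    for i in range(0, h, n):
        for j in range(0, w, n):
            blk = img[i:i + n, j:j + n].astype(float)
            coef = C @ blk @ C.T                 # spatial -> spectral
            coef = np.round(coef / q) * q        # uniform quantization
            out[i:i + n, j:j + n] = C.T @ coef @ C   # reconstruct
    return out

# Toy usage: a smooth gradient image reconstructs with small error.
img = np.add.outer(np.arange(16.0), np.arange(16.0)) * 3.0
rec = block_transform_code(img, q=16)
```

Because the transform is orthonormal, quantization error energy in the spectral domain equals reconstruction error energy in the pixel domain, so the per-pixel RMS error is bounded by q/2.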

Recently, deep-learning-based compression methods have attracted a lot of attention due to their superior rate-distortion performance. Compared with traditional codecs, learning-based codecs have the following characteristics:
1. Inter correlations: Traditional image codecs only explore correlation in the same image while learning-based image codecs can exploit correlation from other images (i.e., inter-image correlation).
2. Multi-scale representation: Traditional image codecs only capture the representation with variable block size while learning-based image codecs can exploit the multi-scale representation based on pooling. In other words, traditional image codecs primarily explore correlation at the block level while learning-based image codecs can exploit short, middle, and long-range correlations using the multi-scale representation.
3. Advanced loss functions: different loss functions can easily be designed in learning-based schemes to fit the human visual system (HVS), and attention mechanisms can be introduced into learning-based schemes conveniently.

To achieve low-complexity learning-based image coding, we propose a multi-grid multi-block-size vector quantization (MGBVQ) method based on [...]

By | February 20th, 2022 | News