News

MCL Research on Sentence Similarity Modeling

Sentence similarity evaluation has a wide range of applications in natural language processing, such as semantic similarity computation, text generation evaluation, and information retrieval. As one of the word-alignment-based methods, Word Mover’s Distance (WMD) [1] formulates text similarity evaluation as a minimum-cost flow problem. It finds the most efficient way to align the information between text sequences through a flow network defined by word-level similarities. By assigning flows to individual words, WMD computes text dissimilarity as the minimum cost of moving words’ flows from one sentence to another based on pre-trained word embeddings.
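The core computation can be written as a small linear program. Below is a minimal sketch of vanilla WMD, assuming a hypothetical dictionary emb that maps each word to its pre-trained embedding vector; word flows are set by normalized in-sentence frequencies, and the minimum-cost flow is solved with SciPy.

```python
import numpy as np
from collections import Counter
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

def wmd(sent1, sent2, emb):
    """Word Mover's Distance between two tokenized sentences (rough sketch)."""
    c1, c2 = Counter(sent1), Counter(sent2)
    w1, w2 = list(c1), list(c2)
    # Frequency-based word flows (normalized in-sentence word counts).
    a = np.array([c1[w] for w in w1], dtype=float); a /= a.sum()
    b = np.array([c2[w] for w in w2], dtype=float); b /= b.sum()
    # Word-level moving costs: Euclidean distances between word embeddings.
    C = cdist([emb[w] for w in w1], [emb[w] for w in w2])
    n, m = C.shape
    # Flow conservation: outgoing flow of each word in sent1 equals a_i,
    # incoming flow of each word in sent2 equals b_j.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_eq[n + j, j::m] = 1.0
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.fun  # minimum total cost of moving all word flows
```

For example, calling wmd on two tokenized sentences whose words all appear in emb returns a dissimilarity score, and identical sentences yield a distance of zero.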

However, a naive WMD method does not perform well on sentence similarity evaluation for several reasons.
– First, WMD assigns word flows based on word frequencies within a sentence. This frequency-based weighting scheme is weak at capturing word importance because it ignores the statistics of the whole corpus.
– Second, the distance between words depends solely on the embeddings of isolated words, without considering the contextual and structural information of the input sentences. Since the meaning of a sentence depends on individual words as well as their interactions, simply considering the alignment between individual words is insufficient for evaluating sentence similarity.

MCL proposed a new syntax-aware word flow calculation method, Syntax-aware Word Mover’s Distance (SynWMD) [2], for sentence similarity evaluation.
– Words are first represented as a weighted graph based on co-occurrence statistics obtained from dependency parse trees. Then, a PageRank-based algorithm is used to infer word importance (see the sketch after this list).
– The word distance model in WMD is enhanced by context extracted from dependency parse trees, as illustrated in Figure 1. The contextual information of words and the structural information of sentences are explicitly modeled as additional subtree embeddings.
– As shown in Table 1, we [...]
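As a rough illustration of the first step, the sketch below builds a word graph from dependency edges (assumed to be supplied by an external parser; the edge list here is a hypothetical placeholder) and infers word importance with NetworkX's PageRank.

```python
import networkx as nx
from collections import Counter

def word_importance(dep_edges):
    """dep_edges: list of (head_word, dependent_word) pairs collected from
    dependency parse trees over the corpus."""
    weights = Counter(dep_edges)            # co-occurrence counts on parse edges
    G = nx.Graph()
    for (u, v), w in weights.items():
        if G.has_edge(u, v):
            G[u][v]["weight"] += w
        else:
            G.add_edge(u, v, weight=w)
    # PageRank scores can serve as word flows in place of raw frequencies.
    return nx.pagerank(G, weight="weight")

# Hypothetical parse edges from two toy sentences.
edges = [("sits", "cat"), ("sits", "the"), ("runs", "dog"), ("runs", "a")]
print(word_importance(edges))
```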

By |September 4th, 2022|News|Comments Off on MCL Research on Sentence Similarity Modeling|

MCL Research on Green Blind Image Quality Assessment

Image quality assessment (IQA) aims to evaluate image quality at various stages of image processing, such as image acquisition, transmission, and compression. Based on the availability of undistorted reference images, objective IQA can be classified into three categories [1]: full-reference (FR), reduced-reference (RR), and no-reference (NR). The last one is also known as blind IQA (BIQA). FR-IQA metrics have achieved high consistency with human subjective evaluation, and many FR-IQA methods, such as SSIM [2] and FSIM [3], have been well developed in the last two decades. RR-IQA metrics utilize only partial features of reference images for quality evaluation. In some application scenarios (e.g., image receivers), users cannot access reference images, so NR-IQA is the only choice. BIQA methods have attracted growing attention in recent years.

Generally speaking, conventional BIQA methods consist of two stages: 1) extraction of quality-aware features and 2) adoption of a regression model for quality score prediction. As the amount of user-generated images grows rapidly, handcrafted feature extraction methods are limited in their power to model a wide range of image content and distortion characteristics. Deep neural networks (DNNs) have achieved great success in BIQA with large pre-trained models in recent years. However, such solutions cannot be easily deployed on mobile or edge devices, so a lightweight solution is desired.

In this work, we propose a novel BIQA model, called GreenBIQA, that aims at high performance, low computational complexity and a small model size. GreenBIQA adopts an unsupervised feature generation method and a supervised feature selection method to extract quality-aware features. Then, it trains an XGBoost regressor to predict quality scores of test images. We conduct experiments on four popular IQA datasets, which include two synthetic-distortion and two authentic-distortion [...]
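A minimal sketch of the regression stage is shown below, assuming that quality-aware features X and mean opinion scores y have already been produced by the feature generation and selection steps; the random arrays and hyper-parameters are placeholders, not the settings used in GreenBIQA.

```python
import numpy as np
from xgboost import XGBRegressor
from scipy.stats import spearmanr

# Hypothetical quality-aware features (n_images x n_selected_features) and MOS labels.
X_train, y_train = np.random.rand(800, 300), np.random.rand(800) * 100
X_test, y_test = np.random.rand(200, 300), np.random.rand(200) * 100

# Train an XGBoost regressor on the selected features to predict quality scores.
reg = XGBRegressor(n_estimators=500, max_depth=5, learning_rate=0.05)
reg.fit(X_train, y_train)
pred = reg.predict(X_test)

# BIQA performance is commonly reported with SROCC between predicted and true scores.
print("SROCC:", spearmanr(pred, y_test).correlation)
```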

By |August 30th, 2022|News|Comments Off on MCL Research on Green Blind Image Quality Assessment|

MCL Research on Effective Knowledge Graph Embedding

A Knowledge Graph (KG) encodes human-readable information and knowledge in graph format. Triples, denoted by (h, r, t), are the basic elements of a KG, where h and t are the head and tail entities while r is the relation connecting them. Both manual effort by domain experts and automated information extraction algorithms have contributed to the creation of many existing KGs. However, given the limited information accessible to each individual and the limitations of algorithms, it is nearly impossible for a KG to capture every single fact about the world. As such, KGs are often incomplete, and many researchers have developed algorithms to predict their missing facts. Knowledge Graph Embedding (KGE) models were first proposed mainly to solve the KG completion problem. Beyond completion, embedding models are also useful in many downstream tasks such as entity classification and entity alignment.

MCL has recently been working on effective knowledge graph embedding. Translation, rotation, and scaling are three commonly used geometric manipulation operations in image processing, and some of them have been successfully used in developing effective KGE models such as TransE and RotatE. Inspired by this synergy, we propose a new KGE model that leverages all three operations. Since the translation, rotation, and scaling operations are cascaded to form a compound one, the new model is named CompoundE. By casting CompoundE in the framework of group theory, we show that quite a few distance-based KGE models are special cases of CompoundE. CompoundE extends simple distance-based scoring functions to relation-dependent compound operations on head and/or tail entities. To demonstrate the effectiveness of CompoundE, we conduct experiments on three popular knowledge graph completion datasets. Experimental results show that CompoundE consistently [...]
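The sketch below illustrates one possible form of such a compound scoring function, applying a relation-specific scaling, block-wise 2-D rotation, and translation to the head entity embedding and measuring its distance to the tail; the exact parameterization used in CompoundE may differ.

```python
import numpy as np

def compound_score(h, t, scale, theta, trans):
    """Distance-based score with a cascaded scale -> rotate -> translate operation.
    Embeddings are treated as d/2 two-dimensional blocks so rotation is well defined."""
    h2 = (h * scale).reshape(-1, 2)                    # relation-specific scaling
    c, s = np.cos(theta), np.sin(theta)
    rot = np.stack([c * h2[:, 0] - s * h2[:, 1],       # block-wise 2-D rotation
                    s * h2[:, 0] + c * h2[:, 1]], axis=1)
    moved = rot.ravel() + trans                        # relation-specific translation
    return -np.linalg.norm(moved - t, ord=1)           # higher score = more plausible triple

# Toy example with random embeddings and relation parameters (hypothetical values).
d = 8
h, t = np.random.randn(d), np.random.randn(d)
scale, theta, trans = np.random.rand(d), np.random.rand(d // 2), np.random.randn(d)
print(compound_score(h, t, scale, theta, trans))
```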

By |August 22nd, 2022|News|Comments Off on MCL Research on Effective Knowledge Graph Embedding|

MCL Research on Semi-Supervised Feature Learning

Traditional machine learning algorithms are susceptible to the curse of feature dimensionality [1]. Their computational complexity increases with high-dimensional features. Redundant features may not help discriminate classes or reduce regression error and should be removed. They may even produce negative effects as their number grows, so their detrimental impact should be minimized or controlled. To deal with these problems, feature learning techniques based on feature selection are commonly applied as a data pre-processing step or as part of the data analysis to reduce model complexity. Feature selection identifies a subspace of discriminant features from the input that describes the input data efficiently, reduces the effects of noise and irrelevant features, and provides good prediction results.
Inspired by information theory and the decision tree, a novel supervised feature selection methodology was recently proposed at MCL. The resulting tests are called the discriminant feature test (DFT) for classification tasks and the relevant feature test (RFT) for regression tasks [2]. The proposed methods belong to the filter class of feature selection methods, which assign a score to each feature dimension and select features based on their ranking. The scores are measured by the weighted entropy for DFT and the weighted MSE for RFT, which reflect the discriminant power with respect to classification targets and the relevance degree with respect to regression targets, respectively. Experimental results show that DFT and RFT can select a distinctly lower dimensional feature subspace robustly while maintaining high decision performance.
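As a rough sketch of the DFT idea (simplified here; the actual test may differ in binning and other details), the score of one feature dimension can be taken as the minimum weighted entropy over candidate split points: a lower score means the dimension partitions the classes more cleanly.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def dft_score(x, y, n_bins=16):
    """Minimum weighted entropy over thresholds on a single feature dimension x
    (lower = more discriminant)."""
    thresholds = np.linspace(x.min(), x.max(), n_bins + 1)[1:-1]
    best = np.inf
    for th in thresholds:
        left, right = y[x <= th], y[x > th]
        if len(left) == 0 or len(right) == 0:
            continue
        w_ent = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
        best = min(best, w_ent)
    return best

# Rank all dimensions of a feature matrix X (n_samples x n_dims) with labels y,
# then keep the lowest-scoring (most discriminant) dimensions.
X, y = np.random.rand(1000, 50), np.random.randint(0, 3, 1000)
scores = np.array([dft_score(X[:, d], y) for d in range(X.shape[1])])
selected = np.argsort(scores)[:10]
```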
The proposed methods also work well in the semi-supervised scenario, where the feature set learned from a limited number of labeled samples has a high intersection over union (IoU) with the feature set learned from the full set of labeled training data. Examples [...]

By |August 15th, 2022|News|Comments Off on MCL Research on Semi-Supervised Feature Learning|

MCL Research on Supervision-Scalable Object Recognition

Supervised learning is the mainstream approach in pattern recognition, computer vision, and natural language processing nowadays due to the great success of deep learning. On one hand, the performance of a learning system should improve as the number of training samples increases. On the other hand, some learning systems may benefit more than others from a large number of training samples. For example, deep neural networks (DNNs) often work better than classical learning systems that consist of two stages: feature extraction and classification. How the quantity of labeled samples affects the performance of learning systems is an important question in the data-driven era.

In fact, humans can learn effectively in a weakly supervised setting, whereas deep learning networks often need more labeled data to achieve good performance. What makes weak supervision and strong supervision different? There is little study on the design of supervision-scalable learning systems. Is it possible to design such a system? Recently, MCL researchers attempted to shed light on these questions by choosing the object recognition problem as an illustrative example [1]. Two learning systems are presented that demonstrate excellent scalable performance with respect to various supervision degrees. The first one adopts the classical histogram of oriented gradients (HOG) features, while the second one, named improved PixelHop (IPHop), uses successive-subspace-learning (SSL) features [2]. The scalable learning system consists of three modules: representation learning, feature learning, and decision learning. In the second and third modules, different designs are proposed to adapt to different supervision levels. Specifically, variance-thresholding-based feature selection and a kNN classifier are used when the training size is small, while when the training size becomes larger, the Discriminant Feature Test (DFT) [...]
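A minimal sketch of the weak-supervision branch (variance-thresholding feature selection followed by a kNN classifier) is given below with scikit-learn; the feature matrices and threshold are hypothetical placeholders rather than the exact settings in [1].

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical HOG or IPHop feature matrices for a small labeled set.
X_train, y_train = np.random.rand(64, 1200), np.random.randint(0, 10, 64)
X_test = np.random.rand(16, 1200)

# Feature learning module: keep only dimensions whose variance exceeds a threshold.
selector = VarianceThreshold(threshold=0.05)
X_train_sel = selector.fit_transform(X_train)

# Decision learning module: a kNN classifier when few labeled samples are available.
clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train_sel, y_train)
pred = clf.predict(selector.transform(X_test))
```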

By |August 9th, 2022|News|Comments Off on MCL Research on Supervision-Scalable Object Recognition|

Welcome Joseph Lin to Join MCL as a Summer Intern

In Summer 2022, we have a new MCL member, Joseph Lin, joining our big family. Here is a short interview with Joseph, along with our warm welcome.

1. Could you briefly introduce yourself and your research interests?
I’m Joseph Lin, a rising first year master’s student at USC in electrical engineering. I completed a bachelor’s degree at UCLA in computer science and am looking forward to advanced studies on the other side of town. I became interested in machine learning and computer vision during my undergraduate studies and I hope to deepen my understanding of fundamental machine learning and focus on efficiency and interpretability in future research.

2. What is your impression about MCL and USC?
I’ve only talked to a couple people at MCL so far, but it’s been impressive how tightly run this group is. Everyone seems very motivated and knowledgeable, especially Professor Kuo. On the other hand, to put it bluntly, my impression of USC as a school is bad because I’m coming from a rival football school, but I’m sure that will change very soon.

3. What is your future expectation and plan in MCL?
I’m finishing up a summer project and getting my first direct contribution to a paper so I’m very excited about that. In the coming two years, I will work as hard as I can in my studies and hopefully have many opportunities to collaborate with other members and put out meaningful research.

By |August 1st, 2022|News|Comments Off on Welcome Joseph Lin to Join MCL as a Summer Intern|

MCL Research on Green Facial Expression Recognition

Facial expression recognition (FER) attempts to understand human emotion through facial image analysis. The technique can be applied to driver status monitoring, affective computing, and serious games. Solutions to FER can be categorized into two types: conventional methods and deep-learning-based (DL-based) methods. While conventional methods use hand-crafted features, DL-based methods conduct end-to-end optimization of certain networks whose performance highly depends on the training data, the network architecture, and the cost function. DL-based methods have become popular in recent years because of their higher performance. Yet, they demand a large model size. Although there has been research on reducing the number of parameters of DL models, it does not solve the computational complexity problem completely.

In this research, we are interested in a lightweight FER solution named ExpressionHop. ExpressionHop has low computational and memory complexity, so it is well suited for mobile and edge computing environments. As shown in Figure 1, ExpressionHop consists of four modules: 1) cropping patches based on facial landmarks, 2) applying filter banks to each patch to generate a rich set of joint spatial-spectral features, 3) conducting the discriminant feature test (DFT) to select features of higher discriminant power, and 4) performing the final classification with a classifier. We conduct performance benchmarking of ExpressionHop against traditional and deep learning methods on several commonly used FER datasets such as JAFFE, CK+, and KDEF. Experimental results in Table 1 show that ExpressionHop achieves comparable or better classification accuracy. Yet, its model has only 30K parameters, which is significantly fewer than those of deep learning methods.
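The sketch below illustrates the first two modules in simplified form, cropping a patch around each facial landmark and projecting it onto a filter bank; the landmark coordinates and the filter bank here are hypothetical placeholders, and DFT-based selection plus a classifier would follow as modules 3 and 4.

```python
import numpy as np

def landmark_patch_features(img, landmarks, filters, patch=16):
    """Modules 1-2: crop a patch around each landmark and apply a filter bank.
    img: 2-D grayscale array; landmarks: list of (x, y) pixel coordinates
    (assumed to lie away from the image border); filters: (n_filters, patch*patch)."""
    half = patch // 2
    feats = []
    for (x, y) in landmarks:
        p = img[y - half:y + half, x - half:x + half].astype(float).ravel()
        feats.append(filters @ p)   # each filter response is one joint spatial-spectral feature
    return np.concatenate(feats)

# Toy example with random data standing in for a face image and its landmarks.
img = np.random.rand(128, 128)
landmarks = [(40, 50), (88, 50), (64, 80)]       # e.g., eye and mouth positions
filters = np.random.randn(8, 16 * 16)
features = landmark_patch_features(img, landmarks, filters)   # length 3 * 8 = 24
```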

As to the future research directions, there are several extensions to be pursued. First, it is desired to extend ExpressionHop to non-frontal [...]

By |July 25th, 2022|News|Comments Off on MCL Research on Green Facial Expression Recognition|

Welcome Yuhuai Liu to Join MCL as A Summer Intern

In Summer 2022, we have a new MCL member, Yuhuai Liu, joining our big family. Here is a short interview with Yuhuai, along with our warm welcome.

1. Could you briefly introduce yourself and your research interests?

My name is Yuhuai Liu, and I am a Master’s student studying Electrical Engineering at the University of Southern California. Before this, I was already interested in Machine Learning and Computer Vision. In EE 569, I learned a brand-new learning paradigm, Green Learning, from Prof. Kuo, and I was really impressed by this work. So I decided to join MCL to keep exploring this new learning method.

2. What is your impression about MCL and USC?

The people in MCL are all very smart and friendly. Each of them is passionate about research and has excellent research projects. Also, I appreciate Professor Kuo’s educational style. He puts a lot of effort into each topic in the lab and guides the students enthusiastically.

3. What is your future expectation and plan in MCL?

I hope I can learn more about Green Learning and produce some solid work at MCL this summer. I also hope to meet more people in MCL.

By |July 17th, 2022|News|Comments Off on Welcome Yuhuai Liu to Join MCL as A Summer Intern|

Professor Kuo Elected as An Academician of Academia Sinica

MCL Director, Professor C.-C. Jay Kuo, was elected as one of the 19 new Academicians in Academia Sinica’s 33rd election of Academicians. The news was announced on July 7, 2022. Professor Kuo was cited for his contributions to the fields of “multimedia computing” and “data science and engineering”.

Academia Sinica means ’Chinese Academy’; its Chinese name is 中央硏究院. It was founded in 1928 in Nanjing and relocated to Nangang, Taipei, in 1949. It is the national academy of the Republic of China (Taiwan). Academia Sinica supports research activities in a wide variety of disciplines, ranging from the mathematical and physical sciences to the life sciences, humanities, and social sciences. As an educational institute, it provides PhD training and scholarships through its English-language Taiwan International Graduate Program.

Professor Kuo acknowledged MCL alumni for this achievement, saying, “This honor is not only a recognition of me but also of the outstanding performance of MCL alumni all over the world.” Furthermore, Professor Kuo was thankful for the strong support of the University of Southern California, the Viterbi School of Engineering, and the Ming Hsieh Department of Electrical and Computer Engineering over the last three decades. He said, “Without the strongest support of the university, school, and department, this would not have happened at all.”

By |July 11th, 2022|News|Comments Off on Professor Kuo Elected as An Academician of Academia Sinica|

Welcome Jiahao Gu to Join MCL as A Summer Intern

In Summer 2022, we have a new MCL member, Jiahao Gu, joining our big family. Here is a short interview with Jiahao, along with our warm welcome.

Jiahao Gu is currently a master’s student in Electrical Engineering at USC. He received his bachelor’s degree from Nanjing University of Posts and Telecommunications in 2020. His research interests include point clouds, machine learning, and computer vision.

1. Could you briefly introduce yourself and your research interests?

My name is Jiahao Gu. I received my bachelor’s degree in Communication Engineering from Nanjing University of Posts and Telecommunications in 2020. I will be a summer intern at MCL. In my spare time, I enjoy reading and traveling. Some of my research interests include machine learning, point cloud and computer vision.

2. What is your impression about MCL and USC?

MCL is a great place to do research. People here are friendly, intelligent, and hard-working. Professor Kuo is attentive to every student, including master’s students like me. Every week, there is a seminar and people have lunch together, which is a great chance for us to share ideas and communicate with each other.

3. What is your future expectation and plan in MCL?

For this summer, I will work with Pranav on point cloud odometry. I hope I can improve our method and get better performance. I am looking forward to learning a lot under the guidance of Professor Kuo and Pranav. After the summer internship, I hope I can keep working closely with Professor Kuo.

By |July 3rd, 2022|News|Comments Off on Welcome Jiahao Gu to Join MCL as A Summer Intern|