News

Welcome MCL New Member Jiahui Zhang

We have a new member, Jiahui Zhang, joining MCL in Spring 2021. Here is a short interview with Jiahui, along with our warm welcome.

1. Could you briefly introduce yourself and your research interests?

My name is Jiahui Zhang, and I am a second-year master's student in the Department of Electrical Engineering at USC. I received my bachelor's degree from Beijing University of Technology. I am a sports fan; in my spare time, I like playing sports and watching games. I also like traveling to see beautiful scenery. My research interests include deep learning and computer vision, especially representation learning.

2. What is your impression about MCL and USC?

USC is a great school that provides students an enjoyable environment for living, communicating, and studying.

MCL is a wonderful lab filled with intelligent researchers, and everyone is an expert in his or her research field. Moreover, the people in MCL, from Professor Kuo to every lab member, are very kind and friendly. People help each other in daily life, study, and research, which builds a warm atmosphere in the lab.

3. What is your future expectation and plan in MCL?

MCL has many great and excellent researchers, and I want to study with them and make friends with them. Academically, I want to complete some projects to accumulate research experience and contribute to the lab.

By |January 24th, 2021|News|Comments Off on Welcome MCL New Member Jiahui Zhang|

MCL Research on New Interpretation of MLP

Our work on new MLP interpretation includes:

Interpretable MLP design [1]:

A closed-form solution exists in two-class linear discriminant analysis (LDA), which discriminates two Gaussian-distributed classes in a multi-dimensional feature space. In this work, we interpret the multilayer perceptron (MLP) as a generalization of a two-class LDA system so that it can handle an input composed of multiple Gaussian modalities belonging to multiple classes. Besides the input layer lin and the output layer lout, the MLP of interest consists of two intermediate layers, l1 and l2. We propose a feedforward design that has three stages: 1) from lin to l1: half-space partitioning accomplished by multiple parallel LDAs; 2) from l1 to l2: subspace isolation, where one Gaussian modality is represented by one neuron; 3) from l2 to lout: class-wise subspace mergence, where each Gaussian modality is connected to its target class. Through this process, we present an automatic MLP design that specifies the network architecture (i.e., the number of layers and the number of neurons per layer) and all filter weights in a feedforward one-pass fashion. The design can be generalized to arbitrary distributions by leveraging the Gaussian mixture model (GMM). Experiments are conducted to compare the performance of the traditional backpropagation-based MLP (BP-MLP) and the new feedforward MLP (FF-MLP).
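To make stage 1 concrete, here is a minimal NumPy sketch of the closed-form two-class LDA that supplies each lin-to-l1 neuron. The synthetic Gaussian data, means, covariance, and sample sizes are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Closed-form two-class LDA, the building block of stage 1 in the FF-MLP
# design: for Gaussian classes with a shared covariance, the optimal
# boundary is linear with weights w = Sigma^{-1} (mu1 - mu0).
rng = np.random.default_rng(0)
mu0, mu1 = np.array([-2.0, 0.0]), np.array([2.0, 0.0])
cov = np.eye(2)
X0 = rng.multivariate_normal(mu0, cov, 200)
X1 = rng.multivariate_normal(mu1, cov, 200)

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
sigma = 0.5 * (np.cov(X0.T) + np.cov(X1.T))   # pooled covariance estimate
w = np.linalg.solve(sigma, m1 - m0)           # closed-form weights
b = -0.5 * w @ (m0 + m1)                      # bias (equal priors assumed)

# Each such (w, b) pair becomes one neuron from lin to l1, realizing
# one half-space partition; no backpropagation is involved.
acc = (np.sum(X0 @ w + b <= 0) + np.sum(X1 @ w + b > 0)) / 400
print(acc)
```

With well-separated classes, the single LDA neuron already classifies almost all samples correctly; multiple such neurons in parallel carve the feature space into the half-spaces that the later stages combine.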

MLP as a piecewise low-order polynomial approximator [2]:

The construction of a multilayer perceptron (MLP) as a piecewise low-order polynomial approximator using a signal processing approach is presented in this work. The constructed MLP contains one input layer, one intermediate layer, and one output layer. Its construction includes the specification of neuron numbers and all filter weights. Through the construction, a one-to-one correspondence between the approximation of an MLP and that of a piecewise low-order polynomial is established. Comparison [...]
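As a toy illustration of the piecewise low-order (here, first-order) idea, the following sketch constructs, rather than trains, a one-hidden-layer ReLU MLP that interpolates sin(x) piecewise linearly. The target function and knot placement are illustrative assumptions; they are not the specific construction of [2].

```python
import numpy as np

# A one-hidden-layer ReLU MLP whose neuron count and all weights are
# specified in closed form: f(x) = vals[0]
#   + sum_i (slopes[i] - slopes[i-1]) * relu(x - knots[i]),
# which reproduces the piecewise linear interpolant exactly on the knots.
relu = lambda z: np.maximum(z, 0.0)

knots = np.linspace(0.0, np.pi, 9)        # breakpoints -> hidden neurons
vals = np.sin(knots)                      # target values at the knots
slopes = np.diff(vals) / np.diff(knots)   # slope of each linear piece
out_w = np.diff(np.concatenate([[0.0], slopes]))  # slope changes

def mlp(x):
    h = relu(np.subtract.outer(x, knots[:-1]))    # hidden activations
    return vals[0] + h @ out_w                    # output layer

x = np.linspace(0.0, np.pi, 200)
err = np.max(np.abs(mlp(x) - np.sin(x)))
print(err)
```

Eight segments on [0, pi] already bound the error by h^2/8 (about 0.02 here); refining the knots shrinks the error quadratically, mirroring how a wider hidden layer tightens a piecewise approximation.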

By |January 17th, 2021|News|Comments Off on MCL Research on New Interpretation of MLP|

MCL Research on AI for Health Care

Research related to the future of health-care systems is always a significant endeavor, as it touches many people's lives. AI advances in the last decade have given rise to new applications whose key aim is to increase the automation level of tasks currently carried out by experts. In particular, medical image analysis is a fast-growing area that has been revolutionized by modern AI algorithms for visual content understanding. Magnetic Resonance Imaging (MRI) is widely used by radiologists to shed light on a patient's health condition. It can provide useful cues to experts, assisting them in deciding on an appropriate treatment plan while causing less discomfort for the patient and incurring lower economic risk in the treatment process.

The question arises of how modern AI could help automate the diagnosis process and provide experts with a second, more objective assessment. Many research ideas from the visual understanding area adopt the deep learning (DL) paradigm, training deep neural networks (DNNs) to learn end-to-end representations for tumor classification, lesion detection, organ segmentation, survival prediction, etc. Yet, one can identify some limitations in using DNNs for medical image analysis. It is well known that it is often hard to collect enough real samples to train DL models. Furthermore, decisions made by machines need to be transparent to physicians; in particular, physicians should be aware of the factors that led to those decisions so that the decisions are more trustworthy. DNNs are often perceived as “black-box” models, since their feature representations and decision paths are hard to interpret.

In MCL, we consider a new line of research on AI for medical image analysis, by adopting the Green Learning (GL) approach to address [...]

By |January 10th, 2021|News|Comments Off on MCL Research on AI for Health Care|

MCL Research on Scalable Weakly-Supervised Graph Learning

The success of deep learning and neural networks often comes at the price of a large amount of labeled data. Weakly-supervised learning (WSL) is an important paradigm that leverages a large amount of unlabeled data to address this limitation. The need for WSL arises in many machine learning problems, and WSL has found wide application in computer vision, natural language processing, and graph-based modeling, where labeled data are expensive to obtain and a large amount of unlabeled data is available.

Among weakly-supervised graph learning methods, label propagation (LP) has demonstrated good adaptability, scalability, and efficiency for node classification. However, LP-based methods are limited in their capability of integrating multiple data modalities for effective learning. Motivated by the recent success of neural networks, there has been an effort to apply neural networks to graph-structured data. One pioneering technique, known as graph convolutional networks (GCNs), has achieved impressive node classification performance on citation networks. However, GCNs fail to exploit the label distribution in the graph structure and are difficult to scale to large graphs.

In this work, we propose a scalable weakly-supervised node classification method on graph-structured data, called GraphHop, where the underlying graph contains attributes of all nodes but labels of only a few nodes. Our method is an iterative algorithm that overcomes the deficiencies of LP and GCNs. With proper initial label vector embeddings, each iteration contains two steps: 1) label aggregation and 2) label update. In Step 1, each node aggregates its neighbors' label vectors obtained in the previous iteration. In Step 2, a new label vector is predicted for each node based on the label of the node itself and the aggregated label information obtained in Step 1. This iterative procedure exploits the neighborhood information and enables GraphHop to [...]
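The two-step iteration described above can be sketched on a toy four-node chain graph with two labeled endpoints. The mean aggregation and simple averaging update below are simplifying assumptions for illustration; GraphHop itself predicts the updated labels with trained classifiers.

```python
import numpy as np

# Toy chain graph 0-1-2-3; node 0 is labeled class 0, node 3 class 1,
# and nodes 1 and 2 are unlabeled.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # adjacency matrix
labeled = {0: np.array([1.0, 0.0]),
           3: np.array([0.0, 1.0])}

F = np.full((4, 2), 0.5)                    # initial label embeddings
for idx, y in labeled.items():
    F[idx] = y

deg = A.sum(axis=1, keepdims=True)
for _ in range(20):
    agg = (A @ F) / deg                     # Step 1: label aggregation
    F = 0.5 * (F + agg)                     # Step 2: label update (toy rule)
    for idx, y in labeled.items():          # clamp the known labels
        F[idx] = y

pred = F.argmax(axis=1)
print(pred.tolist())
```

After a few iterations, the two labels propagate inward and each unlabeled node adopts the class of its nearer labeled endpoint, which is the qualitative behavior the aggregation/update loop is designed to achieve.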

By |January 3rd, 2021|News|Comments Off on MCL Research on Scalable Weakly-Supervised Graph Learning|

MCL Technology Outlook: Green Learning

Sustainability has become a main theme of science and technology in recent years. As civilization continues to develop, humans need to be conscious of keeping the environment clean for future generations. As scientists and engineers of the 21st century, it is our duty to keep green technologies among our top priorities. In the area of artificial intelligence and machine learning, it is urgent to explore novel green machine learning technologies that are competitive with deep learning in performance yet have significantly lower power consumption in training and inference.

Green learning will be the central focus of the USC Media Communications Lab (MCL) in the next decade. Professor Kuo, Director of MCL, has been devoted to this subject since 2015. A sequence of papers on green learning systems has been published. Examples include: PixelHop, PointHop, FaceHop, GraphHop, GenHop, etc. These solutions have common characteristics, including low power consumption, small model sizes, weak supervision and scalability. The underlying principle of MCL’s green learning solutions is successive subspace learning (SSL).

MCL will continue to push the envelope of green learning and develop effective green solutions for natural language processing, knowledge understanding, computer vision, joint audio-visual processing, and 3D data processing.

By |December 27th, 2020|News|Comments Off on MCL Technology Outlook: Green Learning|

Merry Christmas and Happy New Year

2020 has been a fruitful year for MCL. Some members graduated with impressive research work and began a new chapter of life. Some new students joined the MCL family and explored the joy of research. MCL members have made great efforts in their research and published quality research papers in top journals and conferences.

Merry Christmas. Wish all MCL members a happy new year!

 

Image credits:

Image 1: https://freepik.com, resized; Image 2: https://www.homemade-gifts-made-easy.com/, resized.

By |December 20th, 2020|News|Comments Off on Merry Christmas and Happy New Year|

Congratulations to Professor Kuo for Being Elected as NAI Fellow

Congratulations to MCL Director, Professor C.-C. Jay Kuo, for being elected as a Fellow of the National Academy of Inventors (NAI). The announcement was made by the NAI President, Dr. Paul R. Sanberg, on December 8.

This year’s class includes three professors at the USC Viterbi School of Engineering: Gerald Loeb, professor of biomedical engineering and neurology; Keith Chugg, professor of electrical and computer engineering; and Jay Kuo, distinguished professor of electrical and computer engineering and computer science.

The 2020 NAI Fellow class has 175 academic innovators from across the world. It represents 115 research universities and governmental and non-profit research institutes worldwide. They collectively hold over 4,700 issued U.S. patents. Among the 2020 Fellows are members of the National Academies of Sciences, Engineering, and Medicine and of the American Academy of Arts & Sciences, Nobel laureates, and recipients of other honors and distinctions. Their collective body of research covers a range of scientific disciplines, including biomedical engineering, computer engineering, materials science, and physics.

With the election of the 2020 class, there are now 1,403 NAI Fellows worldwide, representing more than 250 prestigious universities and governmental and non-profit research institutes. To date, NAI Fellows hold more than 42,700 issued U.S. patents, which have generated over 13,000 licensed technologies and companies, and created more than 36 million jobs. In addition, over $2.2 trillion in revenue has been generated based on NAI Fellow discoveries.

By |December 13th, 2020|News|Comments Off on Congratulations to Professor Kuo for Being Elected as NAI Fellow|

Professor Kuo Delivered Tencent Keynote Speech at VCIP 2020

MCL Director, Professor C.-C. Jay Kuo, gave an opening keynote at the IEEE International Conference on Visual Communications and Image Processing (VCIP) on December 2, 2020. The meeting was originally scheduled to be held on December 1-4, 2020, in Macau; however, due to the COVID-19 pandemic, it became a virtual event. The keynote was titled “Interpretable and Effective Learning for 3D Point Cloud Registration, Classification and Segmentation.” Here is the abstract:

“3D point cloud analysis and processing find numerous applications in computer-aided design, 3D printing, autonomous driving, etc. Most state-of-the-art point cloud processing methods are based on convolutional neural networks (CNNs). Although they outperform traditional methods in terms of accuracy, they demand heavy supervision and higher training complexity. Besides, they lack mathematical transparency. In this talk, I will present three interpretable and effective machine learning methods for 3D point cloud registration, classification and segmentation, respectively. First, an unsupervised registration method that extracts salient points for matching is presented. Second, an unambiguous way to order points sequentially in a point cloud set is developed. Then, their spatial coordinates can be treated as geometric attributes of a 1D data array. This idea facilitates the classification task. Third, for the segmentation task, we show how to leverage prior knowledge on point clouds to derive an intuitive and effective segmentation method. Extensive experiments are conducted to demonstrate the performance of the three new methods. I will also provide performance benchmarking between these interpretable methods and deep learning methods.”

The keynote was well attended, with many questions during the 10-minute Q&A session. Professor Kuo's keynote was sponsored by Tencent and was thus called the Tencent Keynote Speech.

By |December 7th, 2020|News|Comments Off on Professor Kuo Delivered Tencent Keynote Speech at VCIP 2020|

Happy Thanksgiving!

At this time of Thanksgiving celebration, we hope everyone stays safe during the pandemic and has a good time with beloved family and friends. It is also a good time to pause and reflect on our accomplishments this year and to thank those who have supported us during this hard time. Thanks to every MCL member for the collaboration and hard work throughout the year that kept our research activities running smoothly!

Happy Thanksgiving!

 

Image credits: WallpaperAccess and Clipart Library.

By |November 26th, 2020|News|Comments Off on Happy Thanksgiving!|

MCL Research on Knowledge Graph

Knowledge graphs (KGs) model human-readable knowledge as entity-relation triples. One major branch of KG research is representation learning, in which we try to learn low-dimensional embeddings for entities and relations. Simple arithmetic operations between the embeddings of entities and relations can represent complex real-world knowledge or even discover new knowledge. KGs are rapidly evolving with the enormous amount of new information generated every day. Since it is infeasible to retrain KG embeddings whenever a new entity or relation is encountered, modeling unseen entities and relations remains a challenging task.

There are two main research directions for handling unseen entities. One direction is to infer the embedding of a new entity from its neighboring entities and relations that were observed during training. Researchers have either relied on graph neural networks or designed specialized aggregation functions to collect an unseen node's neighborhood information. The other path is to leverage feature information in entity node metadata. Specifically, entity names and descriptions are often available in textual form upon querying the KG. Recent advances in transformer language models have made it possible to extract high-quality feature representations for contextual information after a minimal amount of fine-tuning. When a transformer language model such as BERT is applied to extract entity representations, the model can generate an embedding for any entity with a textual name or description, and the unseen-entity problem is thus resolved.

RotatE is one of the most effective yet simple KG embedding models invented recently. In RotatE, entities and relations are modeled as complex vectors. Each element of the relation vector serves as an element-wise phase shifter that transforms the source entity into the target entity. We propose a specialized [...]
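The rotation idea can be sketched in a few lines of NumPy. The embedding dimension and the random phases below are illustrative assumptions; only the scoring rule (rotate the head entity element-wise, then measure the distance to the tail) follows RotatE.

```python
import numpy as np

# RotatE in miniature: entities are complex vectors, and each relation
# element exp(i * phase) is a unit-modulus phase shifter. A triple
# (h, r, t) scores well when h rotated by r lands close to t.
dim = 4
rng = np.random.default_rng(0)

head = rng.normal(size=dim) + 1j * rng.normal(size=dim)
phase = rng.uniform(0.0, 2.0 * np.pi, size=dim)
rel = np.exp(1j * phase)                 # |rel_i| = 1 for every element

tail_true = head * rel                   # tail implied by the rotation
tail_rand = rng.normal(size=dim) + 1j * rng.normal(size=dim)

def distance(h, r, t):
    # RotatE scores a triple as -||h o r - t||; smaller distance = better.
    return np.linalg.norm(h * r - t, ord=1)

print(distance(head, rel, tail_true) < distance(head, rel, tail_rand))
```

Because each relation element only rotates (never rescales) its coordinate, composition, inversion, and symmetry patterns among relations reduce to arithmetic on the phases, which is what makes the model both simple and expressive.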

By |November 22nd, 2020|News|Comments Off on MCL Research on Knowledge Graph|