News

MCL Technology Outlook: Green Learning

Sustainability has become a central theme of science and technology in recent years. As civilization continues to develop, humans need to be conscious of preserving the environment for future generations. As scientists and engineers of the 21st century, we must keep green technologies among our top priorities. In the area of artificial intelligence and machine learning, it is urgent to explore a novel green machine learning technology that is competitive with deep learning in performance yet has significantly lower power consumption in training and inference.

Green learning will be the central focus of the USC Media Communications Lab (MCL) in the next decade. Professor Kuo, Director of MCL, has been devoted to this subject since 2015, and a sequence of papers on green learning systems has been published. Examples include PixelHop, PointHop, FaceHop, GraphHop, and GenHop. These solutions share common characteristics, including low power consumption, small model sizes, weak supervision, and scalability. The underlying principle of MCL’s green learning solutions is successive subspace learning (SSL).

MCL will continue to push the envelope of green learning and develop effective green solutions for natural language processing, knowledge understanding, computer vision, joint audio-visual processing, and 3D data processing.

By |December 27th, 2020|News|Comments Off on MCL Technology Outlook: Green Learning|

Merry Christmas and Happy New Year

2020 has been a fruitful year for MCL. Some members graduated with impressive research work and began a new chapter of life. Some new students joined the MCL family and explored the joy of research. MCL members have worked hard on their research and published quality papers in top journals and conferences.

Merry Christmas. We wish all MCL members a happy new year!

 

Image credits:

Image 1: https://freepik.com, resized; Image 2: https://www.homemade-gifts-made-easy.com/, resized.

By |December 20th, 2020|News|Comments Off on Merry Christmas and Happy New Year|

Congratulations to Professor Kuo for Being Elected as NAI Fellow

Congratulations to MCL Director, Professor C.-C. Jay Kuo, for being elected as a Fellow of the National Academy of Inventors (NAI). The announcement was made by the NAI President, Dr. Paul R. Sanberg, on December 8.

This year’s class includes three professors at the USC Viterbi School of Engineering: Gerald Loeb, professor of biomedical engineering and neurology; Keith Chugg, professor of electrical and computer engineering; and Jay Kuo, distinguished professor of electrical and computer engineering and computer science.

The 2020 NAI Fellow class comprises 175 academic innovators from across the world, representing 115 research universities and governmental and non-profit research institutes worldwide. They collectively hold over 4,700 issued U.S. patents. Among the 2020 Fellows are members of the National Academies of Sciences, Engineering, and Medicine and the American Academy of Arts & Sciences, as well as Nobel laureates and recipients of other honors and distinctions. Their collective body of research covers a range of scientific disciplines including biomedical engineering, computer engineering, materials science, and physics.

With the election of the 2020 class, there are now 1,403 NAI Fellows worldwide, representing more than 250 prestigious universities and governmental and non-profit research institutes. To date, NAI Fellows hold more than 42,700 issued U.S. patents, which have generated over 13,000 licensed technologies and companies, and created more than 36 million jobs. In addition, over $2.2 trillion in revenue has been generated based on NAI Fellow discoveries.

By |December 13th, 2020|News|Comments Off on Congratulations to Professor Kuo for Being Elected as NAI Fellow|

Professor Kuo Delivered Tencent Keynote Speech at VCIP 2020

MCL Director, Professor C.-C. Jay Kuo, gave an opening keynote at the IEEE International Conference on Visual Communications and Image Processing (VCIP) on December 2, 2020. The meeting was originally scheduled to be held from December 1-4, 2020, in Macau; due to the COVID-19 pandemic, it became a virtual event. The keynote was titled “Interpretable and Effective Learning for 3D Point Cloud Registration, Classification and Segmentation.” Here is the abstract:

“3D point cloud analysis and processing find numerous applications in computer-aided design, 3D printing, autonomous driving, etc. Most state-of-the-art point cloud processing methods are based on convolutional neural networks (CNNs). Although they outperform traditional methods in terms of accuracy, they demand heavy supervision and higher training complexity. Besides, they lack mathematical transparency. In this talk, I will present three interpretable and effective machine learning methods for 3D point cloud registration, classification and segmentation, respectively. First, an unsupervised registration method that extracts salient points for matching is presented. Second, an unambiguous way to order points sequentially in a point cloud set is developed. Then, their spatial coordinates can be treated as geometric attributes of 1D data array. This idea facilitates the classification task. Third, for the segmentation task, we show how to leverage prior knowledge on point clouds to derive an intuitive and effective segmentation method. Extensive experiments are conducted to demonstrate the performance of the three new methods. I will also provide performance benchmarking between these interpretable methods and deep learning methods.”

The keynote was well attended, with many questions during the 10-minute Q&A session. Professor Kuo’s keynote was sponsored by Tencent and designated the Tencent Keynote Speech.

By |December 7th, 2020|News|Comments Off on Professor Kuo Delivered Tencent Keynote Speech at VCIP 2020|

Happy Thanksgiving!

At this time of Thanksgiving celebration, we hope everyone stays safe during the pandemic and has a good time with their beloved families and friends. It is also a good time to pause and reflect on our accomplishments this year and to be thankful to those who have supported us during this hard time. Thanks to every MCL member for the collaboration and hard work throughout this year that kept our research activities running smoothly!

Happy Thanksgiving!

 

Image credits: WallpaperAccess and Clipart Library.

By |November 26th, 2020|News|Comments Off on Happy Thanksgiving!|

MCL Research on Knowledge Graph

Knowledge graphs (KGs) model human-readable knowledge using entity-relation triples. One major branch of KG research is representation learning, in which we try to learn low-dimensional embeddings for entities and relations. Simple arithmetic operations between the embeddings of entities and relations can represent complex real-world knowledge or even discover new knowledge. KGs evolve rapidly with the enormous amount of new information generated every day. Since it is infeasible to retrain KG embeddings whenever we encounter a new entity or relation, modeling unseen entities and relations remains a challenging task.

There are two main research directions for handling unseen entities. One is to infer the embedding of a new entity from the neighboring entities and relations observed during training. Researchers have either relied on Graph Neural Networks or designed specialized aggregation functions to collect an unseen node’s neighborhood information. The other is to leverage feature information in entity node metadata. Specifically, entity names and descriptions are often available in textual form upon querying the KG. Recent advances in transformer language models have made it possible to extract high-quality feature representations from such contextual information after a minimal amount of fine-tuning. When a transformer language model such as BERT is applied to extract entity representations, it can generate an embedding for any entity with a textual name or description, thus resolving the unseen entity problem.

RotatE is one of the most effective yet simple KG embedding models invented recently. In RotatE, entities and relations are modeled as complex vectors. Each element of the relation vector serves as an element-wise phase shifter that transforms the source entity into the target entity. We propose a specialized [...]
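To make the element-wise phase shift concrete, here is a minimal NumPy sketch of RotatE's distance scoring (a toy illustration under our own naming and dimensions, not the original implementation):

```python
import numpy as np

def rotate_distance(head, rel_phase, tail):
    """RotatE: rotate the head entity by the relation's element-wise
    phases and measure how far it lands from the tail entity.
    A lower distance means a more plausible (head, relation, tail) triple."""
    rotated = head * np.exp(1j * rel_phase)  # element-wise phase shift
    return np.linalg.norm(rotated - tail)

rng = np.random.default_rng(0)
d = 8                                         # toy embedding dimension
head = rng.normal(size=d) + 1j * rng.normal(size=d)
rel_phase = rng.uniform(-np.pi, np.pi, size=d)

tail_true = head * np.exp(1j * rel_phase)     # tail that satisfies the relation
tail_rand = rng.normal(size=d) + 1j * rng.normal(size=d)

print(rotate_distance(head, rel_phase, tail_true))  # ~0: the triple holds
print(rotate_distance(head, rel_phase, tail_rand))  # much larger
```

Training then pushes the distance of observed triples toward zero while keeping corrupted triples far away.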

By |November 22nd, 2020|News|Comments Off on MCL Research on Knowledge Graph|

MCL Research on Natural Image Synthesis

Automatic synthesis of new images based on a collection of sample images from the same class finds broad applications in computer graphics and computer vision. Examples include automatic synthesis of human faces, hand-written digits, etc. On an abstract level, a generative model learns to approximate the probability distribution of data samples and generates new samples from the learned model. Research on generative models has attracted considerable attention in the machine learning community for decades.

 

Image synthesis is challenging for two main reasons. First, it demands a sufficiently large number of images to define meaningful statistics for a target class. Second, to generate new images with similar characteristics, one should find one or more effective representations of samples and process them with a proper mechanism. There has been a resurgence of interest in generative models due to the performance breakthrough achieved by deep learning (DL) technologies in the last 6-7 years. There are, however, concerns with DL-based generative models. Built upon multi-layer end-to-end optimization, the DL technology essentially solves a nonconvex optimization problem. Because of the mathematical complexity associated with nonconvex optimization, DL-based solutions are a black box. Besides, the training of DL-based generative models demands a large amount of computational resources. We propose an explainable and effective generative model, named the Successive Subspace Generative (SSG) model, to address these concerns.

 

Subspaces of descending dimensions are successively constructed in a feedforward manner, which is called the embedding process. Through embedding, the sample distribution of the source and the subsequent subspaces is captured by the embedding parameters and the sample distribution in the core. For generation, samples are first drawn according to the learned distribution in the core. Then, they go from the core to the source by traversing the same [...]
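The embed-then-generate flow described above can be sketched with plain PCA stages standing in for the subspace construction and a single Gaussian for the core distribution (a simplified toy with arbitrary dimensions; the actual SSG model is more elaborate):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
source = rng.normal(size=(500, 64))        # toy "source space" samples

# Embedding: successively construct subspaces of descending dimension
stages, x = [], source
for dim in (32, 8):                        # 64 -> 32 -> 8 (the "core")
    pca = PCA(n_components=dim).fit(x)
    x = pca.transform(x)
    stages.append(pca)

# Capture the sample distribution in the core (here: one Gaussian)
core_mean, core_cov = x.mean(axis=0), np.cov(x, rowvar=False)

# Generation: sample in the core, then traverse back to the source space
samples = rng.multivariate_normal(core_mean, core_cov, size=10)
for pca in reversed(stages):
    samples = pca.inverse_transform(samples)

print(samples.shape)  # (10, 64): generated samples in the source space
```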

By |November 16th, 2020|News|Comments Off on MCL Research on Natural Image Synthesis|

MCL Research on Video Object Tracking

The visual tracking problem has a long history and diverse applications in video surveillance, smart traffic systems, autonomous driving, and so on. Deep learning methods have gradually come to dominate online single object tracking because of their superior tracking accuracy. However, they usually require training on a tremendous number of labeled videos, which are expensive and time-consuming to acquire.

We propose an explainable, self-supervised, salient-point-based approach to track general objects in real time by utilizing attention and features from both the spatial and temporal domains. There are two major parts in our tracking system: tracking adjacent frames by matching salient points, which represent spatial attention, and utilizing temporal information stored in salient points across frames to identify object loss or appearance change. In both parts, the salient points play an important role in capturing spatial-temporal information. The feature of a salient point comes from concatenating the features of two hop layers in a two-stage channel-wise Saab transform. The first hop contains PCA information of local patches at high resolution, while the second hop works at a lower resolution with a larger receptive field, naturally forming a multi-resolution feature extractor that helps capture unusual patterns deserving more attention during tracking.
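As a rough sketch of the two-hop, multi-resolution feature idea, the following uses plain patch PCA as a stand-in for the channel-wise Saab transform (patch sizes, dimensions, and the downsampling scheme are our own simplifications):

```python
import numpy as np
from sklearn.decomposition import PCA

def patches(img, size):
    """Flatten all non-overlapping size x size patches of a 2-D image."""
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, size)
                     for j in range(0, w - size + 1, size)])

rng = np.random.default_rng(0)
crops = rng.normal(size=(50, 16, 16))      # toy grayscale training crops

# Hop 1: PCA of 4x4 patches at full resolution
hop1 = PCA(n_components=8).fit(np.vstack([patches(c, 4) for c in crops]))
# Hop 2: PCA of 4x4 patches at half resolution (larger receptive field)
hop2 = PCA(n_components=8).fit(np.vstack([patches(c[::2, ::2], 4) for c in crops]))

def salient_point_feature(img, i, j):
    """Concatenate hop-1 and hop-2 responses around location (i, j)."""
    f1 = hop1.transform(img[i:i + 4, j:j + 4].reshape(1, -1))
    lo = img[::2, ::2]
    f2 = hop2.transform(lo[i // 2:i // 2 + 4, j // 2:j // 2 + 4].reshape(1, -1))
    return np.concatenate([f1, f2], axis=1).ravel()

print(salient_point_feature(crops[0], 4, 4).shape)  # (16,) multi-resolution feature
```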

We have obtained some preliminary results with the current framework. We evaluate our method on the long-term tracking benchmark TB-50 [1], where the metrics are success plots and precision plots in one-pass evaluation (OPE) mode. This dataset includes 50 video sequences and 29,491 frames in total. The mean success rate indicates the average overlap ratio between the prediction and the ground truth, while the mean precision rate shows how close their centers are. The higher the two values are, the better [...]
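For reference, the two per-frame quantities behind the success and precision plots can be computed as follows (boxes are assumed to be (x, y, w, h) tuples; exact formats vary across benchmark toolkits):

```python
import numpy as np

def success_overlap(pred, gt):
    """IoU between two boxes (x, y, w, h); the success plot thresholds this."""
    x1, y1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    x2 = min(pred[0] + pred[2], gt[0] + gt[2])
    y2 = min(pred[1] + pred[3], gt[1] + gt[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = pred[2] * pred[3] + gt[2] * gt[3] - inter
    return inter / union

def precision_error(pred, gt):
    """Distance between box centers; the precision plot thresholds this."""
    cp = (pred[0] + pred[2] / 2, pred[1] + pred[3] / 2)
    cg = (gt[0] + gt[2] / 2, gt[1] + gt[3] / 2)
    return float(np.hypot(cp[0] - cg[0], cp[1] - cg[1]))

print(success_overlap((10, 10, 20, 20), (15, 15, 20, 20)))  # 225/575 ≈ 0.391
print(precision_error((10, 10, 20, 20), (15, 15, 20, 20)))  # √50 ≈ 7.07 pixels
```

Averaging these over all frames gives the mean success and precision rates reported above.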

By |November 8th, 2020|News|Comments Off on MCL Research on Video Object Tracking|

MCL Research on Spatial Attention

Object detection and recognition are critical to image understanding, and there has been a long competition between supervised and unsupervised approaches to visual attention extraction. We are interested in an unsupervised approach; our method contains two main complementary parts: Spectral Clustering Segmentation and Contour Detection.

Spectral clustering is a mature method for image segmentation, in which an image is viewed as a graph. In a standard spectral clustering pipeline, usually with each pixel as a vertex, a pixelwise affinity matrix is computed from the graph, and then the Laplacian matrix of the affinity matrix is formed. With a predefined number of clusters K, K-means clustering is conducted on the eigenvectors corresponding to the K smallest eigenvalues of the Laplacian matrix to give the final segmentation. In our current method, PointHop features are adopted instead of biological features such as colors or textures to construct the graph for the input image, which is the core contribution to this progress. For each input image, PointHop features are extracted with channel-wise Saab, a K-neighbors graph is constructed from the feature map, and the standard spectral clustering process follows. To evaluate the segments from spectral clustering on PixelHop, contour detection is introduced as a complementary mid-level feature. Here, structured edge detection [1] results are used for contour detection; for each segment from spectral clustering, the largest closed contours within the segment are evaluated by heuristic rules to check whether they form a reasonable object.
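A minimal version of that pipeline, with generic per-pixel feature vectors standing in for the PointHop features, might look like this (the neighbor count and cluster number are arbitrary choices for illustration):

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.cluster import KMeans
from sklearn.neighbors import kneighbors_graph

def spectral_segment(features, k):
    """Standard spectral clustering: k-NN affinity graph -> Laplacian ->
    K-means on the eigenvectors of the K smallest eigenvalues.
    `features` holds one vector per pixel/point."""
    affinity = kneighbors_graph(features, n_neighbors=10, include_self=False)
    affinity = 0.5 * (affinity + affinity.T).toarray()   # symmetrize the graph
    lap = laplacian(affinity, normed=True)
    _, eigvecs = np.linalg.eigh(lap)                     # ascending eigenvalues
    embedding = eigvecs[:, :k]                           # K smallest eigenvalues
    return KMeans(n_clusters=k, n_init=10).fit_predict(embedding)

# Toy example: two well-separated feature clusters should yield two segments
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 0.1, size=(40, 5)),
                   rng.normal(5, 0.1, size=(40, 5))])
labels = spectral_segment(feats, k=2)
```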

During this process, most proposed objects are parts of a main object, e.g., the eyes, face, hands, and arms of a human. In post-processing, adjacent proposed objects are merged to construct bigger objects, and a full rectangle tree of objects can be constructed for each input image.

 

By Hongyu Fu

By |November 3rd, 2020|News|Comments Off on MCL Research on Spatial Attention|

MCL Research on Image Super-resolution

Image super-resolution (SR) is a classic image reconstruction problem in computer vision (CV), which aims at recovering a high-resolution image from a low-resolution one. As a type of supervised generative problem, image SR attracts wide attention due to its strong connection with other CV topics such as object recognition, object alignment, and texture synthesis. Besides, it has extensive real-world applications, for example, medical diagnosis, remote sensing, and biometric identification.

State-of-the-art SR approaches typically fall into two mainstreams: 1) example-based learning methods and 2) deep learning (CNN-based) methods. Example-based methods either exploit external low-high resolution exemplar pairs [1] or learn the internal similarity of the same image across resolution scales [2]. However, the features used in example-based methods are usually traditional gradient-related or simply handcrafted, which may limit model performance. CNN-based SR methods (e.g., SRCNN [3]) do not really distinguish between feature extraction and decision making. Many basic CNN models/blocks have been applied to the SR problem, e.g., GANs, residual learning, and attention networks, and they provide superior SR results. Nevertheless, the non-explainable process and exhaustive training cost are serious drawbacks of CNN-based methods.

By taking advantage of reasonable feature extraction [4], we utilize spatial-spectral compatible cw-Saab features to represent exemplar pairs. In addition, we formulate a Successive-Subspace-Learning-based (SSL-based) method that gradually partitions data into subspaces by feature statistics and applies regression in each subspace for better local approximation. By visualizing the samples in representative subspaces, we find obvious sample similarity in the pixel domain. This demonstrates the efficiency of our method in splitting samples into semantically meaningful subspaces. In the future, we aim to provide such an SSL-based explainable method with high efficiency for the SR problem.
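The partition-then-regress idea can be sketched as follows, with random features standing in for cw-Saab features and K-means standing in for the statistics-driven partitioning (all names and dimensions here are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 16))              # stand-in for LR-patch features
y = np.tanh(X) @ rng.normal(size=(16, 4))   # nonlinear "HR" targets (toy)

# Partition the feature space, then fit one linear regressor per subspace
km = KMeans(n_clusters=8, n_init=10).fit(X)
regs = {c: Ridge().fit(X[km.labels_ == c], y[km.labels_ == c]) for c in range(8)}

def predict(x):
    """Route a sample to its subspace and apply the local regressor."""
    c = km.predict(x.reshape(1, -1))[0]
    return regs[c].predict(x.reshape(1, -1))[0]

preds = np.array([predict(x) for x in X])
print(preds.shape)  # one 4-D "HR" prediction per input sample
```

Each local regressor only has to approximate the nonlinear LR-to-HR mapping within its own subspace, which is the intuition behind the better local approximation mentioned above.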

—  By Wei Wang

 

Reference:

[1] Timofte, Radu, [...]

By |October 25th, 2020|News|Comments Off on MCL Research on Image Super-resolution|