News

MCL Research on Knowledge Graph Entity Typing Prediction

Knowledge graph entity typing (KGET) is the task of predicting missing entity types in a knowledge graph (KG). Previously, KG embedding (KGE) methods tried to solve the KGET task by introducing an auxiliary relation, “hasType”, to model the relationship between entities and their corresponding types. However, a single auxiliary relation has limited expressiveness for the diverse patterns between entities and types.

In this work, we assign multiple auxiliary relations based on the “context” of the types to improve the expressiveness of KGE methods. The context of a type is defined as a collection of attributes of the corresponding entities. As such, neighborhood information is implicitly encoded when the auxiliary relations are introduced. Similar types may share the same auxiliary relation to model their relationship with entities. Fig. 1 shows an example of using multiple auxiliary relations to model the typing relationship for different entity types. From the figure, it is intuitive to use different auxiliary relations to model the typing relationships for “administrative district” and “person”, since these two types are largely different from each other. Likewise, “writer” and “soccer player” call for different auxiliary relations, since they should not be embedded close to each other in the embedding space. However, some types, such as “writer” and “lecturer”, co-occur frequently, so they can adopt the same auxiliary relation to model their relationships with entities.
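The grouping of types by context can be sketched as follows. This is a hypothetical illustration, not the actual method: the toy data, the context representation (relation counts of each type's entities), and the use of k-means clustering are all assumptions made for the sketch.

```python
# Hypothetical sketch: group entity types by their "context" (here, the
# relations their entities participate in) so that similar types share
# one auxiliary relation. Toy data and k-means are illustrative choices.
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.feature_extraction import DictVectorizer

# Toy contexts: each type -> counts of relations seen with its entities.
type_contexts = {
    "writer":         Counter({"wrote": 5, "bornIn": 3, "worksAt": 2}),
    "lecturer":       Counter({"worksAt": 5, "bornIn": 3, "wrote": 2}),
    "soccer_player":  Counter({"playsFor": 6, "bornIn": 3}),
    "admin_district": Counter({"locatedIn": 7, "capitalOf": 2}),
}

types = list(type_contexts)
vec = DictVectorizer(sparse=False)
X = vec.fit_transform([type_contexts[t] for t in types])

# Types in the same cluster adopt the same auxiliary relation.
n_aux_relations = 3
labels = KMeans(n_clusters=n_aux_relations, n_init=10, random_state=0).fit_predict(X)
aux_relation = {t: f"hasType_{c}" for t, c in zip(types, labels)}
print(aux_relation)
```

With these toy counts, “writer” and “lecturer” have similar contexts and land in the same cluster, so they share one auxiliary relation, while “admin_district” gets its own.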

In addition, we propose an iterative training scheme, named KGE-iter, to train KGE models for the KGET task. Fig. 2 illustrates the proposed scheme. The entity embeddings are first initialized by training with only factual triples. Then, typing information is used to fine-tune the entity embeddings. Two training stages [...]
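The alternation between the two stages can be sketched in miniature. This is only a toy, assumed illustration of the idea (a TransE-style translational update on one factual triple and one typing triple), not the actual KGE-iter implementation.

```python
# Toy sketch of alternating between factual-triple training (stage 1)
# and typing-triple fine-tuning (stage 2). The TransE-style update and
# all names here are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
E = rng.normal(size=(2, dim))   # entity embeddings: Dante, Florence
R = rng.normal(size=(1, dim))   # relation embedding: "bornIn"
T = rng.normal(size=(1, dim))   # type embedding: "writer"
A = rng.normal(size=(1, dim))   # auxiliary relation: "hasType"

def step_towards(h, r, t, lr=0.1):
    """One gradient-style step shrinking the residual ||h + r - t||."""
    g = h + r - t
    return h - lr * g, t + lr * g

for _ in range(50):
    # Stage 1: fit the factual triple (Dante, bornIn, Florence).
    E[0], E[1] = step_towards(E[0], R[0], E[1])
    # Stage 2: fine-tune with the typing triple (Dante, hasType, writer).
    E[0], T[0] = step_towards(E[0], A[0], T[0])

print(np.linalg.norm(E[0] + A[0] - T[0]))  # typing residual shrinks over iterations
```

Because the two stages share the entity embedding, each alternation nudges it toward satisfying both the factual and the typing constraints.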

By |February 5th, 2023|News|Comments Off on MCL Research on Knowledge Graph Entity Typing Prediction|

MCL Research on SO(3)-Invariant Point Cloud Classification

Early learning-based point cloud classification methods were typically developed under the assumption that all point clouds in the dataset are well aligned with the canonical axes. In such scenarios, the 3D Cartesian point coordinates were used to learn features. As a consequence, when input point clouds were not aligned, the classification performance dropped significantly. The same assumption holds in the PointHop and PointHop++ methods proposed by MCL.

In our work, SO(3)-Invariant PointHop (or S3I-PointHop for short), we analyze why PointHop fails under pose variations and solve the problem by replacing its pose-dependent modules with rotation-invariant counterparts. Furthermore, we significantly simplify the PointHop pipeline by using a single hop along with multiple spatial aggregation techniques. We begin by aligning the point cloud with its three principal axes. This offers a coarse alignment, though it comes with ambiguities, such as those due to eigenvector signs and object asymmetries. The feature extraction process constructs local and global point features. The geometric features are derived from distances and angles in a local point cloud neighborhood, while the covariance features are obtained by eigendecomposition of the local covariance matrix. Together, the geometric and covariance features form the set of local features. The global features comprise omni-directional octant features of points in 3D space, similar to PointHop. The Saab transform is then conducted.
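Two of the ingredients above can be sketched directly: principal-axis alignment and local covariance eigenvalues, which do not change when the cloud is rotated. This is a minimal illustration under assumed toy data, not the actual S3I-PointHop code.

```python
# Hedged sketch of two rotation-invariant ingredients described above:
# PCA-based coarse alignment, and eigenvalues of per-point local
# covariance matrices. Toy data and parameters are assumptions.
import numpy as np

def pca_align(points):
    """Coarsely align a point cloud with its three principal axes."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt.T  # rotate into the principal frame

def local_covariance_features(points, k=8):
    """Sorted eigenvalues of each point's k-NN covariance matrix."""
    feats = []
    for p in points:
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        cov = np.cov(nbrs.T)
        feats.append(np.sort(np.linalg.eigvalsh(cov))[::-1])
    return np.array(feats)

rng = np.random.default_rng(0)
cloud = rng.normal(size=(64, 3)) * np.array([3.0, 2.0, 1.0])  # anisotropic toy cloud

aligned = pca_align(cloud)
print(np.var(aligned, axis=0))  # per-axis variances, in descending order

# A random rotation must not change the covariance eigenvalue features.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
f1 = local_covariance_features(cloud)
f2 = local_covariance_features(cloud @ q.T)
print(np.abs(f1 - f2).max())  # ~0 up to numerical precision
```

The sign ambiguity mentioned above is visible here too: `pca_align` fixes the axes but not their signs, which is why further disambiguation is needed in practice.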

To aggregate the local and global point features into a global shape feature, conical and spherical aggregation are proposed. For conical aggregation, cones with tips at the origin and at unit distance along each positive and negative principal axis are constructed. Then, only the features of points lying inside the respective cones are [...]
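The conical pooling idea can be sketched as follows. This is an assumed illustration (the cone half-angle, mean pooling, and all names are made up for the sketch), not the method's actual implementation.

```python
# Illustrative sketch of conical aggregation: along each of the six
# signed principal axes, pool the features of points whose direction
# from the origin lies inside a cone around that axis.
import numpy as np

def conical_pool(points, features, half_angle_deg=45.0):
    """Mean-pool per-point features inside six axis-aligned cones."""
    axes = np.vstack([np.eye(3), -np.eye(3)])      # +x, +y, +z, -x, -y, -z
    dirs = points / np.linalg.norm(points, axis=1, keepdims=True)
    cos_thresh = np.cos(np.deg2rad(half_angle_deg))
    pooled = []
    for a in axes:
        inside = dirs @ a > cos_thresh             # angle to axis < half-angle
        pooled.append(features[inside].mean(axis=0) if inside.any()
                      else np.zeros(features.shape[1]))
    return np.concatenate(pooled)                  # one global shape descriptor

rng = np.random.default_rng(1)
pts = rng.normal(size=(100, 3))
feat = rng.normal(size=(100, 4))
desc = conical_pool(pts, feat)
print(desc.shape)  # (24,) = 6 cones x 4 feature dimensions
```

Because the cones are defined relative to the principal axes found earlier, the pooled descriptor inherits the coarse pose invariance of the alignment step.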

By |January 29th, 2023|News|Comments Off on MCL Research on SO(3)-Invariant Point Cloud Classification|

Professor Kuo Being Elevated to 2022 ACM Fellow

In a press release issued on January 18, 2023, ACM named 57 of its members ACM Fellows for wide-ranging and fundamental contributions across disciplines related to computing.

MCL Director, Professor C.-C. Jay Kuo, was among the 57 Fellows of the class of 2022, recognized for his contributions to technologies, applications, and mentorship in visual computing. Professor Kuo has been an influential and long-term leader in video technologies and applications for 30+ years, enduringly impacting both the academic and industry realms. He has made significant contributions to visual computing through industrial collaboration, standardization activities, and training of next-generation leaders. His lab at USC has contributed to collaborative sponsored projects from 70+ companies. He and his students have made key contributions adopted by international image and video coding standards. He has been granted 30 US patents. His video technologies have impacted people’s daily lives, from capturing and watching video on smartphones to viewing high-quality streamed video on large screens.

The ACM Fellows program recognizes the top 1% of ACM Members for their outstanding accomplishments in computing and information technology and excellent service to ACM and the larger computing community. Fellows are nominated by their peers and reviewed by a distinguished selection committee. Professor Kuo will go to San Francisco to attend the Fellow induction ceremony on June 10, 2023.

By |January 22nd, 2023|News|Comments Off on Professor Kuo Being Elevated to 2022 ACM Fellow|

MCL Research on Low-light Video Enhancement

Videos captured under low-light conditions are often noisy and have poor visibility. Low-light video enhancement aims to improve the viewing experience by increasing brightness, suppressing noise, and amplifying detailed texture. The performance of computer vision tasks, such as object tracking and face recognition, can be severely degraded in low-light, noisy environments. Hence, low-light video enhancement is needed to ensure the robustness of computer vision systems. The technology is also in high demand in consumer electronics, such as video capture on smartphones.

A self-supervised adaptive low-light video enhancement (SALVE) method is proposed in this work. SALVE first conducts an effective Retinex-based low-light image enhancement on a few key frames of an input low-light video. Next, it learns mappings from the low-light frames to the enhanced frames via ridge regression. Finally, it uses these mappings to enhance the remaining frames of the input video. SALVE is a hybrid method that combines components from a traditional Retinex-based image enhancement method and a learning-based method. The former component leads to a robust solution that adapts easily to new real-world environments. The latter component offers a fast, computationally inexpensive, and temporally consistent solution. We conduct extensive experiments to show the superior performance of SALVE. Our user study shows that 87% of participants prefer SALVE over prior work.
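The ridge-regression step can be sketched in miniature. This is an assumed illustration using 1-D "frames" and a made-up linear enhancement; the actual SALVE pipeline operates on illumination and reflectance components of real video frames.

```python
# Sketch of the ridge-regression idea: learn a patch-wise mapping from a
# raw key frame to its enhanced version, then apply it to a later frame.
# Toy 1-D frames, the patch size, and the "enhancement" are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

def patches(frame, size=5):
    """Sliding windows of `size` pixels from a 1-D frame."""
    return np.lib.stride_tricks.sliding_window_view(frame, size)

rng = np.random.default_rng(0)
raw_key = rng.uniform(0.0, 0.2, size=200)            # dark key frame
enhanced_key = raw_key * 4.0 + 0.05                  # its enhanced version (stand-in for Retinex output)

# Learn: predict the center pixel of the enhanced frame from the raw patch.
X = patches(raw_key)
y = enhanced_key[2:-2]                               # centers of the 5-pixel windows
model = Ridge(alpha=1e-3).fit(X, y)

# Apply the learned mapping to a new dark frame.
raw_next = rng.uniform(0.0, 0.2, size=200)
pred = model.predict(patches(raw_next))
print(raw_next.mean(), pred.mean())                  # output is brightened
```

Since the mapping is learned from the video's own key frames, no external training data is needed, which is what makes the scheme self-supervised and adaptive.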

The first figure shows an overview of the proposed SALVE method. For intra-coded frames (I-frames), it estimates an illumination component and a reflectance component using the NATLE method. For inter-coded frames (P/B-frames), it predicts these components using a ridge regression model learned from the most recent raw and enhanced I-frame pairs.

The second figure shows a quantitative comparison between our low-light video enhancement method and prior work. To further demonstrate the effectiveness of our method, we [...]

By |January 15th, 2023|News|Comments Off on MCL Research on Low-light Video Enhancement|

Professor Kuo Visited Taiwan and Had Reunions with Local MCL Alumni

Professor Kuo visited Taiwan from December 23, 2022, to January 11, 2023. This was his first visit after the 3-year pandemic period. During the visit, his main responsibility was giving a presentation on “low-light image/video enhancement” to MediaTek, which sponsored this project for the entire year of 2022. The project was successfully conducted by an MCL member, Zohreh Azizi. MediaTek was impressed by the low complexity and high performance of Zohreh’s proposed solution, the “Self-supervised Adaptive Low-light Video Enhancement” (SALVE) method.

Besides MediaTek, Professor Kuo visited a few universities and organizations in Taiwan, including National Taiwan Normal University, National Sun Yat-sen University, National Cheng Kung University, National Yang Ming Chiao Tung University, National Taiwan University, Academia Sinica, and the Institute for Information Industry. Through seminars, he promoted the green learning technology developed at MCL over the past seven years.

Furthermore, Professor Kuo met quite a few MCL alumni in three cities: Kaohsiung (December 25), Hsinchu (January 6), and Taipei (January 7). Most of them graduated from USC/MCL more than a decade ago. Professor Kuo said, “It was wonderful to meet MCL alumni in Taiwan. Glad to know all people are doing very well. It has been my biggest satisfaction to work with so many talented students at USC and see them grow into maturity as researchers/scholars.”

By |January 8th, 2023|News|Comments Off on Professor Kuo Visited Taiwan and Had Reunions with Local MCL Alumni|

Happy New Year!

At the beginning of 2023, we wish all MCL members an even more wonderful year with everlasting passion and courage!

 

Image credit:

VectorStock.com/43400056

Alamy Stock Photo

By |January 1st, 2023|News|Comments Off on Happy New Year!|

Merry Christmas

2022 has been a fruitful year for MCL. Some members graduated with impressive research work and began a new chapter of life. Some new students joined the MCL family and explored the joy of research. MCL members have made great efforts in their research and published quality papers in top journals and conferences. We appreciate all these efforts and the possibilities they open up! We wish all MCL members a merry Christmas!

 

Image credits:

Google images

Freepik

By |December 25th, 2022|News|Comments Off on Merry Christmas|

Congratulations to Min Zhang for Passing Her Defense

Congratulations to Min Zhang for passing her defense on Dec 9, 2022. Her PhD dissertation is titled “Explainable and Green Solutions to Point Cloud Classification and Segmentation”. Her dissertation committee members include Prof. C.-C. Jay Kuo (chair), Prof. Keith Jenkins, and Prof. Stefanos Nikolaidis (outside member). Min’s presentation was highly praised by the committee. We invited Min Zhang to share an abstract of her thesis and her defense experience here. We wish Min Zhang all the best in her future career and life!

Point cloud processing is a fundamental but challenging research topic in the field of 3D computer vision. We specifically study two point cloud processing problems: point cloud classification and point cloud segmentation. Given a point cloud as input, the goal of classification is to label the whole point cloud as one of the object categories, while the goal of segmentation is to label every point as one of the semantic categories. State-of-the-art point cloud classification and segmentation methods are based on deep neural networks. Although deep-learning-based methods provide good performance, their working principle is not transparent. Furthermore, they demand huge computational resources (e.g., long training time even with GPUs). Since it is challenging to deploy them on mobile or terminal devices, their applicability to real-world problems is hindered. To address these shortcomings, we design explainable and green solutions to point cloud classification and segmentation.

We first propose an explainable machine learning method, PointHop, for point cloud classification and further improve its model complexity and performance in PointHop++. Then, we extend the PointHop method to do explainable and green point cloud segmentation. Specifically, an unsupervised feedforward feature (UFF) learning scheme for joint classification and part segmentation of 3D point clouds and [...]

By |December 18th, 2022|News|Comments Off on Congratulations to Min Zhang for Passing Her Defense|

Professor Kuo Delivered Keynote at PCS 2022 on Green Coding

The Picture Coding Symposium (PCS) is an international forum devoted to advances in visual data coding. Established in 1969, it has the longest history of any conference in this area. The 36th event in the series, PCS 2022, was held from December 7-9 in San Jose, California, USA, the heart of Silicon Valley and the cultural and technological epicenter of Northern California.  The conference venue was the San Jose Hilton hotel.

MCL Director, Professor C.-C. Jay Kuo, was invited to deliver a keynote speech on green coding on Dec. 7. The title of his keynote was “Green Coding: Low-Complexity Learning-based Image/Video Coding.” The abstract of his talk was:

“Deep-learning-based coding (or deep coding in short) has attracted much attention in recent years due to its superior rate-distortion (RD) performance. Yet, its huge computational complexity and model sizes are of concern in practical applications.  An alternative learning-based coding, called green coding, has been intensively studied in my lab for the last two and half years. Green coding targets a model size that is significantly smaller than that of deep coding. Furthermore, it has much lower decoding complexity than today’s advanced codecs, such as HEVC and VVC. It is particularly attractive for mobile devices. Green coding uses multi-grids to capture short-, mid-, and long-range correlations in images and adopts vector quantization (VQ) to leverage correlations between images. Extensive experiments are conducted to demonstrate the high RD performance and low complexity of green image coding. Its generalization to green video coding will also be discussed.”

Besides, Professor Kuo visited Santa Clara University on December 6. Hosted by Professor Nam Ling, he and Professor Chia-Wen Lin of National Tsing Hua University gave two lectures, which were events of the US local chapter [...]

By |December 11th, 2022|News|Comments Off on Professor Kuo Delivered Keynote at PCS 2022 on Green Coding|

MCL Research on Generated Samples Quality Assessment

Despite prolific work on evaluating generative models, little research has been done on the quality evaluation of an individual generated sample. To address this problem, a lightweight generated sample quality evaluation (LGSQE) method is proposed in this work. In the training stage of LGSQE, a binary classifier is trained on real and synthetic samples, where real and synthetic data are labeled 0 and 1, respectively. In the inference stage, the classifier assigns a soft label (ranging from 0 to 1) to each generated sample. The value of the soft label indicates the quality level; namely, the quality is better if the soft label is closer to 0. LGSQE can serve as a post-processing module for quality control. Furthermore, LGSQE can be used to evaluate the performance of generative models by aggregating sample-level quality into metrics such as accuracy, AUC, precision, and recall. Experiments are conducted on CIFAR-10 and MNIST to demonstrate that LGSQE preserves the same performance rank order as that predicted by the Fréchet Inception Distance (FID), but with significantly lower complexity.

Fig. 1 shows the pipeline of the proposed method. The LGSQE method consists of three cascaded modules:

Module 1: Representation Learning. Effective local and global representations of images are learned based on the PixelHop framework.

Module 2: Discriminant Feature Test (DFT). DFT selects powerful features, from the large number of representations obtained in Module 1, for a particular task.

Module 3: Binary Classification for Evaluation. We partition the real/generated data into training and testing sets. A binary classifier is trained on the union of real and generated training samples, labeled “0” and “1”, respectively. The classifier assigns a soft score to each testing sample as its quality index.
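The classification module can be sketched as follows. This is an assumed toy illustration: Gaussian vectors stand in for the features produced by Modules 1 and 2, and logistic regression is one possible choice of binary classifier, not necessarily the one used in LGSQE.

```python
# Minimal sketch of Module 3: a real (0) vs. generated (1) classifier
# whose predicted probability is the per-sample quality score, with
# lower scores (closer to "real") meaning higher quality.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 10))   # stand-in features of real samples
fake = rng.normal(1.5, 1.0, size=(500, 10))   # stand-in features of generated samples

X = np.vstack([real, fake])
y = np.array([0] * 500 + [1] * 500)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Soft label in [0, 1] per generated sample: lower = better quality.
good_sample = rng.normal(0.1, 1.0, size=(1, 10))  # close to the real distribution
bad_sample = rng.normal(2.0, 1.0, size=(1, 10))   # far from it
q_good = clf.predict_proba(good_sample)[0, 1]
q_bad = clf.predict_proba(bad_sample)[0, 1]
print(q_good, q_bad)  # q_good is noticeably smaller than q_bad
```

Aggregating such per-sample scores over a whole generated set is what allows LGSQE to rank generative models, as described above.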

Fig. 2 shows the evaluation of generated [...]

By |December 4th, 2022|News|Comments Off on MCL Research on Generated Samples Quality Assessment|