MCL Research on Least-Squares Normal Transform
In the ever-evolving realm of artificial intelligence (AI) and machine learning (ML), deep learning (DL) has reigned supreme for the past decade. However, its black-box nature and heavy computational cost have prompted a quest for alternatives. Enter green learning (GL), a novel approach committed to constructing AI systems that are not just powerful but also interpretable, reliable, and sustainable.
GL is structured around three key modules: unsupervised representation learning, supervised feature learning, and supervised decision learning. Our primary focus is on the second module, which tackles the shortcomings of DL.
In the initial stages of GL, a diverse set of representations is crafted without any guiding supervision. These representations then undergo a discriminant feature test (DFT) in the subsequent module, where they are ranked based on their ability to discriminate. The selected discriminant representations become features. While the unsupervised nature of the first module might make GL representations seem less competitive than their DL counterparts, a remedy is proposed.
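As a rough illustration of this ranking step, the sketch below scores each one-dimensional representation by the weighted label entropy of its best binary split and keeps the lowest-loss (most discriminant) ones. The split criterion, bin count, and function names are illustrative assumptions rather than the exact DFT formulation used in the GL literature.

```python
import numpy as np

def partition_entropy(labels):
    """Shannon entropy of the class distribution within one partition."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def dft_like_loss(feature, labels, num_bins=16):
    """Score one 1-D representation by its best binary split:
    a lower weighted label entropy suggests a more discriminant feature."""
    lo, hi = feature.min(), feature.max()
    candidates = np.linspace(lo, hi, num_bins + 1)[1:-1]   # interior split points
    best = np.inf
    for t in candidates:
        left, right = labels[feature <= t], labels[feature > t]
        if len(left) == 0 or len(right) == 0:
            continue
        loss = (len(left) * partition_entropy(left)
                + len(right) * partition_entropy(right)) / len(labels)
        best = min(best, loss)
    return best

def select_features(X, y, num_keep):
    """Rank all representations and keep the most discriminant ones as features.
    X: (num_samples, num_representations), y: integer class labels."""
    scores = np.array([dft_like_loss(X[:, i], y) for i in range(X.shape[1])])
    return np.argsort(scores)[:num_keep]    # smallest loss = most discriminant
```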
To address this, a new approach is proposed: creating new features through linear combinations of the selected features. Such combinations can yield more discriminant features, but they introduce the challenge of finding optimal combination weights. Previous search algorithms, including probabilistic search, adaptive particle swarm optimization (APSO) search, and stochastic gradient descent (SGD) search, have been explored, yet they remain computationally expensive.
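To make the weight-search idea concrete, here is a minimal sketch in which the weights of one linear combination are tuned by a plain SGD loop against a logistic loss on binary labels. The objective, hyperparameters, and function name are illustrative assumptions, not the specific search procedures cited above.

```python
import numpy as np

def sgd_feature_weights(X_sel, y, lr=0.1, epochs=50, seed=0):
    """Learn weights w for one new feature f_new = X_sel @ w by minimizing a
    logistic loss against binary labels y in {0, 1} (illustrative objective)."""
    rng = np.random.default_rng(seed)
    n, d = X_sel.shape
    w = rng.normal(scale=0.01, size=d)
    for _ in range(epochs):
        for i in rng.permutation(n):            # one SGD pass over the samples
            z = X_sel[i] @ w
            p = 1.0 / (1.0 + np.exp(-z))        # sigmoid
            w -= lr * (p - y[i]) * X_sel[i]     # gradient of the logistic loss
    return w

# The new feature is simply the learned linear combination: f_new = X_sel @ w
```

Note that a search of this kind has to be repeated for every new feature, which is where the computational expense mentioned above comes from.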
Enter the least-squares normal transform (LNT), a game-changer. LNT is a supervised method designed to generate discriminant complementary features efficiently. The new features are termed complementary features because they complement the original input features, referred to as raw features. The work makes two key contributions: it introduces LNT as an efficient tool for generating discriminant complementary features, and it demonstrates in practice that LNT improves classification performance on image-related problems.
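The precise formulation of LNT is given in the paper. As a heavily simplified sketch of the least-squares idea behind it, one could regress one-hot class indicators on the raw features and treat the projections onto the resulting hyperplane normals as complementary features. The function names, bias handling, and ridge regularization below are assumptions for illustration only.

```python
import numpy as np

def fit_lnt_like(X_raw, y, num_classes, reg=1e-3):
    """Illustrative least-squares fit: regress one-hot class indicators on the
    raw features (plus a bias term). The columns of W act as the normal
    vectors of the fitted regression hyperplanes."""
    n, d = X_raw.shape
    Y = np.eye(num_classes)[y]                  # one-hot targets, shape (n, C)
    A = np.hstack([X_raw, np.ones((n, 1))])     # append a bias column
    # Ridge-regularized least-squares solution: W = (A^T A + reg*I)^(-1) A^T Y
    W = np.linalg.solve(A.T @ A + reg * np.eye(d + 1), A.T @ Y)
    return W

def apply_lnt_like(X_raw, W):
    """Project raw features onto the learned normal vectors to obtain
    complementary features for any data split."""
    A = np.hstack([X_raw, np.ones((len(X_raw), 1))])
    return A @ W                                # complementary features, (n, C)
```

In this sketch, the complementary features would be concatenated with the raw features before the decision-learning stage, which matches the role described above: a single closed-form solve replaces an iterative per-feature weight search.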
About the Author: Mahtab Movahhedrad
Mahtab Movahhedrad received her B.S. and M.S. degrees in Electrical Engineering from the University of Tabriz and Tehran Polytechnic, Iran, respectively. She is currently a Ph.D. student in the Department of Electrical Engineering, University of Southern California, advised by Professor Kuo. She joined the Media Communications Lab in Fall 2021. Her research interests include image processing, computer vision, and machine learning.