News

Welcome New MCL Member Qixin Hu

We are so happy to welcome a new MCL member, Qixin Hu, who is joining MCL this semester. Here is a quick interview with Qixin:

1. Could you briefly introduce yourself and your research interests?

My name is Qixin Hu, and I am a first-year Ph.D. student at USC majoring in Electrical Engineering. I am very excited to join MCL as a Ph.D. student. My research interests mainly focus on green learning, image generation, and foundation models.

2. What is your impression of MCL and USC?

I love the life here at USC; all kinds of activities have really helped me fit into the Trojan family. The campus is very beautiful, and I enjoy studying here. The people in MCL are amazing; the group members are all very kind and intelligent, and I learn a lot from the senior members of MCL as well as from Prof. Kuo.

3. What are your future expectations and plans in MCL?

First, I want to fit into MCL and make good friends with the group members. Second, I want to successfully complete my Ph.D. studies. Third, and most importantly, I want to produce insightful work for the research community.

By |September 29th, 2024|News|Comments Off on Welcome New MCL Member Qixin Hu|

Welcome New MCL Member Youngrae Kim

We are so happy to welcome a new MCL member, Youngrae Kim, who is joining MCL this semester. Here is a quick interview with Youngrae:

1. Could you briefly introduce yourself and your research interests?

My name is Youngrae Kim, and I am a first-year PhD student in Computer Engineering at USC. Before starting my PhD, I completed my undergraduate studies at Hongik University and earned my master’s degree at the Korea Advanced Institute of Science and Technology (KAIST). My research interests lie in computer vision, machine learning, and green learning. Outside of academics, I enjoy hiking, working out, and traveling.

2. What is your impression of MCL and USC?

For me, MCL is an impressive research institute where there’s a strong sense of trust, not only in academic collaborations but also on a personal level. The community is welcoming, and I’ve felt a great sense of support from everyone here. I’m excited to be advised by Professor Kuo, one of the leading researchers in computer vision. The atmosphere at USC is also fantastic—people seem genuinely happy, enjoying life while working hard. I’m very glad to have joined USC, particularly MCL.

3. What are your future expectations and plans in MCL?

During my PhD, I plan to focus on Federated Learning for the medical domain. I expect to encounter various challenges in applying this technology to real-world scenarios, and I aim to address these through my research, ultimately proposing a comprehensive thesis that has practical applications. I also look forward to collaborating with peers in different areas, which I believe will broaden my academic perspective.

By |September 22nd, 2024|News|Comments Off on Welcome New MCL Member Youngrae Kim|

Congratulations to Vasileios Magoulianitis for Passing His Defense

Congratulations to Vasileios Magoulianitis for passing his defense today. Vasileios’s thesis is titled “Transparent and Lightweight Medical Image Analysis Techniques: Algorithms and Applications.” His Dissertation Committee includes Jay Kuo (Chair), Justin Haldar, and Qifa Zhou (Outside Member). The Committee members were pleased with the breadth and depth of Vasileios’s thesis. The MCL News team invited Vasileios for a short talk on his thesis and PhD experience. Here is the summary. We thank Vasileios for his kind sharing and wish him all the best on his next journey. A high-level abstract of Vasileios’s thesis is given below:

Thesis Title: Transparent and Lightweight Medical Image Analysis Techniques: Algorithms and Applications

The thesis contains two main research topics: nuclei segmentation in histopathological images and Prostate Cancer (PCa) detection from Magnetic Resonance Imaging (MRI). On the one hand, histopathological images are used to detect and grade cancer. Toward this end, nuclei segmentation is a cornerstone task to reveal the molecular profile of the tissue. Three self-supervised solutions have been introduced: (1) CBM, which uses a parameter-free thresholding pipeline; (2) HUNIS, where a novel adaptive thresholding and false-positive reduction module are proposed; and (3) Local-to-Global NuSegHop, where a novel feature extraction method is proposed. On the other hand, the PCa-RadHop pipeline is proposed for prostate cancer detection from MRI, achieving competitive performance with a model size orders of magnitude smaller than that of other Deep-Learning-based models.

PhD Experience Sharing: The PhD journey within USC and MCL has been an experience I will remember for life. In the first years of the PhD, I had to take many courses to build my theoretical insights and achieve the first milestone of passing the screening exam, which required very rigorous preparation. In my entire PhD life, I had two [...]

By |September 15th, 2024|News|Comments Off on Congratulations to Vasileios Magoulianitis for Passing His Defense|

Congratulations to Zhanxuan Mei for Passing His Defense

Congratulations to Zhanxuan Mei for passing his defense. Zhanxuan’s thesis is titled “Explainable and Lightweight Techniques for Blind Visual Quality Assessment and Saliency Detection.” His Dissertation Committee includes Jay Kuo (Chair), Antonio Ortega, and Ulrich Neumann (Outside Member). The Committee members praised the quality of Zhanxuan’s work very much. The MCL News team invited Zhanxuan for a short talk on his thesis and PhD experience. Here is the summary. We thank Zhanxuan for his kind sharing and wish him all the best on his next journey. A high-level abstract of Zhanxuan’s thesis is given below:

Thesis:

Explainable and Lightweight Techniques for Blind Visual Quality Assessment and Saliency Detection

The thesis contains four main research topics:

We begin by presenting our proposed GreenBIQA method, a novel approach to BIQA characterized by its compact model size, low computational complexity, and high performance. Building on the foundation of GreenBIQA, we extend its application to BVQA through the development of GreenBVQA. To further enhance the performance of GreenBIQA, we introduce a lightweight and efficient image saliency detection method, termed GreenSaliency. Ultimately, we integrate GreenSaliency with GreenBIQA, culminating in the development of the Green Saliency-guided BIQA method (GSBIQA).

PhD Experience Sharing:

The PhD journey at MCL has been an unforgettable experience. Over the course of this long and challenging path, I navigated through the unprecedented COVID era, multiple rounds of rigorous exams, and a diverse range of responsibilities, including teaching assistantships and intensive research projects. Each challenge presented an opportunity to grow, expand my perspectives, and enhance my skill set. I have honed my communication skills, developed the ability to tackle real-world problems through collaborative projects, and cultivated teamwork skills by engaging in discussions and joint efforts with exceptionally talented peers. These valuable experiences and abilities [...]

By |September 8th, 2024|News|Comments Off on Congratulations to Zhanxuan Mei for Passing His Defense|

MCL Research on Supervised Feature Learning

Supervised feature learning is a critical component in machine learning, particularly within the green learning (GL) paradigm, which seeks to create lightweight, efficient models for edge intelligence.

Supervised feature learning involves creating new, more discriminant features from an existing set of features, typically produced during an earlier stage of unsupervised representation learning. The objective is to enhance the discriminative power of the features, thereby improving the model’s accuracy and robustness in decision-making tasks such as classification.

The process typically begins with a rich set of representations obtained from the unsupervised module. These representations are rank-ordered according to their discriminant power using the discriminant feature test (DFT). However, if the initial set of features lacks sufficient discriminant power, new features can be derived through linear combinations of the existing ones. To this end, we proposed the Least-Squares Normal Transform (LNT) for generating new discriminant features. This method transforms a multi-class classification problem into multiple binary classification problems and then applies linear regression to generate new features that are more discriminant than the original ones. These new features are shown to improve the performance of the classifier, demonstrating the effectiveness of the supervised feature learning process within the GL framework.
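The one-versus-rest splitting and least-squares step described above can be sketched in a few lines. This is a toy illustration under our own assumptions (the function name, toy data, and one-feature-per-split choice are ours), not MCL's LNT implementation:

```python
# Hedged sketch of least-squares feature generation in the spirit of LNT:
# the multi-class problem is split into one-vs-rest binary problems, and a
# linear least-squares regressor per split yields one new feature.
import numpy as np

def lnt_like_features(X, y, n_classes):
    """Generate one new feature per one-vs-rest split via least squares."""
    X_aug = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias column
    new_feats = []
    for c in range(n_classes):
        target = (y == c).astype(float)               # binary target for class c
        w, *_ = np.linalg.lstsq(X_aug, target, rcond=None)
        new_feats.append(X_aug @ w)                   # regression output = new feature
    return np.stack(new_feats, axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 8))                          # toy feature matrix
y = rng.integers(0, 3, size=90)                       # toy 3-class labels
F = lnt_like_features(X, y, n_classes=3)
print(F.shape)  # (90, 3): one new discriminant feature per class split
```

The new columns can be appended to the original feature set before the final classifier, which is the role LNT features play in the GL pipeline.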

References:

X. Wang, V. K. Mishra, and C.-C. J. Kuo, “Enhancing edge intelligence with highly discriminant LNT features,” in 2023 IEEE International Conference on Big Data (BigData), IEEE, 2023, pp. 3880–3887.

By |September 1st, 2024|News|Comments Off on MCL Research on Supervised Feature Learning|

MCL Research on Green Saliency-guided Blind Image Quality Assessment (GSBIQA)

Objective image quality assessment (IQA) plays a crucial role in various multimedia applications and is generally categorized into three distinct types: Full-Reference IQA (FR-IQA), Reduced-Reference IQA (RR-IQA), and No-Reference IQA (NR-IQA). FR-IQA involves a direct comparison between a distorted image and its original reference image to evaluate quality. RR-IQA, in contrast, relies on partial information from reference images to assess the quality of the target images. NR-IQA, also known as blind image quality assessment (BIQA), is indispensable in situations where reference images are unavailable, such as at the receiver’s end or for user-generated content on social media platforms. The increasing prevalence of such platforms and of mobile applications, where reference images are typically not accessible, has driven a significant rise in the demand for BIQA.

The challenge of BIQA lies in its need to handle a wide variety of content and the presence of multiple types of distortions. Although many BIQA methods leverage deep neural networks (DNNs) and incorporate saliency detectors to improve performance, their large model sizes pose significant limitations for deployment on resource-constrained devices.

To overcome these challenges, we propose a novel non-deep-learning BIQA method, termed Green Saliency-guided Blind Image Quality Assessment (GSBIQA). GSBIQA is distinguished by its compact model size, low computational requirements, and strong performance. The method integrates a lightweight saliency detection module that aids in data cropping and decision ensemble processes, generating features that effectively mimic the human attention mechanism. The GSBIQA framework is composed of five key processes: 1) green saliency detection, 2) saliency-guided data cropping, 3) GreenBIQA feature extraction, 4) local patch prediction, and 5) saliency-guided decision ensemble. [...]
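The saliency-guided cropping and decision-ensemble steps (processes 2, 4, and 5 above) can be sketched as follows. The saliency map, patch predictor, and weighting rule here are toy stand-ins chosen by us for illustration, not MCL's GSBIQA code:

```python
# Hedged skeleton of a saliency-guided patch ensemble: crop the most salient
# patches, predict a quality score per patch, and fuse the scores with
# saliency weights to mimic a human-attention-driven decision ensemble.
import numpy as np

def saliency_guided_score(image, saliency, patch=32, top_k=4, predict=None):
    """Score an image from its top_k most salient patches."""
    H, W = image.shape[:2]
    coords = [(i, j) for i in range(0, H - patch + 1, patch)
                     for j in range(0, W - patch + 1, patch)]
    # saliency-guided cropping: rank patch locations by mean saliency
    coords.sort(key=lambda ij: -saliency[ij[0]:ij[0]+patch,
                                         ij[1]:ij[1]+patch].mean())
    scores, weights = [], []
    for i, j in coords[:top_k]:
        scores.append(predict(image[i:i+patch, j:j+patch]))   # local patch prediction
        weights.append(saliency[i:i+patch, j:j+patch].mean()) # attention weight
    # saliency-weighted decision ensemble
    return float(np.average(scores, weights=weights))

rng = np.random.default_rng(1)
img = rng.random((96, 96))                     # toy grayscale image
sal = rng.random((96, 96))                     # toy saliency map
mos = saliency_guided_score(img, sal, predict=lambda c: c.mean() * 100)
print(0.0 <= mos <= 100.0)                     # fused score stays in range
```

In the actual pipeline, the per-patch predictor would be the GreenBIQA regressor and the saliency map would come from the green saliency detection module.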

By |August 24th, 2024|News|Comments Off on MCL Research on Green Saliency-guided Blind Image Quality Assessment (GSBIQA)|

MCL Research on Green Raw Image Demosaicking

Digital cameras typically use a color filter array (CFA) over the image sensor to capture color images, with the Bayer array being the most common CFA. This array captures only one color per pixel, resulting in raw data that lacks two-thirds of the necessary color information. Demosaicking is the process used to reconstruct the complete color image from this partial data. Simple interpolation methods like bilinear and bicubic often produce suboptimal results, especially in complex areas with textures and edges.

To improve image quality, adaptive directional interpolation methods align the interpolation with image edges, using techniques like gradients or Laplacian operators to detect horizontal and vertical edges. Recently, deep learning techniques have set new performance benchmarks in demosaicking, but their complexity and resource demands pose challenges for deployment on edge devices with limited processing power and storage.

To address these issues, a lightweight “green learning” approach is proposed for demosaicking on edge devices. Unlike traditional deep learning models, green learning does not rely on neural networks. Our proposed model can be explained in three stages; see Fig. 1 for details. The first stage is data processing, where an interpolation method is used to estimate the RGB values. Channels are then categorized into subchannels based on their positions in the CFA array, which improves prediction accuracy in the learning stage (see Fig. 2 for details). In the feature processing stage of the green learning approach for demosaicking, three sequential modules work together to enhance the model’s performance. First, unsupervised representation learning techniques, such as the Saab transform or Successive Subspace Learning (SSL), are used to efficiently extract meaningful representations from raw data. Next, supervised feature selection is performed using the Discriminant Feature Test (DFT) and the Relevant Feature Test (RFT) to identify and [...]
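The data-processing stage described above (interpolation plus CFA position subchannels) can be sketched with plain NumPy. This is a toy illustration under our own assumptions (an RGGB layout and a normalized 3x3 bilinear kernel), not the MCL pipeline itself:

```python
# Hedged sketch of the demosaicking data-processing stage: bilinearly fill a
# sparsely-sampled Bayer channel via a normalized 3x3 convolution, and note
# how CFA positions define the subchannels used in the learning stage.
import numpy as np

def conv3(x, k):
    """3x3 convolution with zero padding, written with plain NumPy."""
    p = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += k[di, dj] * p[di:di + x.shape[0], dj:dj + x.shape[1]]
    return out

def bilinear_fill(sparse, mask):
    """Estimate missing samples as a distance-weighted average of neighbors."""
    k = np.array([[0.25, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.25]])
    return conv3(sparse, k) / np.maximum(conv3(mask, k), 1e-8)

rng = np.random.default_rng(2)
raw = rng.random((8, 8))                        # one value per pixel (Bayer raw)
red_mask = np.zeros((8, 8))
red_mask[0::2, 0::2] = 1.0                      # red sites in an RGGB layout
red_full = bilinear_fill(raw * red_mask, red_mask)
print(red_full.shape)                           # full-resolution red channel: (8, 8)
```

The same masking scheme applied at `[0::2, 1::2]`, `[1::2, 0::2]`, and `[1::2, 1::2]` yields the green and blue subchannels, which are then processed separately, as the post describes.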

By |August 18th, 2024|News|Comments Off on MCL Research on Green Raw Image Demosaicking|

MCL Research on Green Saliency-guided Blind Image Quality Assessment (GSBIQA)

Objective image quality assessment (IQA) is pivotal in various multimedia applications. It can be categorized into three distinct types: Full-Reference IQA (FR-IQA), Reduced-Reference IQA (RR-IQA), and No-Reference IQA (NR-IQA). FR-IQA directly compares a distorted image against a reference or original image to assess quality. RR-IQA, on the other hand, uses partial information from the reference images to evaluate the quality of the target images. NR-IQA, also known as blind image quality assessment (BIQA), becomes essential in scenarios where reference images are unavailable, such as at the receiver’s end or for user-generated content on social media. The demand for BIQA has surged with the increasing popularity of such platforms and of mobile applications, where reference images are typically unavailable.

The challenge in BIQA lies in the diversity of content and the presence of mixed distortion types. While many BIQA methods employ deep neural networks (DNNs) and incorporate saliency detectors to enhance performance, their large model sizes limit deployment on resource-constrained devices.

To address this challenge, we introduce a novel and non-deep-learning BIQA method with a lightweight saliency detection module, called Green Saliency-guided Blind Image Quality Assessment (GSBIQA). It is characterized by its minimal model size, reduced computational demands, and robust performance. The lightweight saliency detector in GSBIQA facilitates data cropping and decision ensemble and generates useful features in BIQA that emulate the attention mechanism. The GSBIQA method is structured around five key processes: 1) green saliency detection, 2) saliency-guided data cropping, 3) Green BIQA feature extraction, 4) local patch prediction, and 5) saliency-guided decision ensemble. Experimental results show that the performance of [...]

By |August 11th, 2024|News|Comments Off on MCL Research on Green Saliency-guided Blind Image Quality Assessment (GSBIQA)|

MCL Research on Prostate Lesion Detection from MRI Images

Research in healthcare systems has mainly focused on automating several tasks in the clinical pipeline, aiming at enhancing or expediting the physician’s diagnosis. Prostate Cancer (PCa) is widely known as one of the most frequently diagnosed cancers in men. If diagnosed early, the mortality rate is almost zero; yet, should it go under the radar until the metastasis stage, the survival rate plummets to 31% [1]. For diagnosis, after a high Prostate-Specific Antigen (PSA) level is detected, patients are recommended to undergo an MRI screening. Those patients with suspicious-looking lesions on the prostate gland will eventually undergo biopsy. The histology gives the definitive answer as to whether a patient suffers from cancer. Nevertheless, it is observed in real practice that urologists tend to over-diagnose patients with clinically significant PCa (csPCa), thus increasing the number of unnecessary biopsies. This, in turn, increases diagnostic costs and patient discomfort.

Computer vision empowered by AI has shown promising results in the last decade. Computer-Aided Diagnosis (CAD) tools benefit from AI’s rapid evolution, and many works have been proposed to automatically perform lesion detection and segmentation. Even though the Deep Learning (DL) paradigm is ubiquitous in modern AI, medical applications require more transparency behind feature extraction, and DL is therefore often deemed a “black box” by physicians. Our proposed pipeline, PCa-RadHop [2], employs a novel linear module for data-driven feature extraction, and hence the decision making becomes more interpretable. PCa-RadHop receives three different modalities as input from the MRI scanner (i.e., T2w, ADC, DWI), pertinent to PCa diagnosis. It consists of two stages. The first stage calculates a probability map of csPCa presence in a voxel-wise manner, while the second stage is meant to reduce the false positive rate on that heatmap [...]
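The two-stage structure described above can be sketched abstractly. Both "classifiers" here are toy stand-ins we invented for illustration; the real pipeline derives its voxel scores from RadHop features:

```python
# Hedged two-stage sketch mirroring the PCa-RadHop structure: stage one
# scores every voxel to form a probability map, and stage two re-scores only
# the candidate voxels to prune false positives from that map.
import numpy as np

def two_stage_detect(volume, stage1, stage2, thresh=0.5):
    """Return a voxel-wise probability map and the detections surviving stage 2."""
    prob_map = stage1(volume)                      # voxel-wise csPCa probabilities
    candidates = np.argwhere(prob_map > thresh)    # suspicious voxels only
    keep = [tuple(c) for c in candidates
            if stage2(volume, tuple(c)) > thresh]  # false-positive reduction
    return prob_map, keep

rng = np.random.default_rng(3)
vol = rng.random((4, 8, 8))                        # toy single-modality volume
pm, hits = two_stage_detect(
    vol,
    stage1=lambda v: v,                            # placeholder probability map
    stage2=lambda v, c: v[c] * 0.9)                # stricter second look
print(len(hits) <= int((pm > 0.5).sum()))          # True: stage 2 only prunes
```

The key design point carried over from the post is that the second stage never adds detections; it only removes weak candidates from the stage-one heatmap.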

By |August 4th, 2024|News|Comments Off on MCL Research on Prostate Lesion Detection from MRI Images|

MCL Research on Green Image Super-resolution

Single image super-resolution (SISR) is an intensively studied topic in image processing. It aims at recovering a high-resolution (HR) image from its low-resolution (LR) counterpart. SISR finds wide real-world application in areas such as remote sensing, medical imaging, and biometric identification. Besides, it attracts attention due to its connection with other tasks (e.g., image registration, compression, and synthesis). To deal with this ill-posed problem, we recently proposed two methods, LSR [1] and LSR++ [2], which provide reasonable performance at effectively reduced complexity.

LSR consists of three cascaded modules:

Unsupervised Representation Learning, which creates a pool of rich and diversified representations in the neighborhood of a target pixel,

Supervised Feature Learning via the Relevant Feature Test (RFT [3]), which automatically selects a subset from the representation pool that is most relevant to the underlying super-resolution task, and

Supervised Decision Learning, which predicts the residual of the target pixel from the selected features through regression with classical machine learning tools and effectively fuses the predictions for more stable results.

LSR++ builds on LSR, with emphasis on sample alignment, a more effective sample preparation process that is suitable for all patch-based computer vision problems. As illustrated in Fig. 1, based on gradient histograms of patches along the eight reference directions (Fig. 1(a)), patch alignment uses patch rotations and flips to match standard templates of gradient histograms, where D_max is the direction with the largest cumulative gradient magnitude, and D_max_orth_b and D_max_orth_s refer to the orthogonal directions to D_max with the bigger and smaller cumulative gradient magnitudes, respectively. By modifying the set (D_max, D_max_orth_b, D_max_orth_s) of a patch, patch alignment can regularize the edge pattern within the patch in terms of the direction perpendicular to the edge (D_max) and the directions along the edge (D_max_orth_b, D_max_orth_s). The process of patch [...]
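The alignment idea above can be sketched by searching the eight rotation/flip variants of a patch for the one matching a canonical orientation. The selection criterion here (maximal horizontal-gradient energy, i.e., D_max mapped to the horizontal axis) is our simplification of the gradient-histogram templates, not the LSR++ rule itself:

```python
# Hedged sketch of patch alignment in the spirit of LSR++: enumerate the
# eight rotations and flips of a square patch and keep the variant whose
# dominant gradient direction matches a canonical template.
import numpy as np

def dihedral_variants(patch):
    """Yield the eight rotation/flip variants of a square patch."""
    for k in range(4):
        rot = np.rot90(patch, k)
        yield rot
        yield np.fliplr(rot)

def h_energy(p):
    """Cumulative horizontal-gradient magnitude (toy stand-in for D_max)."""
    return float(np.abs(np.diff(p, axis=1)).sum())

def align_patch(patch):
    """Pick the variant maximizing horizontal-gradient energy."""
    return max(dihedral_variants(patch), key=h_energy)

patch = np.tile(np.arange(4.0), (4, 1)).T   # purely vertical gradient
aligned = align_patch(patch)
print(h_energy(aligned) >= h_energy(patch)) # True: gradient re-oriented
```

Because every sample is brought to the same canonical orientation before regression, the downstream LSR predictor sees far fewer distinct edge configurations, which is the benefit the post attributes to sample alignment.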

By |July 28th, 2024|News|Comments Off on MCL Research on Green Image Super-resolution|