News

MCL Research on Nuclei Segmentation

Nuclei Segmentation is a key step in understanding the distribution, size, and shape of nuclei in the underlying tissue. Traditionally, pathologists view histology slides under the microscope to analyze the nuclear structure. However, this process is time-consuming and is prone to inter-reader variability. An AI-based segmentation algorithm can aid pathologists in cancer detection and prognosis, and help speed up the cancer screening procedure. 

While there are several deep learning methods addressing this problem, we propose a Green Nuclei Segmentation algorithm that uses a simple, reliable, and modular approach to delineate nuclei in a histopathology slide. Green U-shaped Learning (GUSL) is a four-level pipeline built on three main modules: representation learning using PixelHop, feature selection using the Relevant Feature Test (RFT), and supervised learning using an XGBoost regressor. The different levels examine the histopathology image at multiple resolutions, segmenting the nuclei in a coarse-to-fine manner: at each level, we correct the previous level's predictions through residue correction. While this model already gives good results, performance can be further improved by refining the boundary regions to yield precise nuclei contours.
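The coarse-to-fine residue-correction loop can be sketched as follows. This is a minimal illustration, not the actual implementation: the PixelHop/RFT/XGBoost stages are abstracted into generic per-level regressors, and the function and parameter names are our own.

```python
import numpy as np

def upsample2x(pred):
    # Nearest-neighbor upsampling: a stand-in for the inter-level resizing.
    return pred.repeat(2, axis=0).repeat(2, axis=1)

def gusl_predict(levels, regressors):
    """Coarse-to-fine residue correction.

    `levels` lists per-level feature maps from coarsest to finest
    resolution, as (H, W, D) arrays; `regressors` are per-level
    predictors exposing .predict(features) -> (H*W,) (an XGBoost
    regressor in the pipeline, any regressor here).
    """
    pred = None
    for feats, reg in zip(levels, regressors):
        h, w, d = feats.shape
        level_out = reg.predict(feats.reshape(-1, d)).reshape(h, w)
        if pred is None:
            pred = level_out                     # coarsest level: direct prediction
        else:
            pred = upsample2x(pred) + level_out  # finer levels: correct the residue
    return np.clip(pred, 0.0, 1.0)               # probability-like nuclei mask
```

Each finer level thus only needs to learn the residual error of the upsampled coarse prediction, rather than the full segmentation map.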

By |April 20th, 2025|News|Comments Off on MCL Research on Nuclei Segmentation|

MCL Research on Seismic Data Processing

Seismic waves are mechanical waves generated by earthquakes that travel through the Earth. Body waves consist of fast, compressional primary (P) waves and slower, shear secondary (S) waves. With large datasets of seismogram recordings, researchers train machine learning models to automatically pinpoint P‑ and S‑wave arrival times. This is essential for real‑time seismic monitoring and early warning systems.

Our Green Learning framework streamlines this process while boosting interpretability. We begin by slicing raw seismic recordings into overlapping three-channel windows and assigning each a continuous pseudo-label (ranging from 0 to 1) that reflects how accurately it is aligned to a P- or S-wave onset. Treating these windows as 3-channel images, we extract multi-scale features via multiple Saab transform layers and select the most powerful features at each scale using Relevant Feature Test (RFT) modules. An XGBoost regressor then produces a continuous output signal, from which P- and S-wave arrivals are recovered by simple peak detection. Compared to EQTransformer, the state-of-the-art deep learning model, our model uses far fewer parameters.
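The pseudo-labeling and peak-detection steps can be sketched in a few lines. The triangular label shape, the 0.5 threshold, and the function names are illustrative assumptions, not the framework's exact choices.

```python
import numpy as np

def pseudo_label(window_center, onset, half_width=50):
    """Continuous pseudo-label in [0, 1] for a window: 1 when the window
    is centered exactly on the wave onset, decaying linearly to 0 as the
    misalignment grows (half_width is an illustrative parameter)."""
    return max(0.0, 1.0 - abs(window_center - onset) / half_width)

def pick_arrivals(score, threshold=0.5):
    """Recover arrival indices from the regressor's continuous output by
    simple peak detection: local maxima at or above a threshold."""
    s = np.asarray(score, dtype=float)
    is_peak = (s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]) & (s[1:-1] >= threshold)
    return np.flatnonzero(is_peak) + 1  # +1 compensates for the trimmed edge
```

In practice a minimum inter-peak distance would also be enforced so that one arrival does not trigger multiple detections.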

By |April 13th, 2025|News|Comments Off on MCL Research on Seismic Data Processing|

MCL Research on Image Denoising

Image denoising is a computer vision technique that removes noise from images while preserving essential structures and textures. It plays a critical role in applications such as photography enhancement, medical imaging, and remote sensing.

To address such problems, we have employed GUSL, a Green Learning-based pipeline tailored for image denoising. Noisy images are resized to multiple resolutions, and Green Learning techniques such as PixelHop, RFT, and LNT are applied at each level to extract features independently. Each level progressively refines the denoising result by correcting the residuals from the previous level. While this approach yields promising results, further refinement is needed to enhance performance in smooth and texture-rich regions.
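The multi-resolution resizing step can be illustrated with a simple box-averaging pyramid. This is a stand-in for whatever resizing the pipeline actually uses; the names are ours.

```python
import numpy as np

def pyramid(img, n_levels=3):
    """Build the multi-resolution inputs for the denoising pipeline:
    each level halves the resolution by 2x2 box averaging. img is a
    (H, W) grayscale array; returns levels coarsest-first, matching
    the coarse-to-fine residue-correction order."""
    levels = [img]
    for _ in range(n_levels - 1):
        h, w = levels[-1].shape
        x = levels[-1][: h - h % 2, : w - w % 2]   # crop odd rows/cols
        levels.append(x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return levels[::-1]
```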

By |April 6th, 2025|News|Comments Off on MCL Research on Image Denoising|

MCL Research on Video-Text Retrieval

Image-text retrieval is a fundamental task in image understanding. This task aims to retrieve the most relevant information from another modality based on the given image or text. Recent approaches focus on training large neural networks to bridge the gap between visual and textual domains. However, these models are computationally expensive and not explainable regarding how the data from different modalities are aligned. End-to-end optimized models, such as large neural networks, can only output the final results, making it difficult for humans to understand the reasoning behind the model’s predictions.

Hence, we propose a green learning solution, Green Multi-Modal Alignment (GMA), for computational efficiency and mathematical transparency. We reduce the trainable parameters to 3% of those required to fine-tune the full image and text encoders. The model is composed of three modules: (1) clustering, (2) feature selection, and (3) alignment. The clustering module divides the whole dataset into subsets of similar image and text pairs, reducing the divergence among training samples. The feature selection module reduces the feature dimension and mitigates the computational requirements; the importance of each feature can be interpreted as statistical evidence supporting our reasoning. The alignment is conducted by linear projection, which guarantees an inverse projection for both retrieval directions, namely image-to-text and text-to-image.
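The linear-projection alignment can be sketched as a least-squares fit between paired embeddings. This is a minimal illustration with made-up names, not the GMA implementation (which also involves the clustering and feature-selection modules).

```python
import numpy as np

def fit_alignment(img_feats, txt_feats):
    """Fit the linear projection W mapping image features onto text
    features by least squares: min ||img @ W - txt||^2. Because the
    map is linear, its (pseudo-)inverse supports the reverse
    retrieval direction."""
    W, *_ = np.linalg.lstsq(img_feats, txt_feats, rcond=None)
    return W

def retrieve(query_img, txt_feats, W):
    # Project the image query into the text space, rank by cosine similarity.
    q = query_img @ W
    sims = (txt_feats @ q) / (
        np.linalg.norm(txt_feats, axis=1) * np.linalg.norm(q) + 1e-12
    )
    return np.argsort(-sims)   # best match first
```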

Experimental results show that our model outperforms state-of-the-art retrieval models in text-to-image and image-to-text retrieval on the Flickr30k and MS-COCO datasets. Moreover, our alignment process can incorporate visual and text encoders trained separately and generalizes well to unseen image-text pairs.

By |March 30th, 2025|News|Comments Off on MCL Research on Video-Text Retrieval|

MCL Research on Enhanced Object Detection

Enhancing image feature extraction to boost image classification accuracy has been a significant research focus at the MCL lab. Initially, PixelHop++ was developed to efficiently extract image features and perform accurate image classification. Subsequently, the Least-Squares Normal Transform (LNT) was introduced to further enhance image features, improving classification results with PixelHop++ on standard image databases such as MNIST and FMNIST. Despite achieving commendable performance, further refinements remain desirable to push accuracy limits even higher.

To address this, we propose a novel pipeline of four distinct experimental setups involving different pooling strategies—absolute maximum pooling and variance pooling—at hops 1 and 2. We extract LNT features specifically from hops 1, 3, and 4 for each experiment. At hop-1, pooling (either max or variance) generates 10 LNT features per channel, resulting in a total of 250 features. Hop-3 involves transforming the (N, 3, 3, Feature) tensor to produce 90 LNT features. From hop-4, 10 additional LNT features are acquired following a DFT-based feature selection. These 350 LNT features from hops 1, 3, and 4 are concatenated alongside selected hop-4 features. Finally, features aggregated from all four experimental setups are combined, and a 10-class classifier is trained on these comprehensive feature sets, demonstrating an improvement in classification performance.
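The two pooling strategies can be illustrated as follows. This is a hedged sketch with our own function names; the actual hop configuration and feature counts follow the pipeline described above.

```python
import numpy as np

def abs_max_pool(x):
    """2x2 absolute-maximum pooling: keep the entry of largest magnitude
    (sign preserved) in each 2x2 block. x has shape (H, W, C)."""
    h, w, c = x.shape
    b = x[: h - h % 2, : w - w % 2].reshape(h // 2, 2, w // 2, 2, c)
    flat = b.transpose(0, 2, 4, 1, 3).reshape(h // 2, w // 2, c, 4)
    idx = np.abs(flat).argmax(axis=-1)
    return np.take_along_axis(flat, idx[..., None], axis=-1)[..., 0]

def var_pool(x):
    """2x2 variance pooling: summarize each block by its variance,
    capturing local texture strength rather than peak response."""
    h, w, c = x.shape
    b = x[: h - h % 2, : w - w % 2].reshape(h // 2, 2, w // 2, 2, c)
    return b.var(axis=(1, 3))
```

Swapping between the two poolings at hops 1 and 2 yields the four experimental setups whose features are finally concatenated.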

By |March 23rd, 2025|News|Comments Off on MCL Research on Enhanced Object Detection|

MCL Research on Video Camouflaged Object Detection (VCOD)

Video Camouflaged Object Detection (VCOD) focuses on identifying and segmenting objects concealed within background scenes. These camouflaged objects closely resemble their surroundings by mimicking similar color patterns and textures, which poses significant challenges compared to conventional detection tasks. To address this problem, we have proposed a motion-enhanced approach that progressively refines the detection results with multi-resolution search and motion-guided boosting. Each video frame is first screened at the image level; the inter-frame motions and background models are then corrected using the entire video sequence. This method provides stable performance on popular VCOD datasets.
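A minimal sketch of a motion cue derived from a running-average background model, in the spirit of the motion-guided step (the update rule and the alpha parameter are illustrative assumptions, not the proposed method):

```python
import numpy as np

def motion_cue(frames, alpha=0.05):
    """Per-frame motion cue from a running-average background model.
    frames: (T, H, W) grayscale video; returns |frame - background|
    per frame, which highlights moving camouflaged regions.
    """
    bg = frames[0].astype(float)
    cues = []
    for f in frames:
        f = f.astype(float)
        cues.append(np.abs(f - bg))
        bg = (1 - alpha) * bg + alpha * f  # slowly absorb f into the background
    return np.stack(cues)
```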

By |March 16th, 2025|News|Comments Off on MCL Research on Video Camouflaged Object Detection (VCOD)|

MCL Research on Image Dehazing

Image dehazing plays a crucial role in digital imaging by removing atmospheric distortions such as haze and fog, thereby enhancing scene clarity for applications ranging from photography to autonomous driving. Traditionally, methods like the Dark Channel Prior (DCP) have been used to estimate haze effects, leveraging the observation that most non-hazy images contain very dark pixels in at least one color channel.
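The dark-channel computation behind DCP can be sketched directly. This is a minimal, unoptimized version; the patch size is illustrative.

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark Channel Prior: per-pixel minimum over the color channels,
    followed by a local minimum filter over patch x patch neighborhoods.
    img: (H, W, 3) in [0, 1]. Haze-free regions yield values near zero;
    large values indicate haze."""
    mins = img.min(axis=2)
    h, w = mins.shape
    r = patch // 2
    padded = np.pad(mins, r, mode="edge")
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i : i + patch, j : j + patch].min()
    return out
```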

In a significant advancement, researchers have now introduced a novel approach that combines the strength of DCP with the efficiency of the GUSL pipeline. In this new method, DCP serves as the foundational technique to provide an initial estimate of the haze, while the GUSL pipeline predicts and corrects the residual errors left by DCP. This two-step process refines the result by capturing subtle details that DCP alone might miss.

The GUSL pipeline utilizes unsupervised representation learning for robust feature extraction, followed by supervised feature learning to enhance computational efficiency and output quality. This approach not only improves the overall dehazing performance but also maintains a lightweight design suitable for real-time applications on resource-constrained devices.

By integrating DCP with residue prediction through GUSL, the new method delivers superior image clarity with reduced computational overhead, making it an attractive solution for modern imaging challenges in mobile and edge computing environments.

By |March 9th, 2025|News|Comments Off on MCL Research on Image Dehazing|

Reunion of MCL Alumni at Southern California

Professor C.-C. Jay Kuo and MCL alumni in Southern California had a reunion at the Capital Seafood Irvine Spectrum on 1 March (Saturday) for lunch. Victor Liang organized the event. The attendees included Professor Kuo, Victor Liang, Kyle Lai, Jing Zhang, Joe Lin, and Chung-Ting Huang’s family. They had a wonderful time sharing their recent activities.

By |March 2nd, 2025|News|Comments Off on Reunion of MCL Alumni at Southern California|

MCL Research on Image Demosaicing

Demosaicing is a critical process in digital imaging. Since each pixel on a typical sensor captures only one color channel (red, green, or blue), the complete color image must be reconstructed from incomplete data. Conventional approaches, including deep learning models, have made impressive strides in quality, yet their significant computational requirements often limit their deployment on resource-constrained edge devices.

Mahtab Movahhedrad of the MCL Lab has introduced an innovative approach to digital imaging that promises to transform how devices reconstruct full-color images from partial sensor data. The new method, dubbed green U-shaped image demosaicing (GUSID), leverages green learning (GL) principles to offer a lightweight, transparent, and efficient alternative to traditional, computationally heavy deep learning techniques.

GUSID takes a distinct path. Instead of relying on deep neural networks, it uses unsupervised representation learning for robust feature extraction, followed by supervised feature learning to enhance computational efficiency and maintain high-quality output. This dual-stage process allows GUSID to minimize computational overhead while delivering competitive accuracy. Its compact design and support for parallelized training make it particularly well-suited for real-time vision applications on devices with limited processing capabilities.

As digital imaging continues to evolve, breakthroughs like GUSID not only enhance performance but also pave the way for future innovations in edge computing and real-time processing. The MCL Lab is poised to lead this exciting frontier, proving that sometimes, a smarter, leaner approach can make all the difference.
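The problem that demosaicing inverts can be made concrete by simulating a Bayer sensor. An RGGB layout is assumed here purely for illustration; actual sensor patterns vary.

```python
import numpy as np

def bayer_mosaic(img):
    """Simulate an RGGB Bayer sensor: each pixel keeps exactly one of
    its three channels. img: (H, W, 3). Returns the single-channel
    mosaic that a demosaicing method must turn back into full color."""
    h, w, _ = img.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = img[0::2, 0::2, 0]  # R at even rows, even cols
    mosaic[0::2, 1::2] = img[0::2, 1::2, 1]  # G at even rows, odd cols
    mosaic[1::2, 0::2] = img[1::2, 0::2, 1]  # G at odd rows, even cols
    mosaic[1::2, 1::2] = img[1::2, 1::2, 2]  # B at odd rows, odd cols
    return mosaic
```

Two thirds of the color information is discarded by the sensor, which is exactly the data a demosaicing pipeline such as GUSID must reconstruct.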

By |February 23rd, 2025|News|Comments Off on MCL Research on Image Demosaicing|

MCL Research on Transfer Learning

Transfer learning aims to reduce the number of labeled training samples by leveraging existing knowledge from one domain, called the source domain, and using the learned knowledge to construct models for another domain, called the target domain. In particular, unsupervised domain adaptation (UDA) aims at transferring knowledge from a labeled source domain to an unlabeled target domain. Most existing UDA methods rely on deep learning, primarily pre-trained models, adversarial networks, and transformers.

We propose an interpretable and lightweight transfer learning (ILTL) method. It consists of two modules. The first module performs image-level alignment to ensure visually similar images across domains, applying image processing to minimize structural differences between the source and target images. The second module focuses on feature-level alignment: it identifies the discriminant feature subspace, uses feature distances to transfer source labels to target samples, and then conducts class-wise alignment in the feature subspace. ILTL can be performed in multiple rounds to enhance the alignment of source and target features. We benchmark ILTL against deep-learning-based methods in classification accuracy, model size, and computational complexity on two transfer learning datasets. Experiments show that ILTL achieves similar accuracy with smaller model sizes and lower computational complexity, while its interpretability provides a deeper understanding of the transfer learning mechanism.
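The label-transfer step of the feature-level alignment can be sketched as a nearest-neighbor assignment in the shared subspace (Euclidean distance is assumed here for illustration; the actual feature distance may differ):

```python
import numpy as np

def transfer_labels(src_feats, src_labels, tgt_feats):
    """Assign each target sample the label of its nearest source sample
    in the shared feature subspace. src_feats: (Ns, D), src_labels: (Ns,),
    tgt_feats: (Nt, D); returns (Nt,) transferred labels, which can then
    drive the class-wise alignment round."""
    d = ((tgt_feats[:, None, :] - src_feats[None, :, :]) ** 2).sum(axis=2)
    return src_labels[d.argmin(axis=1)]
```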

By |February 16th, 2025|News|Comments Off on MCL Research on Transfer Learning|