News

MCL Research Presented at WACV 2026

MCL members Jintang Xue and Kevin Yang presented their papers at the Winter Conference on Applications of Computer Vision (WACV) 2026 in Tucson, AZ, USA.

The title of Jintang et al.’s paper is “Descrip3D: Enhancing Large Language Model-based 3D Scene Understanding with Object-Level Text Descriptions”. Here is a brief summary:

“Understanding 3D scenes goes beyond simply recognizing objects; it requires reasoning about the spatial and semantic relationships between them. Current 3D scene-language models often struggle with this relational understanding, particularly when visual embeddings alone do not adequately convey the roles and interactions of objects. In this paper, we introduce Descrip3D, a novel and powerful framework that explicitly encodes the relationships between objects using natural language. Unlike previous methods that rely only on 2D and 3D embeddings, Descrip3D enhances each object with a textual description that captures both its intrinsic attributes and contextual relationships. These relational cues are incorporated into the model through a dual-level integration: embedding fusion and prompt-level injection. This allows for unified reasoning across various tasks such as grounding, captioning, and question answering, all without the need for task-specific heads or additional supervision. When evaluated on five benchmark datasets, including ScanRefer, Multi3DRefer, ScanQA, SQA3D, and Scan2Cap, Descrip3D consistently outperforms strong baseline models, demonstrating the effectiveness of language-guided relational representation for understanding complex indoor scenes.”
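The dual-level integration described above can be sketched in a toy form. This is only an illustrative stand-in, not the Descrip3D implementation: the text encoder, the projection matrix `W`, and the `<obj_1>` prompt token are all hypothetical placeholders.

```python
import hashlib
import numpy as np

rng = np.random.default_rng(0)

# Toy object record: a visual embedding plus its natural-language description.
obj_embedding = rng.standard_normal(16)          # stand-in for a fused 2D/3D feature
description = "a wooden chair to the left of the desk"

def embed_text(text: str, dim: int = 16) -> np.ndarray:
    """Hypothetical stand-in for a frozen text encoder (deterministic hash seed)."""
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2**32)
    return np.random.default_rng(seed).standard_normal(dim)

# Level 1: embedding fusion -- project the concatenated visual and text features.
text_vec = embed_text(description)
W = rng.standard_normal((16, 32)) / np.sqrt(32)  # a learned projection in the real model
fused = W @ np.concatenate([obj_embedding, text_vec])

# Level 2: prompt-level injection -- the same description also enters the LLM prompt.
prompt = f"<obj_1>: {description}. Question: what is next to the desk?"
print(fused.shape)
```

The point of the two levels is redundancy: the relational cue reaches the model both as a dense feature and as literal text the LLM can attend to.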

Kevin’s paper is entitled “SVD-Det: A Lightweight Framework for Video Forgery Detection Using Semantic and Visual Defect Cues”, co-authored with Tianyu Zhang, Feng Qian, Bing Yan, and C.-C. Jay Kuo. A brief summary follows:

“With the rapid proliferation of AI-generated content (AIGC) on multimedia platforms, efficient and reliable video forgery detection has become increasingly important. Existing approaches often rely on either visual artifacts or semantic inconsistencies, but suffer from high computational costs, [...]

By |April 5th, 2026|News|Comments Off on MCL Research Presented at WACV 2026|

MCL Research on Medical Image Classification

We propose the development of a high-efficiency foundation model tailored for the MedMNIST v2 benchmark, utilizing a novel architecture based on Multi-Resolution Tree-Structured Vector Quantization (TSVQ). While current foundation models often rely on computationally expensive transformers, our approach focuses on a hierarchical quantization strategy. By employing multi-resolution codebooks, we can effectively capture and represent both long-range structural dependencies and intricate, short-range local correlations inherent in diverse medical imaging modalities, from pathology slides to radiological scans.

The core innovation lies in the tree-structured organization of the latent space. Unlike flat codebooks used in traditional VQ-VAEs, TSVQ offers a logarithmic search complexity, significantly reducing the energy required for both training and inference. This alignment with “Green Learning” principles ensures that our model achieves state-of-the-art representation fidelity without the massive carbon footprint typically associated with large-scale AI. By optimizing the codebook search and minimizing redundant parameters, we aim to demonstrate that high-performance medical AI can be both sustainable and accessible on modest hardware.
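The logarithmic-search property of a tree-structured codebook can be illustrated with a tiny hand-built tree. The centroids below are made up for illustration; real TSVQ codebooks are learned (e.g., by recursive k-means), but the greedy descent is the same: at each level only the children of the current node are compared, rather than every leaf.

```python
import numpy as np

class TSVQNode:
    def __init__(self, centroid, children=None):
        self.centroid = np.asarray(centroid, dtype=float)
        self.children = children or []

def tsvq_encode(x, node, path=()):
    """Descend the codebook tree, greedily picking the nearest child centroid.
    Visits O(branching * depth) centroids instead of all leaves."""
    if not node.children:
        return path, node.centroid
    dists = [np.linalg.norm(x - c.centroid) for c in node.children]
    i = int(np.argmin(dists))
    return tsvq_encode(x, node.children[i], path + (i,))

# Tiny 2-level binary tree over 1-D "patches" (real codebooks are learned).
leaves_l = [TSVQNode([0.0]), TSVQNode([2.0])]
leaves_r = [TSVQNode([8.0]), TSVQNode([10.0])]
root = TSVQNode([5.0], [TSVQNode([1.0], leaves_l), TSVQNode([9.0], leaves_r)])

path, code = tsvq_encode(np.array([7.5]), root)
print(path, code)   # (1, 0) -> centroid [8.]
```

With branching factor b and N leaves, the descent touches about b·log_b(N) centroids, which is where the energy savings over a flat codebook search come from.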

This framework serves as a robust, domain-agnostic foundation. The learned representations are designed to be highly transferable, enabling the model to excel across a spectrum of downstream tasks. Crucially, this architecture addresses the “small data” problem in clinical medicine; by pre-training on the comprehensive MedMNIST suite, the model can be fine-tuned on smaller, domain-specific clinical datasets with superior accuracy and stability. Ultimately, we aim to expand this green learning paradigm to broader healthcare applications, empowering the medical community with scalable, low-power, and high-precision diagnostic tools.

By |March 29th, 2026|News|Comments Off on MCL Research on Medical Image Classification|

MCL Research on Green Image Generation

Although generative adversarial networks (GANs) and diffusion models achieve impressive realism through neural networks and backpropagation, the learned representations and latent spaces lack clear attribution and interpretability. Understanding the generation process requires auxiliary probing or post-hoc analysis. Empirical studies suggest that some models implicitly follow a coarse-to-fine generation mechanism, in which early stages determine the global structure and layout, and later stages progressively refine details and inject texture and style. This work explicitly formalizes this mechanism and presents a feed-forward image generation (FIG) process with well-defined objectives at each stage. FIG statistically models the lowest-resolution images and then progressively refines them. Unlike neural generative models, FIG provides explicit interpretability and attribution. Each generated region can be traced to its associated source and refinement. This property facilitates controllability, supports privacy-aware generation without retraining, and allows transparent manipulation of generated content. Compared to representative baselines based on GAN and diffusion, FIG achieves competitive visual quality, offers superior interpretability, and improves robustness in data-sparse regimes.
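The coarse-to-fine, attribution-preserving refinement can be caricatured in a few lines. Everything here is a hypothetical stand-in (the "exemplar bank", the random retrieval, the 0.1 detail weight); the sketch only shows the structural idea that each upsample-and-refine stage records which source contributed to each pixel.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical exemplar bank of low-res seeds and scalar "texture" codes.
seeds = rng.uniform(size=(4, 4, 4))              # four 4x4 candidate layouts
detail_bank = rng.standard_normal(8)

def upsample(img):
    """Nearest-neighbour 2x upsampling (real pipelines may use learned filters)."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def refine(img, bank, attribution):
    """One feed-forward stage: pull per-pixel detail from the bank and record
    the source index so every synthesized detail stays attributable."""
    up = upsample(img)
    idx = rng.integers(0, len(bank), size=up.shape)  # stand-in for a retrieval step
    out = up + 0.1 * bank[idx]
    attribution.append(idx)                          # pixel-level source record
    return out, attribution

layout = seeds[2]                                    # stage 1: pick a global layout
img, attrib = refine(layout, detail_bank, [])        # stage 2: 8x8 refinement
img, attrib = refine(img, detail_bank, attrib)       # stage 3: 16x16 refinement
print(img.shape, len(attrib))
```

Because `attrib` stores a source index per pixel per stage, excluding a sensitive source amounts to filtering those indices out of retrieval, with no retraining step.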

We introduced FIG, an interpretable and fully feed-forward image generation framework that formalizes the separation between global structure modeling and local detail refinement. FIG records pixel-level attribution throughout multi-resolution enhancement, enabling each synthesized detail to be attributed, controlled, or selectively modified. This design enables control over the global appearance, semantic attributes, and localized regions without affecting the rest of the image. Furthermore, the transparent retrieval process supports source-aware filtering, allowing selective exclusion of sensitive training samples without retraining. Extensive benchmark results demonstrate that FIG maintains competitive generation quality while offering these interpretability and controllability benefits.

By |March 22nd, 2026|News|Comments Off on MCL Research on Green Image Generation|

MCL Research on EEG Data Analysis

Our work starts with a simple idea: the way different brain regions communicate can reveal a lot about what the brain is doing. Instead of treating EEG as a collection of separate channels, we view it as a network and study the connectivity patterns between regions. This kind of representation is useful because different brain states often produce different connectivity structures. In other words, brain connectivity maps can provide a more intuitive and informative picture of neural activity than raw signals alone.

In one of our studies, we use the direct Directed Transfer Function (dDTF) to build these maps. A key advantage of dDTF is that it captures not only whether two brain regions are related, but also the direction of information flow between them. This makes it a good tool for describing dynamic interactions in the brain. In particular, these connectivity patterns for mental workload show clear differences across conditions. As illustrated in Fig. 2, low and high workload states already present visibly different connectivity maps across several frequency bands, suggesting that they contain meaningful information for distinguishing cognitive states.

Based on this observation, we developed the framework shown in Fig. 1. We first decompose the EEG signals into multiple frequency bands and construct a connectivity map for each band. These multiband maps are then combined into a unified feature representation. From there, we progressively refine the features, selecting the most informative ones and transforming them into a more discriminative space before making the final prediction. In this way, our method leverages interpretable brain connectivity patterns while keeping the overall learning pipeline efficient, practical, and easy to extend to other EEG analysis tasks.
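The multiband map-construction step can be sketched on synthetic data. Note the hedges: the brick-wall FFT band-pass and the correlation matrix below are simple stand-ins (correlation is symmetric, whereas the dDTF used in the study is directional), and the channel count, sampling rate, and band edges are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_ch, n_samp = 128, 8, 512
eeg = rng.standard_normal((n_ch, n_samp))            # synthetic 8-channel EEG

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def bandpass_fft(x, lo, hi, fs):
    """Zero out FFT bins outside [lo, hi] Hz (a crude brick-wall band-pass)."""
    X = np.fft.rfft(x, axis=-1)
    freqs = np.fft.rfftfreq(x.shape[-1], d=1.0 / fs)
    X[:, (freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(X, n=x.shape[-1], axis=-1)

features = []
for lo, hi in bands.values():
    xb = bandpass_fft(eeg, lo, hi, fs)
    conn = np.corrcoef(xb)                # stand-in for dDTF: symmetric, so
    iu = np.triu_indices(n_ch, k=1)       # keep the upper triangle only
    features.append(conn[iu])

feature_vec = np.concatenate(features)    # one multiband feature vector per trial
print(feature_vec.shape)                  # 3 bands x 28 channel pairs
```

With a directed measure such as dDTF, the full off-diagonal matrix would be kept per band instead of the upper triangle, since flow from region i to j differs from flow j to i.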

By |March 15th, 2026|News|Comments Off on MCL Research on EEG Data Analysis|

MCL Research on Variable-Length Word Embeddings

We propose Variable-Length Word Embeddings, a POS-aware and compute-efficient Word2Vec training framework. Traditional embeddings assign the same dimensionality to every token, even though different parts of speech contribute very differently to sentence meaning. In real text, nouns usually carry the main semantic content, verbs encode actions and relations, while many other categories (e.g., articles, prepositions, conjunctions) are comparatively low-information. This motivates a representation strategy that spends more capacity on important words and less capacity on the rest.

Our core idea is to use POS tags to organize training data and allocate embedding dimensions accordingly. We first POS-tag the entire corpus and split it into three views: a noun-only corpus, a noun+verb corpus, and a full corpus containing all tokens. Instead of training one uniform embedding space, we build embeddings in stages so that nouns become the backbone, verbs are learned relative to that backbone, and the remaining words are learned with minimal capacity.

We train nouns progressively with increasing dimensionality. Specifically, we learn noun embeddings at 50D, 100D, and 200D on the noun-only corpus. To make training across dimensions stable and efficient, each higher-dimensional model is initialized from the previous lower-dimensional embeddings using Lanczos interpolation (50D → 100D, and 100D → 200D), and then refined on the noun-only corpus. This produces high-capacity noun representations while preserving continuity across stages.

After obtaining the noun backbone at each dimension, we introduce verbs through a controlled adaptation step. Using the noun+verb corpus, we train verbs on top of the noun space, where noun vectors are soft-frozen (implemented with a reduced update factor) so they remain stable but can still adjust slightly. Verbs, in contrast, are fully trainable and learn to align with noun semantics. We apply this procedure at 50D [...]

By |March 8th, 2026|News|Comments Off on MCL Research on Variable-Length Word Embeddings|

MCL Research on Renal Image Segmentation

AI-driven medical imaging has emerged as a transformative force in modern healthcare, empowering clinicians to deliver more accurate, efficient, and personalized diagnostic and therapeutic strategies. By automatically analyzing CT, MRI, ultrasound, and other imaging modalities, AI systems can identify subtle patterns, precisely segment anatomical structures, and support clinical decision-making with enhanced accuracy and consistency. These advancements not only improve diagnostic performance but also streamline clinical workflows and ultimately elevate the overall quality of patient care.

Among the many tasks in medical image analysis, kidney and kidney tumor segmentation are pivotal for the management of renal diseases, particularly renal cell carcinoma. Precise delineation of kidneys and tumors is essential for quantitative tumor assessment, treatment planning, surgical navigation, postoperative monitoring, and radiomics research. Accurate segmentation enables clinicians to reliably estimate tumor burden, evaluate tumor–organ spatial relationships, and facilitate nephron-sparing surgical strategies, all of which directly influence patient outcomes. Given that manual segmentation is labor-intensive and prone to inter- and intra-observer variability, the development of automated, robust, and reliable segmentation methods has become increasingly critical for both routine clinical practice and large-scale research.

To address these limitations, we propose 3D-Cube Multi-Stage Green U-shaped Learning (GUSL), a novel multi-stage feed-forward machine learning framework for 3D medical image segmentation without backpropagation. GUSL is designed to be computationally efficient, interpretable, and environmentally sustainable, while maintaining competitive segmentation performance.

The proposed framework adopts a cascaded multi-stage segmentation strategy tailored to different anatomical tasks. As illustrated in Figure 1, distinct stages are designed for coarse-to-fine segmentation. First, the original CT volume is downsampled to a lower resolution, enabling efficient coarse localization of the kidney with reduced computational complexity. This low-resolution stage provides an approximate spatial position of the kidney, as highlighted in the red box. [...]
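The coarse-localization stage can be illustrated with a synthetic volume: run a cheap detector at low resolution, then map the detected box back to full resolution so the later, finer stages only process the cropped region. The average-pooling downsampler and the 0.5 threshold below are hypothetical stand-ins for the actual Stage-I classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic CT volume with a bright "kidney" blob.
vol = rng.normal(0, 0.1, size=(64, 64, 64))
vol[20:36, 24:40, 28:44] += 1.0

def downsample(v, f=4):
    """Average-pool by factor f along every axis (cheap stand-in for resampling)."""
    s = (v.shape[0] // f, f, v.shape[1] // f, f, v.shape[2] // f, f)
    return v.reshape(s).mean(axis=(1, 3, 5))

low = downsample(vol)                        # coarse stage runs at 16^3, not 64^3
mask = low > 0.5                             # stand-in for the Stage-I classifier
idx = np.argwhere(mask)
lo, hi = idx.min(axis=0) * 4, (idx.max(axis=0) + 1) * 4   # box back to full res

crop = vol[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]         # fine stages see only this
print(lo, hi, crop.shape)
```

The payoff is the 64x voxel reduction in the coarse pass; only the cropped sub-volume is handed to the subsequent fine segmentation stages.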

By |March 1st, 2026|News|Comments Off on MCL Research on Renal Image Segmentation|

MCL Research on Renal Imaging Analysis

Our paper proposes a multi-stage Green U-shaped Learning (GUSL) framework for efficient and reliable IHC image quantification. As shown in the overall pipeline (Stage I–III), the system starts with the preprocessing of the input IHC image using normalization and PCA. In Stage I, marker-specific GUSL modules learn mpIF-informed intermediate representations, such as LAP2 and KI67-related cues, from co-registered training data. In Stage II, these representations are integrated by a dedicated GUSL module to generate a cell/background segmentation map in a coarse-to-fine and residual refinement manner. In Stage III, connected cell regions are extracted, and cell-level classification is performed to determine whether each cell is biomarker-positive or biomarker-negative. The entire framework follows a feedforward, modular design without end-to-end backpropagation, reducing computational cost while keeping the system transparent and interpretable.

Qualitative examples are shown in the second figure. Column (a) presents the input brightfield IHC images. Column (b) shows the ground-truth cell segmentation and biomarker labels. Columns (c) and (d) compare results from a representative deep learning baseline and our GUSL method. We observe that the proposed method produces clearer cell boundaries and more consistent positive/negative classification, especially in crowded regions and low-contrast areas. These visual results are consistent with our quantitative evaluation, which demonstrates competitive segmentation accuracy and improved quantification agreement, while using much lower model complexity and energy consumption.

By |February 22nd, 2026|News|Comments Off on MCL Research on Renal Imaging Analysis|

MCL Research on Whole Slide Image Analysis

Histopathologic analysis is a key confirmatory step in the cancer diagnosis pipeline, where pathologists examine a tissue section of interest for abnormalities and the extent of disease progression. The digitization of these tissue slides has enabled the use of AI for Whole Slide Image (WSI) analysis, primarily to simplify pathologists’ laborious tasks and improve diagnostic accuracy. Because these images are extremely large, they are split into smaller patches; the individual patches are analyzed, and the results are aggregated to obtain a slide-level prediction. This paradigm is called Multiple Instance Learning (MIL).

Architectural patterns surrounding the tumor regions are key indicators of angiogenesis and help in prognosis prediction. These patterns are classified into nine types, which are grouped into three categories based on the underlying vasculature. While some patterns occur frequently, others are rare and appear mainly in higher-grade tumors. The resulting data imbalance can hinder training. To overcome this challenge, we propose an ensemble classifier for architectural pattern classification.

To capture local details and global context, we employ a multi-resolution feature encoder. At each resolution, the Saab transform is applied to obtain joint spatial-spectral representations. Representation learning is followed by a pooling operation to obtain compact representations. The pooled features from each resolution are concatenated to obtain a single feature vector, which is used to select the most discriminant features for the target classification task. An XGBoost binary classifier trained on these selected features predicts a confidence score for each architectural pattern. The confidence scores from multiple one-vs-one classifiers are aggregated to predict the architectural pattern in each patch.
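The final aggregation step, combining confidences from the one-vs-one binary classifiers, can be sketched as follows. The three pattern names and the pairwise scores are made-up placeholders; in the actual pipeline each score would come from an XGBoost binary classifier on the selected features.

```python
import numpy as np

classes = ["A", "B", "C"]        # stand-ins for three architectural patterns

def aggregate_ovo(pair_scores):
    """Sum per-class confidence from one-vs-one classifiers.
    pair_scores maps (i, j) -> probability that class i beats class j."""
    totals = np.zeros(len(classes))
    for (i, j), p in pair_scores.items():
        totals[i] += p
        totals[j] += 1.0 - p
    return classes[int(np.argmax(totals))], totals

# Hypothetical outputs of three binary classifiers on one patch.
scores = {(0, 1): 0.9, (0, 2): 0.7, (1, 2): 0.4}
pred, totals = aggregate_ovo(scores)
print(pred, totals)    # A [1.6 0.5 0.9]
```

Summing soft confidences rather than hard votes lets rare patterns win a patch even when no single pairwise classifier is fully certain, which matters under the class imbalance described above.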

By |February 15th, 2026|News|Comments Off on MCL Research on Whole Slide Image Analysis|

MCL Research on Video Quality Assessment

We propose a video quality assessment method built on green learning principles, with the objective of identifying visual distortions while minimizing computational and energy costs. Instead of relying on global frame analysis or large models, the approach emphasizes efficient, local feature extraction that captures distortion-related characteristics. By analyzing color variations, edges, textures, and structural changes at the patch level, the system is designed to detect degradations caused by compression, processing, or tampering in a scalable and sustainable manner.

To further improve efficiency and discriminative power, our group proposes a feature selection method, named DFT, to identify the most informative features. After spatial filtering, features are transformed into the frequency domain, where DFT is used to analyze their spectral behavior and assign importance scores. This allows the model to focus on frequency components that are most sensitive to distortions while discarding redundant information. The selected features are then used to train a lightweight machine learning model and evaluated on unseen videos, ensuring a balance between accuracy, interpretability, and green learning objectives.
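The spectral-importance idea can be illustrated on synthetic features. The scoring rule below (non-DC spectral energy across patches) is a simplified stand-in for the actual importance measure; the planted sinusoidal feature mimics one that responds systematically to a distortion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature matrix: 200 patches x 32 candidate features after spatial filtering.
feats = rng.standard_normal((200, 32))
feats[:, 5] += 5.0 * np.sin(np.linspace(0, 20 * np.pi, 200))  # distortion-sensitive

def spectral_importance(F):
    """Score each feature by its non-DC spectral energy across patches
    (a simple stand-in for the paper's importance rule)."""
    spec = np.abs(np.fft.rfft(F - F.mean(axis=0), axis=0))
    return spec[1:].sum(axis=0)          # drop the DC bin

scores = spectral_importance(feats)
top_k = np.argsort(scores)[::-1][:8]     # keep the 8 most informative features
print(int(np.argmax(scores)))
```

Features whose responses vary systematically across patches concentrate energy in a few frequency bins and score high; flat or purely noisy features are discarded, shrinking the input to the lightweight downstream classifier.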

By |February 8th, 2026|News|Comments Off on MCL Research on Video Quality Assessment|

MCL Research on Green Image Coding

One of the key components of Green Image Coding (GIC) is multi-grid control, which enables efficient and scalable bit allocation across the framework’s hierarchical layers. Unlike traditional hybrid codecs designed for single-layer encoding, GIC decomposes images into multiple hierarchical layers via resampling, referred to as a multi-grid representation. This decomposition effectively redistributes energy and reduces intra-layer content diversity, but it also creates a complex high-dimensional optimization challenge when attempting to allocate bits optimally across these various layers.

To make this problem tractable, we establish a theoretical foundation by defining the relationship between local and global rate-distortion (RD) models. We demonstrate that the global RD model can be derived from the local RD model of an individual layer by applying specific offsets to both rate and distortion. Notably, the distortion offset is a constant value determined by up-sampling processes and is unrelated to the compression process itself. This theoretical breakthrough reduces an intractable high-dimensional problem into a set of manageable sequential decisions.

Based on these findings, GIC implements a practical slope-matching-based rate control strategy. This strategy allocates bits across multiple grids by matching the slopes of consecutive RD curves. A primary advantage of this design is its modularity; the rate control module only requires information from two consecutive layers to function. This allows the module to be easily duplicated for any number of layers in the encoder, effectively decomposing the global rate-distortion optimization into a sequence of local optimizations to ensure a scalable balance between bit rate and image distortion.
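The slope-matching allocation can be sketched with closed-form RD curves. The exponential models D(R) = a·exp(−b·R) and their parameters are illustrative only (real curves are measured per layer), and the common-slope bisection below is a simplified stand-in for GIC's pairwise matching of consecutive layers.

```python
import numpy as np

# Illustrative exponential RD models per layer: D_i(R) = a_i * exp(-b_i * R).
layers = [(100.0, 0.8), (60.0, 0.5), (30.0, 0.3)]

def rate_at_slope(a, b, lam):
    """Rate at which the curve's slope dD/dR equals -lam (clamped at 0)."""
    return max(0.0, np.log(a * b / lam) / b)

def allocate(total_rate, lo=1e-4, hi=1e4, iters=60):
    """Bisect on the common slope lam so the matched-slope rates meet the budget."""
    for _ in range(iters):
        lam = np.sqrt(lo * hi)
        rates = [rate_at_slope(a, b, lam) for a, b in layers]
        if sum(rates) > total_rate:
            lo = lam          # slopes too shallow -> spending too many bits
        else:
            hi = lam
    return rates, lam

rates, lam = allocate(10.0)
print([round(r, 2) for r in rates], round(sum(rates), 3))
```

Operating all layers at a common RD slope is the classical Lagrangian optimality condition; GIC reaches the same state locally by equalizing slopes between each pair of consecutive layers, which is what makes the module duplicable per layer.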

By |February 1st, 2026|News|Comments Off on MCL Research on Green Image Coding|