MCL Research Presented at WACV 2026
MCL members Jintang Xue and Kevin Yang presented their papers at the Winter Conference on Applications of Computer Vision (WACV) 2026 in Tucson, AZ, USA.
The title of Jintang's paper is “Descrip3D: Enhancing Large Language Model-based 3D Scene Understanding with Object-Level Text Descriptions”. Here is a brief summary:
“Understanding 3D scenes goes beyond simply recognizing objects; it requires reasoning about the spatial and semantic relationships between them. Current 3D scene-language models often struggle with this relational understanding, particularly when visual embeddings alone do not adequately convey the roles and interactions of objects. In this paper, we introduce Descrip3D, a novel and powerful framework that explicitly encodes the relationships between objects using natural language. Unlike previous methods that rely only on 2D and 3D embeddings, Descrip3D enhances each object with a textual description that captures both its intrinsic attributes and contextual relationships. These relational cues are incorporated into the model through a dual-level integration: embedding fusion and prompt-level injection. This allows for unified reasoning across various tasks such as grounding, captioning, and question answering, all without the need for task-specific heads or additional supervision. When evaluated on five benchmark datasets, including ScanRefer, Multi3DRefer, ScanQA, SQA3D, and Scan2Cap, Descrip3D consistently outperforms strong baseline models, demonstrating the effectiveness of language-guided relational representation for understanding complex indoor scenes.”
Kevin’s paper is entitled “SVD-Det: A Lightweight Framework for Video Forgery Detection Using Semantic and Visual Defect Cues”, co-authored with Tianyu Zhang, Feng Qian, Bing Yan, and C.-C. Jay Kuo. A brief summary follows:
“With the rapid proliferation of AI-generated content (AIGC) on multimedia platforms, efficient and reliable video forgery detection has become increasingly important. Existing approaches often rely on either visual artifacts or semantic inconsistencies, but suffer from high computational costs, [...]