The increasing demand for rendering smooth and plausible 3D motion is fueling the development of motion capture (mocap) systems. This high-quality 3D motion data has found its way into animation films, high-end computer games, biomechanics and robotics (see the accompanying photo from [1]). Diverse applications and the rapid development of mocap systems have produced a large corpus of data in recent years. Automatic classification of mocap data is essential for database management tasks such as segmentation, indexing and retrieval.
Mr. Harshad Kadu, an MCL PhD student, and Professor C.-C. Jay Kuo recently proposed a new framework for human mocap data classification that combines novel spatial/temporal feature representations, machine learning and decision fusion. The proposed methods were tested on a wide variety of sequences from the CMU mocap database and achieved a correct classification rate of 99.6%, the current state of the art in the field. This work has been accepted for publication in the IEEE Transactions on Multimedia.
[1] http://www.noldus.com/innovationworks/content/automated-behavior-recognition-in-humans