Recently, Professor Kuo and his students at MCL proposed a new machine learning methodology called successive subspace learning (SSL). The methodology has been widely adopted in MCL to solve image processing and computer vision problems. In the 3D domain, we have observed great success in the point cloud classification task. In the PointHop paper, we developed an explainable machine learning method for point cloud classification. The classification baseline is composed of four PointHop units; in each unit we construct the local-to-global attribute building process and use the Saab transform to control the growth of the feature dimension. We compared the test performance on ModelNet40 with state-of-the-art methods: our method obtains comparable performance while demanding much less training time. For instance, PointNet takes about 5 hours to train, while ours takes only 20 minutes on the same dataset. The advantages of the methodology are clear: it is interpretable and has much lower computational complexity.
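To make the local-to-global attribute building more concrete, below is a minimal sketch of one PointHop-style unit in numpy. The helper names (knn_group, saab_like_transform), the neighborhood size k, and the number of retained components are illustrative assumptions, and the PCA-style transform is only a stand-in for the full Saab transform described in the paper.

    import numpy as np
    from scipy.spatial import cKDTree

    def knn_group(points, attrs, k=8):
        """Gather the attributes of each point's k nearest neighbors."""
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)            # (N, k) neighbor indices
        grouped = attrs[idx]                        # (N, k, D) local attributes
        return grouped.reshape(len(points), -1)     # flatten to (N, k*D)

    def saab_like_transform(features, n_kept):
        """PCA-style transform keeping only the leading components,
        standing in for the Saab transform's dimension control."""
        dc = features.mean(axis=1, keepdims=True)   # DC (mean) component
        ac = features - dc
        ac_centered = ac - ac.mean(axis=0)
        _, _, vt = np.linalg.svd(ac_centered, full_matrices=False)
        kernels = vt[:n_kept]                       # leading AC kernels
        return np.concatenate([dc, ac_centered @ kernels.T], axis=1)

    # One hypothetical PointHop-style unit: local grouping + dimension control.
    points = np.random.rand(1024, 3)                  # toy point cloud
    attrs = points.copy()                             # initial attributes = xyz
    local = knn_group(points, attrs, k=8)             # (1024, 24) local attributes
    features = saab_like_transform(local, n_kept=15)  # (1024, 16) compact features

Stacking several such units, each operating on a downsampled point set, gives the local-to-global hierarchy used by the classification baseline.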
The success in point cloud classification encourages us to go deeper into the 3D domain. Therefore, we further look at the segmentation task, which requires assigning a label to each point in the point cloud. Following the common design of image segmentation networks, we use the point cloud classification baseline as an encoder and add a decoder to complete segmentation. After building local neighboring regions and extracting local attributes from neighboring points in the encoder, the features are interpolated back to the finest scale layer by layer in the decoder, with skip connections between the same scales. The Saab transform is also adopted between layers as a feedforward convolution to control the rapid growth of the feature dimension; a sketch of one decoder step follows.
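The following is a minimal sketch of one decoder step under assumed choices: inverse-distance-weighted interpolation over k=3 coarse neighbors, toy point counts and feature sizes, and a simple concatenation as the skip connection. It is only an illustration of the idea, not the exact design of our decoder; in practice a further Saab-style transform would then reduce the concatenated dimension before the next layer.

    import numpy as np
    from scipy.spatial import cKDTree

    def upsample_features(coarse_pts, coarse_feats, fine_pts, fine_feats, k=3):
        """Interpolate coarse-scale features onto the finer point set with
        inverse-distance weights, then concatenate same-scale (skip) features."""
        tree = cKDTree(coarse_pts)
        dist, idx = tree.query(fine_pts, k=k)           # (M, k)
        w = 1.0 / np.maximum(dist, 1e-10)               # inverse-distance weights
        w = w / w.sum(axis=1, keepdims=True)
        interpolated = (coarse_feats[idx] * w[..., None]).sum(axis=1)   # (M, D)
        return np.concatenate([interpolated, fine_feats], axis=1)       # skip connection

    # Toy decoder step: 256 coarse points interpolated back to 1024 fine points.
    fine_pts = np.random.rand(1024, 3)
    coarse_pts = fine_pts[:256]                          # hypothetical downsampled set
    coarse_feats = np.random.rand(256, 64)               # coarse-scale features
    fine_feats = np.random.rand(1024, 16)                # encoder features at this scale
    decoded = upsample_features(coarse_pts, coarse_feats, fine_pts, fine_feats)
    print(decoded.shape)                                 # (1024, 80)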
Our method also has the advantage of being task-agnostic. Specifically, by learning the parameters in a one-pass manner, our method is designed to complete the point cloud classification and segmentation tasks at the same time. Multiple tasks need to be handled in real scenarios, yet deep network methods are usually designed for a specific task, and training a separate model for each task costs both time and money.
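As a toy illustration of this idea, the sketch below trains two lightweight task heads on top of the same features learned in a single feedforward pass. The feature shapes are assumed, and the random-forest heads are placeholders for whatever classifiers sit on top of the shared features; the point is only that the feature extractor is not retrained per task.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical shared features from the one-pass encoder:
    # per-cloud global features for classification, per-point features for segmentation.
    global_feats = np.random.rand(100, 128)          # 100 clouds
    cls_labels = np.random.randint(0, 4, 100)
    point_feats = np.random.rand(100 * 256, 80)      # 256 points per cloud
    seg_labels = np.random.randint(0, 4, 100 * 256)

    # Two lightweight task heads trained on top of the same frozen features.
    cls_head = RandomForestClassifier(n_estimators=50).fit(global_feats, cls_labels)
    seg_head = RandomForestClassifier(n_estimators=50).fit(point_feats, seg_labels)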
Author: Min Zhang