Point cloud registration refers to the process of aligning two point clouds, commonly called the source and the target. The goal is to find a spatial transformation (a 3D rotation and translation) that, when applied to the source, optimally aligns it with the target. Registration has become popular with the proliferation of 3D scanning devices such as LiDAR and their applications in autonomous driving, robotics, graphics, mapping, and more.
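For concreteness, below is a minimal NumPy sketch of the rigid transform that registration seeks to estimate. The helper name and the toy rotation/translation values are illustrative only and are not taken from any particular method.

```python
import numpy as np

def apply_rigid_transform(source, R, t):
    """Apply a 3x3 rotation R and a 3-vector translation t to an (N, 3) point cloud."""
    return source @ R.T + t

# Toy example: rotate a random source cloud by 30 degrees about the z-axis and shift it.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.5, -0.2, 1.0])

source = np.random.rand(100, 3)
target = apply_rigid_transform(source, R, t)
# A registration algorithm is given `source` and `target` and must recover R and t.
```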

Point clouds need to be registered in order to merge data from different sensors into a globally consistent view, to map a new observation onto known data, and so on. Registration is challenging for several reasons: the source and target point clouds may have different sampling densities and different numbers of points, they may contain outliers or be corrupted by noise, and sometimes only partial views are available.

The problem of registration (or alignment) has been studied for a long time. Before point clouds became a common representation, the focus was on aligning lines, parametric curves, and surfaces. The classical Iterative Closest Point (ICP) algorithm alternates between finding corresponding points and estimating the optimal rotation and translation, using only the spatial coordinates of points to establish correspondences. More recently, there has been a trend toward deep-learning-based, feature-driven methods for registration; two popular examples are PointNetLK and Deep Closest Point (DCP). Both treat registration as a supervised learning problem and train end-to-end networks, with supervision in the form of class labels and the ground-truth rotation matrix and translation vector. We propose a method called 'Salient Points Analysis (SPA)' [1] for registration. In contrast with these recent deep learning methods, SPA is completely unsupervised.
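To make the classical alternating loop concrete, here is a minimal ICP sketch using NumPy and SciPy. It is a simplified illustration under our own assumptions (fixed iteration count, no convergence check, no outlier rejection), not a reference implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, tgt):
    """Least-squares R, t aligning matched points src -> tgt via SVD (Kabsch solution)."""
    src_c, tgt_c = src - src.mean(0), tgt - tgt.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = tgt.mean(0) - R @ src.mean(0)
    return R, t

def icp(source, target, n_iters=50):
    """Classical ICP: alternate nearest-neighbor correspondences and closed-form alignment."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(n_iters):
        _, idx = tree.query(src)                      # closest target point for each source point
        R, t = best_rigid_transform(src, target[idx]) # best rigid fit to current correspondences
        src = src @ R.T + t                           # apply the incremental update
    return src                                        # source cloud aligned to the target
```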

SPA leverages the PointHop++ method, recently proposed by MCL, to learn features from the source and target point clouds in an unsupervised manner. Then, a small subset of salient points is selected using the local geometric properties of the points. We show that these salient points are representative enough to estimate the transformation of the entire point cloud. The salient points are used to find point correspondences by nearest neighbor search in the feature space, and the correspondences are then used to find the optimal rotation and translation via Singular Value Decomposition (SVD).
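The last two stages of this pipeline (feature-space correspondence search and closed-form estimation) can be sketched as follows. PointHop++ feature extraction and salient-point selection are assumed to have been done already, and the function name and interface are hypothetical rather than the paper's released code.

```python
import numpy as np
from scipy.spatial import cKDTree

def register_from_features(src_pts, tgt_pts, src_feat, tgt_feat):
    """Match salient points by nearest neighbor in feature space, then estimate R, t via SVD.

    src_pts, tgt_pts: (N, 3) and (M, 3) salient-point coordinates.
    src_feat, tgt_feat: (N, F) and (M, F) per-point features (e.g., from PointHop++).
    """
    _, idx = cKDTree(tgt_feat).query(src_feat)   # correspondences found in feature space
    matched_tgt = tgt_pts[idx]

    # Closed-form rotation/translation from the matched pairs (SVD / Kabsch).
    src_c = src_pts - src_pts.mean(0)
    tgt_c = matched_tgt - matched_tgt.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = matched_tgt.mean(0) - R @ src_pts.mean(0)
    return R, t
```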

Results on the benchmark ModelNet-40 dataset show that our method clearly outperforms ICP and performs comparably to the deep learning methods. Our model size is just 64 kB, while that of DCP is 21 MB, and our training time is under 30 minutes even without GPU resources.

[1] Kadam, P., Zhang, M., Liu, S., & Kuo, C. C. J. (2020). Unsupervised Point Cloud Registration via Salient Points Analysis (SPA). arXiv preprint arXiv:2009.01293.