Odometry is the process of using motion sensors to estimate the change in an object's position over time. It has been widely studied in the context of mobile robots, autonomous vehicles, drones, and other moving agents. Traditional odometry based on motion sensors such as Inertial Measurement Units (IMUs) and magnetometers is prone to error accumulation over time, known as odometry drift. Visual odometry instead uses camera images and/or point cloud scans collected over time to determine the position and orientation of the moving object. Several visual odometry systems that integrate monocular cameras, stereo vision, point clouds, and IMUs have been developed for object localization.

We propose an unsupervised learning method for visual odometry from LiDAR point clouds called Green Point Cloud Odometry (GPCO). GPCO follows the traditional scan-matching approach to the odometry problem, incrementally estimating the motion between two consecutive point cloud scans. The GPCO method can be divided into four steps. First, a geometry-aware sampling method selects a small subset of points from the input point clouds: eigen features of points in a local neighborhood are considered, followed by random point sampling. Next, the 3D view surrounding the moving object is partitioned into four parts representing the front, rear, left-side, and right-side views; this view-partitioning step divides the sampled points into four disjoint sets. Features of the sampled points are then derived using the PointHop++ [1] method, and matching points between the two consecutive point clouds are found in each view using the nearest-neighbor rule in the feature space. Finally, the motion between the two scans is estimated using Singular Value Decomposition (SVD). The estimated motion is accumulated with the estimates from previous time steps, and the process repeats.
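To make the first step concrete, the sketch below shows one common way to compute eigenvalue-based local features. The paper does not specify which eigen features are used; the choice of linearity, planarity, and scattering, as well as the neighborhood size k, are assumptions made here purely for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def eigen_features(points, k=32):
    """Eigenvalue-based shape features of each point's local neighborhood.

    points: (N, 3) array of XYZ coordinates.
    Returns an (N, 3) array of (linearity, planarity, scattering) per point.
    Both the feature set and the neighborhood size k are illustrative
    assumptions; the paper only states that eigen features guide sampling.
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)                  # k nearest neighbors per point
    feats = np.zeros((points.shape[0], 3))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)                  # 3x3 covariance of the neighborhood
        evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
        l1, l2, l3 = np.maximum(evals, 1e-12)         # l1 >= l2 >= l3, guarded against zero
        feats[i] = [(l1 - l2) / l1,                   # linearity
                    (l2 - l3) / l1,                   # planarity
                    l3 / l1]                          # scattering
    return feats
```

Such features can, for example, be used to bias which points survive the subsequent random sampling toward geometrically distinctive structures.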
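The matching and motion-estimation steps can likewise be sketched in a few lines. The version below assumes a one-way nearest-neighbor query in feature space (GPCO performs this per view) and recovers the rigid motion with the standard SVD-based least-squares (Kabsch) solution; the function names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_points(feat_prev, feat_curr):
    """For every point in the current scan, find the index of the closest
    point (in feature space) in the previous scan."""
    tree = cKDTree(feat_prev)
    _, prev_idx = tree.query(feat_curr, k=1)
    return np.arange(feat_curr.shape[0]), prev_idx

def estimate_motion(src, dst):
    """Least-squares rigid motion (R, t) such that dst ~= R @ src + t,
    computed with the standard SVD-based (Kabsch) solution."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                               # 3x3 cross-covariance of matched points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

The per-scan rotation and translation would then be composed with the pose accumulated up to the previous time step to obtain the full trajectory.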

Experiments are conducted on the KITTI visual odometry benchmark for autonomous driving. GPCO outperforms comparable supervised deep learning methods while using as little as 0.5% of the training data. Training takes as little as 10 minutes, and the model size is only 75 kB. This makes GPCO a green solution for point cloud odometry.

— Pranav Kadam

Reference

[1] Zhang, Min, Yifan Wang, Pranav Kadam, Shan Liu, and C.-C. Jay Kuo. “PointHop++: A lightweight learning model on point sets for 3D classification.” In 2020 IEEE International Conference on Image Processing (ICIP), pp. 3319-3323. IEEE, 2020.