Congratulations to Pranav Kadam for passing his defense on Mar. 22. Pranav’s thesis is titled “Green Learning for 3D Point Cloud Data Processing”. His Dissertation Committee includes Jay Kuo (Chair), Antonio Ortega, and Aiichiro Nakano (Outside Member). Here we invite Pranav to share his PhD thesis and his PhD experience.
Thesis Abstract:
3D point cloud processing and analysis have attracted much attention in recent years due to numerous applications in autonomous driving, computer graphics, and robotics. In this dissertation, we focus on the problems of point cloud registration, pose estimation, rotation-invariant classification, odometry, and scene flow estimation. These tasks are important in the realization of a 3D vision system. Rigid registration aims at finding a 3D transformation, consisting of a rotation and a translation, that optimally aligns two point clouds. The next two tasks focus on object-level analysis. For pose estimation, we predict the 6-DOF pose of an object with respect to a chosen frame of reference. Rotation-invariant classification aims at classifying 3D objects that are arbitrarily rotated. The last two problems target outdoor environments. In odometry, we estimate the incremental motion of an object using the point cloud scans it captures at every instant, while scene flow estimation aims at determining the point-wise flow between two consecutive point clouds.
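For readers unfamiliar with rigid registration, the sketch below shows the classical closed-form (Kabsch/Procrustes) solution for aligning two point clouds when point correspondences are already known. It is a generic textbook baseline included purely for illustration, not the registration method developed in the thesis.

```python
# Minimal sketch of rigid registration with known correspondences (Kabsch/Procrustes).
# Illustration only; this is NOT the green-learning registration method of the thesis.
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Find rotation R and translation t minimizing sum ||R @ src_i + t - dst_i||^2.

    src, dst: (N, 3) arrays of corresponding points.
    """
    src_c = src - src.mean(axis=0)           # center both clouds
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Usage: recover a known rotation and translation from a toy point cloud.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R_est, t_est = rigid_align(src, dst)         # R_est ~ R_true, t_est ~ [0.5, -0.2, 1.0]
```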
3D perception using point clouds is currently dominated by deep learning methods. However, large-scale learning on point clouds with deep learning techniques has several issues that are often overlooked. This research is based on the green learning (GL) paradigm and focuses on interpretability, shorter training times, and smaller model sizes. Using GL, we separate the feature learning process from the decision process. Features are derived in an unsupervised, feedforward manner from the statistics of the training data. For the decision part, we mainly use well-established model-free techniques that are optimized during inference. When the decision process involves classification, a lightweight classifier is trained. Overall, the proposed methods can be trained within an hour on CPUs, and the number of model parameters is much smaller than that of deep learning methods. These advantages are promising for applications that demand low power and complexity, such as edge computing.
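As a rough illustration of this decoupling, the sketch below uses PCA as a stand-in for the unsupervised, statistics-driven feature transform and a lightweight logistic-regression classifier for the decision stage. The feature extractors and decision modules in the thesis are more elaborate; treat this only as an analogy for separating the two stages.

```python
# Hedged illustration of the two-stage idea: unsupervised feedforward features
# learned from training-data statistics, followed by a lightweight classifier.
# PCA and logistic regression are stand-ins, not the thesis pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_pca(X: np.ndarray, n_components: int):
    """Learn an orthogonal transform from data statistics only (no labels used)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]            # top principal directions

def transform(X: np.ndarray, mean: np.ndarray, components: np.ndarray):
    return (X - mean) @ components.T          # feedforward projection, no backprop

# Toy data: two Gaussian blobs standing in for per-object feature vectors.
rng = np.random.default_rng(1)
X_train = np.vstack([rng.normal(0.0, 1.0, (200, 64)),
                     rng.normal(1.0, 1.0, (200, 64))])
y_train = np.array([0] * 200 + [1] * 200)

mean, comps = fit_pca(X_train, n_components=16)   # stage 1: unsupervised features
clf = LogisticRegression(max_iter=1000)           # stage 2: lightweight classifier
clf.fit(transform(X_train, mean, comps), y_train)
```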
PhD Experience:
I would like to express my gratitude to my parents and my advisor, Prof. C.-C. Jay Kuo, without whom I wouldn’t have made it to this point. I feel very fortunate to have had the opportunity to work under the guidance of Prof. Kuo. I feel the goal of a PhD goes beyond publishing several papers in good conferences/journals or improving the state of the art by a percent. Of course, these things are crucial for making progress in the program, but I am happier about the lessons and values I learned from Prof. Kuo. They are helping me a lot in my professional career. The MCL family is large, and everyone is very kind and willing to help. The lab culture is very conducive to students’ overall growth. Finally, to all the PhD students out there, I would say: stay focused, be persistent, and keep working hard. All the best!