Open Access

3D-3D Self-Calibration of Sensors Using Point Cloud Data

Journal Article
2021-01-0086
ISSN: 2641-9645, e-ISSN: 2641-9645
Published April 06, 2021 by SAE International in the United States
Citation: Ravindranath, P., Buyukburc, K., and Hasnain, A., "3D-3D Self-Calibration of Sensors Using Point Cloud Data," SAE Int. J. Adv. & Curr. Prac. in Mobility 3(3):1369-1377, 2021, https://doi.org/10.4271/2021-01-0086.
Language: English

Abstract:

Self-calibration of sensors has become essential in the era of self-driving cars: reducing sensor errors increases the reliability of the decisions made by autonomous systems. Various methods are under investigation, but traditional approaches still prevail; they depend heavily on human experts and expensive equipment and consume significant labor and time. Recent extrinsic calibration techniques for Autonomous Vehicles (AVs) mostly rely on 2D camera images and depth maps to calibrate the 3D LiDAR points. While most methods work with the whole frame, some use the objects within the frame to perform the calibration. To the best of our knowledge, the majority of these self-calibration methods rely on actual or falsified ground-truth values.
We propose a 3D-3D, point-cloud-based continuous self-calibration approach that uses one or more objects identified in the sensor frames to cross-calibrate sensors without any reliance on initial calibration parameters or ground-truth values. Because multiple sensors have multiple views of the same scene, the common features or objects within the scene can be used to calibrate one sensor node with respect to another. Our approach relies on point cloud data (PCD) from at least two sensors to cross-calibrate the miscalibrated sensor with respect to the calibrated sensor. In this paper, we demonstrate that the method can be applied either to essential feature points extracted from each object's PCD (i.e., the object centroids) or to the whole object PCD; we then optimize a cost function to obtain the extrinsic calibration parameters. Using only multiple consecutive frames, our method also handles pose correction, a problem similar to calibration, without any ground-truth values from the sensors. Compared to other methods, ours performs calibration with low rotation and translation errors. The method has been tested on the publicly available KITTI dataset, which is widely used to assess various problems related to self-driving cars (object detection, segmentation, tracking, depth prediction, etc.). The simplicity of the method allows on-the-fly calibration of sensors with a high level of accuracy.
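To make the centroid-based variant concrete, the sketch below is a minimal illustration (not the paper's actual implementation or cost function): given matched object centroids observed by a calibrated reference sensor and a miscalibrated sensor, the rigid transform (R, t) that minimizes the sum of squared 3D-3D residuals has the closed-form Kabsch/SVD solution. The helper names (`centroids`, `estimate_extrinsics`) and the toy data are hypothetical, introduced here only for illustration.

```python
# Minimal sketch: closed-form least-squares rigid registration of matched
# object centroids, assuming correspondences between the two sensors' object
# detections are already known. Not the paper's implementation.
import numpy as np

def centroids(object_pcds):
    """One 3D centroid per detected object's point cloud ((N_i, 3) arrays)."""
    return np.array([pcd.mean(axis=0) for pcd in object_pcds])

def estimate_extrinsics(src, dst):
    """Return (R, t) minimizing sum_i ||R @ src[i] + t - dst[i]||^2.

    src, dst: (N, 3) arrays of corresponding centroids from the
    miscalibrated and the reference sensor, respectively (N >= 3,
    not all collinear).
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy usage: recover a known 5-degree yaw + small translation miscalibration
# from four simulated object centroids.
rng = np.random.default_rng(0)
ref = rng.uniform(-10, 10, size=(4, 3))      # reference-sensor centroids
theta = np.radians(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.2, -0.1, 0.05])
mis = (ref - t_true) @ R_true                # so ref_i = R_true @ mis_i + t_true
R, t = estimate_extrinsics(mis, ref)
assert np.allclose(R @ mis.T + t[:, None], ref.T, atol=1e-9)
```

For the whole-object-PCD variant, the same least-squares objective is typically optimized iteratively (e.g., ICP-style nearest-neighbor correspondences refined over consecutive frames); the centroid formulation above simply collapses each object to a single correspondence.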