Safe navigation of an autonomous vehicle (AV) requires fast and accurate
perception of its driving environment; that is, the AV must persistently
detect and track the moving objects around it. These detection and tracking
tasks are performed by the AV perception system, which uses data from sensors
such as LIDARs, radars, and cameras. Most AVs are fitted with multiple sensors
to provide redundancy and avoid dependence on any single sensor. This strategy has
been shown to yield accurate perception results when the sensors work well and
are calibrated correctly. However, over time, cumulative use of the AV or
poor sensor placement may lead to faults that need to be corrected. This article
proposes an online algorithm that corrects the faulty perception of an AV by
determining a set of transformations that align a cluster of measurements from
a moving vehicle in the scene with the corresponding detection in an image
taken by the synchronized, forward-facing camera of the AV.
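One way to picture this alignment step (a minimal sketch, not the article's exact formulation) is as a small optimization over a 2D translation, scale, and rotation applied to the LIDAR cluster after it has been projected into the camera image; the function name align_cluster_to_detection and the simple center/extent cost below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def align_cluster_to_detection(cluster_px, bbox_center, bbox_size):
    """Fit a 2D translation, scale, and rotation that moves a projected
    LIDAR cluster (N x 2 pixel coordinates) onto a camera detection,
    summarized here by its bounding-box center and size."""
    cluster_px = np.asarray(cluster_px, dtype=float)
    bbox_center = np.asarray(bbox_center, dtype=float)
    bbox_size = np.asarray(bbox_size, dtype=float)

    def transform(params, pts):
        tx, ty, s, theta = params
        c, si = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -si], [si, c]])
        return s * pts @ rot.T + np.array([tx, ty])

    def cost(params):
        moved = transform(params, cluster_px)
        # One possible alignment objective: match the transformed cluster's
        # center and extent to the detection's center and extent.
        center_err = np.linalg.norm(moved.mean(axis=0) - bbox_center)
        extent_err = np.linalg.norm(np.ptp(moved, axis=0) - bbox_size)
        return center_err + extent_err

    x0 = np.array([0.0, 0.0, 1.0, 0.0])   # start from the identity transform
    result = minimize(cost, x0, method="Nelder-Mead")
    return result.x                        # (tx, ty, scale, rotation)
```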
The correction algorithm is first tested assuming the availability of ground
truth information to correct the LIDAR, and then tested with camera images
that are used to determine the ground truth. The comparison metric between the expected and optimal
parameters is the mean absolute error (MAE). The translation, scale, and
orientation errors between the expected and optimal parameters when using ground
truth data in the correction algorithm are 9.41 × 10⁻⁴ m, 3.84 ×
10⁻⁷, and 3.82 × 10⁻² degrees, respectively; and the
errors for camera images are 0.414 m, 0.017, and 0.007 degrees,
respectively.
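For reference, the MAE used here is simply the mean of the absolute differences between corresponding expected and optimal parameter values; a minimal sketch with illustrative inputs follows.

```python
import numpy as np

def mean_absolute_error(expected, optimal):
    """MAE between expected and optimal correction parameters."""
    expected = np.asarray(expected, dtype=float)
    optimal = np.asarray(optimal, dtype=float)
    return float(np.mean(np.abs(expected - optimal)))

# Illustrative call with made-up values (not results from the article):
# mean_absolute_error([1.20, -0.35, 0.08], [1.19, -0.34, 0.08])
```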