
Scene Structure Classification as Preprocessing for Feature-Based Visual Odometry

Journal Article
2018-01-0610
ISSN: 1946-4614, e-ISSN: 1946-4622
Published April 03, 2018 by SAE International in United States
Citation: Rawashdeh, N., Aladem, M., Baek, S., and Rawashdeh, S., "Scene Structure Classification as Preprocessing for Feature-Based Visual Odometry," SAE Int. J. Passeng. Cars – Electron. Electr. Syst. 11(3):231-239, 2018, https://doi.org/10.4271/2018-01-0610.
Language: English

Abstract:

Cameras and image processing hardware are rapidly evolving technologies that enable real-time applications for passenger cars, ground robots, and aerial vehicles. Visual odometry (VO) algorithms estimate changes in vehicle position and orientation from the images of a moving camera. For ground vehicles, such as cars, indoor robots, and planetary rovers, VO can augment movement estimation from rotary wheel encoders. Feature-based VO relies on detecting feature points, such as corners or edges, in image frames as the vehicle moves. These points are tracked across frames and used, as a group, to estimate motion. Not all detected points are tracked, since not all are found in the next frame. Even tracked features may be incorrect, since a feature point may be matched to a nearby but wrong feature point. This depends on the driving scenario, which can include driving at high speed or in rain or snow. This article investigates the effect of image structural content on the performance of feature tracking and motion estimation in known VO algorithms. As a preprocessing step, the image frame is divided into regions of three classes: Transient, Texture, and Random. The number of tracked features differs across these regions, as validated by the presented results. VO algorithms can fail intermittently when too few of the detected points contribute to tracking and the remaining points are false matches that act as outliers to the motion estimator. Excluding these poor feature points in advance can increase the robustness of the algorithms.
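
For illustration only (not part of the article), the sketch below shows how such a preprocessing mask could plug into a standard detect-and-track pipeline using OpenCV. The function classify_regions is a hypothetical placeholder, not the Transient/Texture/Random classifier described in the paper, and the block size and variance threshold are assumed values.

```python
# Minimal sketch of masked feature detection and tracking for feature-based VO.
# classify_regions() is a hypothetical stand-in for the paper's scene-structure
# classifier; here it simply keeps blocks with enough intensity variance.
import cv2
import numpy as np

def classify_regions(gray, block=32):
    """Placeholder classifier: mark image blocks as usable (255) or not (0)."""
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = gray[y:y + block, x:x + block]
            if patch.var() > 100.0:  # assumed threshold, for illustration only
                mask[y:y + block, x:x + block] = 255
    return mask

def track_features(prev_gray, curr_gray):
    # Restrict corner detection to regions the (placeholder) classifier keeps.
    mask = classify_regions(prev_gray)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7, mask=mask)
    if pts is None:
        return None, None
    # Track the detected corners into the next frame with pyramidal Lucas-Kanade.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.flatten() == 1
    return pts[ok], nxt[ok]
```

The matched point pairs returned by such a routine could then be passed to cv2.findEssentialMat and cv2.recoverPose to estimate relative camera motion, with RANSAC handling the false matches that remain after masking.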