Mobile Robot Localization Evaluations with Visual Odometry in Varying Environments Using Festo-Robotino

SAE Technical Paper 2020-01-1022
Published April 14, 2020
Event: WCX SAE World Congress Experience
Abstract
Autonomous ground vehicles can use a variety of techniques to navigate the environment and deduce their motion and location from sensory inputs. Visual odometry allows an autonomous vehicle to derive orientation and position information from camera frames recorded as the vehicle moves. This is especially useful when global positioning system (GPS) information is unavailable or wheel encoder measurements are unreliable. Feature-based visual odometry algorithms extract corner points from image frames and track the movement of these feature points over time. From this information, it is possible to estimate the motion of the camera and, by extension, of the vehicle. Visual odometry has its own set of challenges, such as detecting an insufficient number of feature points, poor camera setup, and fast-moving objects interrupting the scene. This paper investigates the effects of various disturbances on visual odometry. Moreover, it discusses the outcomes of several experiments performed using the Festo-Robotino robotic platform. The experiments are designed to evaluate how changes to the system's setup affect the overall quality and performance of an autonomous driving system. Environmental effects such as ambient light, shadows, and terrain are also investigated. Finally, possible improvements, including alternative camera options and programming methods, are discussed.
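The paper itself does not include code, but the core idea the abstract describes, estimating vehicle motion from the movement of tracked feature points between frames, can be sketched in a simplified planar form. The example below is an illustrative assumption, not the paper's method: it uses a least-squares Kabsch/Procrustes alignment on noiseless, perfectly matched 2-D points, whereas a real feature-based visual odometry pipeline would also handle feature detection, matching, outlier rejection, and 3-D geometry.

```python
import numpy as np

def estimate_rigid_motion(prev_pts, curr_pts):
    """Estimate the 2-D rotation R and translation t that map prev_pts
    onto curr_pts in the least-squares sense (Kabsch/Procrustes).
    A planar stand-in for the pose-recovery step of visual odometry."""
    mu_p = prev_pts.mean(axis=0)
    mu_c = curr_pts.mean(axis=0)
    H = (prev_pts - mu_p).T @ (curr_pts - mu_c)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against returning a reflection instead of a rotation.
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_c - R @ mu_p
    return R, t

# Simulated feature tracks: the robot rotates 5 degrees and translates (0.2, 0.1).
rng = np.random.default_rng(0)
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.2, 0.1])
prev_pts = rng.uniform(-1.0, 1.0, size=(30, 2))   # tracked corner points, frame k
curr_pts = prev_pts @ R_true.T + t_true           # the same points in frame k+1

R_est, t_est = estimate_rigid_motion(prev_pts, curr_pts)
angle_est = np.degrees(np.arctan2(R_est[1, 0], R_est[0, 0]))
```

With noiseless matches the recovered rotation and translation equal the simulated motion; in practice, sensor noise, mismatched features, and the disturbances studied in the paper (lighting, shadows, terrain) degrade this estimate, which motivates the experiments described above.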
Details
DOI
https://doi.org/10.4271/2020-01-1022
Pages
9
Citation
Abdo, A., Ibrahim, R., and Rawashdeh, N., "Mobile Robot Localization Evaluations with Visual Odometry in Varying Environments Using Festo-Robotino," SAE Technical Paper 2020-01-1022, 2020, https://doi.org/10.4271/2020-01-1022.
Additional Details
Product Code
2020-01-1022
Content Type
Technical Paper
Language
English