Estimating Unmanned Aerial System Pose Using Cast Shadows and Computer Vision Techniques
Paper Number: 2024-01-1931
Published: 03/05/2024
- Abstract
- Although there have been significant advancements in vision-based localization techniques over recent years, there are still problems that need to be addressed. One of these problems is localization in dynamically illuminated environments, such as one might find when a small unmanned aerial system (sUAS) equipped with a lighting payload attempts to autonomously navigate inside a dark, damaged structure. When visual odometry (VO) methods are implemented in a dynamically illuminated environment, the accuracy of the state estimation degrades because shadows are improperly identified as features, and these shadow-features move differently than static objects in the environment. As a result, sUAS pose estimates often accumulate errors without bound. This work examines the strengths and limitations of prevailing sUAS self-localization techniques in dark environments and introduces a novel shadow-based localization methodology capable of augmenting these established techniques. This approach capitalizes on the shadows cast by a light source affixed to an sUAS. A communication bridge between the sUAS and a ground station, established using the Robot Operating System (ROS), offloads resource-intensive computational tasks such as real-time feature extraction with OpenCV libraries. The pose estimation algorithm harnesses these extracted features to approximate the vehicle's pose, and this estimate is communicated back to the vehicle via the ROS communication bridge. Preliminary results obtained from an experimental implementation of the proposed methodology are discussed.
- Pages
- 7
- Citation
- Sarkar, S., Garcia, M., and Eubanks, B., "Estimating Unmanned Aerial System Pose Using Cast Shadows and Computer Vision Techniques," SAE Technical Paper 2024-01-1931, 2024, https://doi.org/10.4271/2024-01-1931.