A Multi-scale Fusion Obstacle Detection Algorithm for Autonomous Driving Based on Camera and Radar

Abstract
Effective environment perception is a prerequisite for the successful application of autonomous driving; in particular, the detection of traffic objects affects downstream tasks such as driving decisions and motion execution in autonomous vehicles. However, recent studies show that a single sensor cannot perceive the surrounding environment stably and effectively in complex conditions. In this article, we propose a multi-scale feature fusion framework that uses a dual backbone network to extract camera and radar feature maps and performs feature fusion at three different feature scales using a new fusion module. In addition, we introduce a new generation mechanism for radar projection images and relabel the nuScenes dataset, since no other suitable autonomous driving dataset is available for model training and testing. The experimental results show that the fusion models achieve superior accuracy over visual image-based models under the PASCAL Visual Object Classes (VOC) and Common Objects in Context (COCO) evaluation criteria, about 2% over the baseline model (YOLOX).
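The article does not include code; the following is a minimal PyTorch-style sketch of the architecture the abstract describes, namely a dual backbone (camera image and radar projection image) with feature fusion at three scales feeding a YOLOX-style detection head (omitted here). All class names (TinyBackbone, FusionBlock, DualBackboneFusion), channel widths, the 3-channel radar projection layout, and the concatenation-based fusion block are assumptions for illustration, not the authors' implementation.

```python
# Sketch only: dual-backbone, three-scale camera-radar fusion.
# Module names, widths, and the fusion design are assumptions, not the paper's code.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, stride=2):
    """3x3 conv + BN + SiLU, used as a stand-in backbone stage."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.SiLU(inplace=True),
    )


class TinyBackbone(nn.Module):
    """Toy backbone producing feature maps at strides 8, 16, and 32."""
    def __init__(self, in_ch, widths=(64, 128, 256)):
        super().__init__()
        self.stem = nn.Sequential(conv_block(in_ch, 32), conv_block(32, 48))  # /4
        self.p3 = conv_block(48, widths[0])          # /8
        self.p4 = conv_block(widths[0], widths[1])   # /16
        self.p5 = conv_block(widths[1], widths[2])   # /32

    def forward(self, x):
        x = self.stem(x)
        f3 = self.p3(x)
        f4 = self.p4(f3)
        f5 = self.p5(f4)
        return f3, f4, f5


class FusionBlock(nn.Module):
    """Fuses same-scale camera and radar features by concatenation + 1x1 conv.
    The paper's fusion module is more elaborate; this is only a placeholder."""
    def __init__(self, cam_ch, rad_ch, out_ch):
        super().__init__()
        self.reduce = nn.Sequential(
            nn.Conv2d(cam_ch + rad_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.SiLU(inplace=True),
        )

    def forward(self, cam_feat, rad_feat):
        return self.reduce(torch.cat([cam_feat, rad_feat], dim=1))


class DualBackboneFusion(nn.Module):
    """Camera (3-channel) and radar projection image (assumed 3-channel) pass
    through separate backbones; features are fused at each of three scales."""
    def __init__(self, radar_channels=3, widths=(64, 128, 256)):
        super().__init__()
        self.cam_backbone = TinyBackbone(3, widths)
        self.rad_backbone = TinyBackbone(radar_channels, widths)
        self.fusion = nn.ModuleList(FusionBlock(w, w, w) for w in widths)

    def forward(self, camera_img, radar_img):
        cam_feats = self.cam_backbone(camera_img)
        rad_feats = self.rad_backbone(radar_img)
        # One fused map per scale; a detection head would consume these.
        return [f(c, r) for f, c, r in zip(self.fusion, cam_feats, rad_feats)]


if __name__ == "__main__":
    model = DualBackboneFusion()
    cam = torch.randn(1, 3, 256, 416)   # camera image
    rad = torch.randn(1, 3, 256, 416)   # radar projection image (assumed layout)
    for i, feat in enumerate(model(cam, rad)):
        print(f"fused scale {i}: {tuple(feat.shape)}")  # strides 8 / 16 / 32
```

In this sketch the fused maps at strides 8, 16, and 32 play the role of the three fusion scales mentioned in the abstract; swapping the toy backbones for the actual camera and radar backbones and attaching a YOLOX head would complete the pipeline.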
Details
DOI
https://doi.org/10.4271/12-06-03-0022
Pages
12
Citation
He, S., Lin, C., and Hu, Z., "A Multi-scale Fusion Obstacle Detection Algorithm for Autonomous Driving Based on Camera and Radar," SAE Int. J. CAV 6(3):333-343, 2023, https://doi.org/10.4271/12-06-03-0022.
Additional Details
Publisher
SAE International
Published
Mar 10, 2023
Product Code
12-06-03-0022
Content Type
Journal Article
Language
English