A Multi-scale Fusion Obstacle Detection Algorithm for Autonomous Driving Based on Camera and Radar
- Sihuang He - Hunan University, State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body, China
- Chen Lin - Hunan University, State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body, China
- Zhaohui Hu - Hunan University, State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body, China
Journal Article
12-06-03-0022
ISSN: 2574-0741, e-ISSN: 2574-075X
Citation:
He, S., Lin, C., and Hu, Z., "A Multi-scale Fusion Obstacle Detection Algorithm for Autonomous Driving Based on Camera and Radar," SAE Intl. J CAV 6(3):2023, https://doi.org/10.4271/12-06-03-0022.
Language:
English
Abstract:
Effective environment perception is a prerequisite for the successful application
of autonomous driving, especially the detection of traffic objects, which affects
downstream tasks such as driving decisions and motion execution in autonomous
vehicles. However, recent studies show that a single sensor cannot perceive the
surrounding environment stably and effectively in complex conditions. In this
article, we propose a multi-scale feature fusion framework that uses a dual-backbone
network to extract camera and radar feature maps and fuses them at three feature
scales with a new fusion module. In addition, we introduce a new mechanism for
generating radar projection images and relabel the nuScenes dataset, since no other
suitable autonomous driving dataset is available for model training and testing.
Experimental results show that the fusion models achieve higher accuracy than
image-only models under the PASCAL Visual Object Classes (VOC) and Common Objects
in Context (COCO) evaluation criteria, improving on the baseline model (YOLOX) by
about 2%.
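
The abstract describes a dual-backbone design that extracts camera and radar feature maps separately and fuses them at three feature scales. The following is a minimal sketch of that idea, not the authors' implementation: the module names, channel sizes, and the concatenate-then-1x1-conv fusion rule are illustrative assumptions.

```python
# Sketch of a dual-backbone, three-scale camera/radar fusion (illustrative only;
# the paper's actual fusion module and backbones are not reproduced here).
import torch
import torch.nn as nn


class ScaleFusion(nn.Module):
    """Fuse camera and radar feature maps that share one spatial scale."""

    def __init__(self, cam_ch: int, rad_ch: int, out_ch: int):
        super().__init__()
        # Assumed fusion rule: channel-wise concatenation followed by a 1x1 conv.
        self.merge = nn.Sequential(
            nn.Conv2d(cam_ch + rad_ch, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch),
            nn.SiLU(),
        )

    def forward(self, cam_feat: torch.Tensor, rad_feat: torch.Tensor) -> torch.Tensor:
        return self.merge(torch.cat([cam_feat, rad_feat], dim=1))


class DualBackboneFusion(nn.Module):
    """Run separate camera/radar backbones and fuse at three feature scales."""

    def __init__(self, cam_backbone: nn.Module, rad_backbone: nn.Module,
                 cam_chs=(256, 512, 1024), rad_chs=(64, 128, 256),
                 out_chs=(256, 512, 1024)):
        super().__init__()
        self.cam_backbone = cam_backbone  # assumed to return 3 feature maps (e.g., P3, P4, P5)
        self.rad_backbone = rad_backbone  # assumed to return 3 maps at matching scales
        self.fusers = nn.ModuleList(
            ScaleFusion(c, r, o) for c, r, o in zip(cam_chs, rad_chs, out_chs)
        )

    def forward(self, image: torch.Tensor, radar_proj: torch.Tensor):
        cam_feats = self.cam_backbone(image)       # list of 3 tensors
        rad_feats = self.rad_backbone(radar_proj)  # list of 3 tensors from the radar projection image
        # The fused pyramid would feed a YOLOX-style neck and detection head downstream.
        return [f(c, r) for f, c, r in zip(self.fusers, cam_feats, rad_feats)]
```

In this sketch the radar input is assumed to be a projection image (radar points rendered into the camera plane), so both branches produce spatially aligned feature maps that can be concatenated per scale.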