A Concise Camera-Radar Fusion Framework for Object Detection and Data Association


Event
SAE 2022 Intelligent and Connected Vehicles Symposium
Abstract
Multi-sensor fusion strategies have gradually become a consensus in autonomous driving research. Among them, radar-camera fusion has attracted wide attention because it improves both the dimension and the accuracy of perception at a relatively low cost; however, the processing and association of radar and camera data remain an obstacle to related research. Our approach is to build a concise framework for camera and radar detection and data association. For visual object detection, the state-of-the-art YOLOv5 algorithm is further improved and serves as the image detector. Before fusion, the raw radar reflections are projected onto the image plane and hierarchically clustered; the projected radar echoes and the image detection results are then matched using the Hungarian algorithm. In this way, object categories and their corresponding distance and speed information can be obtained, providing reliable input for the subsequent object tracking task. Results show that the fusion method greatly improves the perception dimension and accuracy of intelligent vehicles in adverse environments: matching accuracy reaches 62.3% on the VTTI dataset, and the camera-radar association process takes 0.013 s per frame. The entire implementation is based on ROS (Robot Operating System) to facilitate practical application of the algorithms.
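The two core steps described above — projecting radar reflections onto the image plane and matching them to image detections with the Hungarian algorithm — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the intrinsic matrix `K`, the radar-to-camera extrinsic `T_cam_radar`, and the 80-pixel gating threshold are all assumed values for the example, and the clustering step is omitted (the radar points here stand in for cluster centroids).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm

def project_radar_to_image(points_xyz, K, T_cam_radar):
    """Project 3-D radar points (N, 3) onto the image plane using a
    pinhole model. K (3x3) and T_cam_radar (4x4) are assumed calibration
    parameters, not values from the paper."""
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])  # homogeneous coords
    pts_cam = (T_cam_radar @ pts_h.T)[:3]   # radar frame -> camera frame
    uv = K @ pts_cam                        # pinhole projection
    uv = uv[:2] / uv[2]                     # divide by depth
    return uv.T                             # (N, 2) pixel coordinates

def associate(radar_uv, box_centers, gate=80.0):
    """Match projected radar echoes to detection-box centres by minimizing
    total pixel distance; pairs farther apart than `gate` pixels are rejected.
    The gate value is an illustrative assumption."""
    cost = np.linalg.norm(radar_uv[:, None, :] - box_centers[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]

# Usage: one radar return 10 m straight ahead lands at the principal point
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
T = np.eye(4)  # assume radar and camera frames coincide for the sketch
uv = project_radar_to_image(np.array([[0., 0., 10.]]), K, T)
matches = associate(uv, np.array([[321., 239.], [100., 100.]]))
```

Each matched pair then carries both the detector's class label and the radar cluster's range/velocity, which is the fused output handed to the tracker.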
Details
DOI
https://doi.org/10.4271/2022-01-7097
Pages
9
Citation
He, Y., Zhao, J., Lyu, N., Li, L. et al., "A Concise Camera-Radar Fusion Framework for Object Detection and Data Association," SAE Technical Paper 2022-01-7097, 2022, https://doi.org/10.4271/2022-01-7097.
Additional Details
Publisher
SAE International
Published
Dec 22, 2022
Product Code
2022-01-7097
Content Type
Technical Paper
Language
English