Low-Level Data Fusion between Camera and Automotive RADAR for Vehicle and Pedestrian Detection Using nuScenes Database

2024-36-0064

12/20/2024

Event
SAE Brasil 2024 Congress
Abstract
Autonomous driving technology has become a focal point of research globally, with significant efforts directed toward enhancing its key components: environment perception, vehicle localization, path planning, and motion control. These components work together to enable autonomous vehicles to navigate complex environments safely and efficiently. Among them, environment perception is critical, as it involves the robust, real-time detection of targets on the road. This process relies heavily on the integration of multiple sensors, making data fusion an indispensable tool in the early stages of automation. Fusion between camera and RADAR (Radio Detection and Ranging) is attractive because the two sensors are complementary: it combines the high lateral resolution of the vision system with RADAR's robustness to adverse weather and insensitivity to lighting conditions, at a lower production cost than a LiDAR (Light Detection and Ranging) sensor. Given the importance of sensor fusion for automated driving, this paper examines a low-level sensor fusion method that uses RADAR detections to generate Regions of Interest (ROIs) in the camera coordinate system. We selected a fusion algorithm based on the RRPN (Radar Region Proposal Network), which combines RADAR and camera data, and compared it to Faster R-CNN, which uses only camera data. Our goal was to study the advantages and limitations of the proposed method. We explored the nuScenes database to determine the best aspect ratios for different object sizes and modified the RRPN algorithm to generate more effective anchors. For training, we used camera and front RADAR data from the nuScenes database. The proposed models were evaluated with the COCO dataset metrics under three conditions: day, night, and rain.
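The core ROI-generation idea described above can be sketched as follows: project each RADAR detection onto the image plane with a pinhole camera model, then place anchor boxes of several scales and aspect ratios around the projected point, with anchor size shrinking as distance grows. This is a minimal illustrative sketch; the intrinsics, base size, scales, and aspect ratios below are placeholder assumptions, not the tuned values from the paper.

```python
import numpy as np

def project_radar_to_image(point_3d, K):
    """Project a 3D RADAR detection (camera frame, meters) onto the
    image plane using a pinhole model with intrinsic matrix K."""
    x, y, z = point_3d
    u = K[0, 0] * x / z + K[0, 2]
    v = K[1, 1] * y / z + K[1, 2]
    return u, v, z

def generate_anchors(center_u, center_v, depth,
                     base_size=1000.0,          # illustrative constant
                     scales=(0.5, 1.0, 2.0),    # illustrative scales
                     ratios=(0.5, 1.0, 2.0)):   # illustrative h/w ratios
    """Generate anchor boxes (x1, y1, x2, y2) centered on a projected
    RADAR point. Anchor side length is inversely proportional to depth,
    so distant objects receive smaller proposals."""
    anchors = []
    for s in scales:
        for r in ratios:
            side = base_size / depth * s
            w = side / np.sqrt(r)   # ratio r = height / width
            h = side * np.sqrt(r)
            anchors.append((center_u - w / 2, center_v - h / 2,
                            center_u + w / 2, center_v + h / 2))
    return np.array(anchors)

# Hypothetical intrinsics, roughly similar to a nuScenes front camera
K = np.array([[1266.0, 0.0, 816.0],
              [0.0, 1266.0, 491.0],
              [0.0, 0.0, 1.0]])

u, v, depth = project_radar_to_image((2.0, 0.5, 20.0), K)
rois = generate_anchors(u, v, depth)
print(rois.shape)  # (9, 4): 3 scales x 3 aspect ratios
```

In the full RRPN pipeline the anchors would also be shifted around the projected point to compensate for RADAR localization error, then scored and refined by the detection network; this sketch covers only the projection and anchor-generation step.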
Details
DOI
https://doi.org/10.4271/2024-36-0064
Pages
9
Citation
Cury, H., Teixeira, E., and Silva, R., "Low-Level Data Fusion between Camera and Automotive RADAR for Vehicle and Pedestrian Detection Using nuScenes Database," SAE Technical Paper 2024-36-0064, 2024, https://doi.org/10.4271/2024-36-0064.
Additional Details
Publisher
SAE International
Published
Dec 20, 2024
Product Code
2024-36-0064
Content Type
Technical Paper
Language
English