GRC-Net: Fusing GAT-Based 4D Radar and Camera for 3D Object Detection

Event
SAE 2023 Intelligent Urban Air Mobility Symposium
Abstract
The fusion of multi-modal perception plays a pivotal role in the behavior decision-making of autonomous vehicles. However, most previous research has focused on fusing LiDAR and cameras. Although LiDAR provides an ample supply of point cloud data, its high cost and the sheer volume of points can introduce computational delays. Investigating perception fusion based on 4D millimeter-wave radar is therefore of great importance for reducing cost and enhancing safety. Nevertheless, 4D millimeter-wave radar poses challenges of its own, including sparse point clouds, limited information content, and a lack of established fusion strategies. In this paper, we introduce, for the first time, an approach that leverages Graph Neural Networks to help express features of 4D millimeter-wave radar point clouds. The approach effectively extracts features from the unstructured point cloud, mitigating the detection failures caused by sparsity. In addition, we propose a Multi-Modal Fusion Module (MMFM) that aligns and fuses features from the graph, the radar pseudo-image generated from pillars, and the camera image within a common geometric space. We validate our model on the View-of-Delft (VoD) dataset. Experimental results demonstrate that the proposed method efficiently fuses camera and 4D radar features, yielding improved 3D detection performance.
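To make the graph-based radar branch concrete, the sketch below builds a k-nearest-neighbor graph over a toy 4D-radar point cloud (x, y, z, Doppler, RCS per point) and runs one GAT-style attention aggregation over it. This is a minimal NumPy illustration of the general technique; the feature sizes, single attention head, and k value are assumptions for demonstration, not the actual GRC-Net configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4D-radar point cloud: N points, each (x, y, z, doppler, RCS).
# Shapes below are illustrative assumptions, not the paper's settings.
N, F_IN, F_OUT, K = 32, 5, 8, 4
points = rng.normal(size=(N, F_IN))

# "Learnable" parameters, randomly initialized here.
W = rng.normal(scale=0.1, size=(F_IN, F_OUT))   # shared feature projection
a = rng.normal(scale=0.1, size=(2 * F_OUT,))    # attention scoring vector

def knn_indices(xyz, k):
    """Indices of the k nearest neighbors by Euclidean distance (self excluded)."""
    d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def gat_layer(x, nbrs):
    """One GAT-style step: score each edge, softmax over neighbors, aggregate."""
    h = x @ W                                        # (N, F_OUT) projected features
    out = np.empty_like(h)
    for i in range(len(h)):
        # Concatenate center and neighbor features for edge scoring.
        pairs = np.concatenate(
            [np.repeat(h[i:i + 1], len(nbrs[i]), axis=0), h[nbrs[i]]], axis=1)
        z = pairs @ a
        e = np.maximum(0.2 * z, z)                   # LeakyReLU edge scores
        alpha = np.exp(e - e.max())
        alpha /= alpha.sum()                         # softmax over neighbors
        out[i] = alpha @ h[nbrs[i]]                  # attention-weighted sum
    return out

nbrs = knn_indices(points[:, :3], K)                 # graph from spatial coords only
features = gat_layer(points, nbrs)
print(features.shape)
```

The graph is built from spatial coordinates alone while attention operates on the full projected features, one plausible way to let velocity and reflectivity modulate how sparse radar returns share information with their spatial neighbors.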
Details
DOI
https://doi.org/10.4271/2023-01-7088
Pages
7
Citation
Fan, L., Zeng, C., Li, Y., Wang, X. et al., "GRC-Net: Fusing GAT-Based 4D Radar and Camera for 3D Object Detection," SAE Int. J. Adv. & Curr. Prac. in Mobility 6(5):2690-2696, 2024, https://doi.org/10.4271/2023-01-7088.
Additional Details
Publisher
SAE International
Published
Dec 31, 2023
Product Code
2023-01-7088
Content Type
Journal Article
Language
English