Deep-PDANet: Camera-Radar Fusion for Depth Estimation in Autonomous Driving Scenarios

2023-01-7038

12/20/2023

Event: SAE 2023 Intelligent and Connected Vehicles Symposium

Abstract
The results of monocular depth estimation are not satisfactory in autonomous driving scenarios. Combining radar and camera for depth estimation is a feasible solution to the depth estimation problem in such scenes. The radar-camera pixel depth association (RC-PDA) model establishes a reliable correlation between radar depth and camera pixels. In this paper, a new depth estimation model named Deep-PDANet, based on RC-PDA, is proposed. It increases the depth and width of the network and alleviates the problem of network degradation through a residual structure. Convolution kernels of different sizes are selected in the basic units to further improve the ability to extract global information while still capturing information from individual pixels. The convergence speed and learning ability of the network are improved by a staged training strategy with a multi-weight loss function. Comparison experiments and an ablation study were performed on the nuScenes dataset; the proposed model improves accuracy over the baseline model and exceeds existing state-of-the-art algorithms.
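The basic unit described above, a residual block combining convolution kernels of different sizes, can be sketched as follows. This is a minimal illustration in PyTorch; the class name, channel counts, and kernel choices (3x3 for broader context, 1x1 for per-pixel features) are assumptions for illustration, since the abstract does not specify Deep-PDANet's exact configuration.

```python
import torch
import torch.nn as nn


class MixedKernelResidualBlock(nn.Module):
    """Hypothetical basic unit: parallel 3x3 and 1x1 convolutions
    (broader spatial context vs. single-pixel features) fused under a
    residual skip connection, which helps deeper/wider networks train
    without degradation."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=1)
        self.bn = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.conv3(x) + self.conv1(x)  # fuse multi-scale responses
        return self.relu(self.bn(out) + x)   # residual skip connection


# Example: the block preserves the feature-map shape.
x = torch.randn(1, 16, 32, 32)
y = MixedKernelResidualBlock(16)(x)
```

Because the skip connection requires matching shapes, both branches keep the channel count and spatial size unchanged.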
Details

DOI: https://doi.org/10.4271/2023-01-7038
Pages: 8
Citation: Ai, W., Ma, Z., and Zheng, L., "Deep-PDANet: Camera-Radar Fusion for Depth Estimation in Autonomous Driving Scenarios," SAE Technical Paper 2023-01-7038, 2023, https://doi.org/10.4271/2023-01-7038.
Additional Details

Content Type: Technical Paper
Language: English