CANet: A Lightweight Cross-Attention Guided Network for Unstructured Road Segmentation

2025-01-7309

12/31/2025

Authors
Xuewei Wang, Guangyuan Cao, Xiao Liang, Shaohua Li
Abstract
To address background interference and blurred road boundaries in road segmentation for unstructured scenes, a lightweight, high-accuracy cross-attention-based road segmentation model (CANet) is proposed. The encoder is built on the lightweight MobileNetV2 network, which keeps the model compact while strengthening feature discrimination for unstructured roads, enabling efficient feature extraction. The decoder integrates a cross-attention mechanism with a low-level feature fusion branch. The attention mechanism captures long-range context in the feature maps, improving the model’s perception of road boundaries and mitigating edge blurring, while the low-level feature fusion branch incorporates high-resolution information from shallow features to improve the detail accuracy and edge continuity of the segmentation results. Experimental results show that the proposed model attains a class intersection over union (IoU) of 93.28% for the unstructured road class and a mean IoU of 95.36% over the full unstructured scene with only 3.7M parameters, achieving higher accuracy than comparable models. The proposed solution supports real-time perception for autonomous driving systems.
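The abstract describes an encoder–decoder layout: a MobileNetV2 encoder, a cross-attention module that supplies long-range context to the decoder, and a low-level feature fusion branch that reinjects high-resolution detail. The following is a minimal PyTorch sketch of that layout; the channel widths, the choice of MobileNetV2 stages, and the exact cross-attention wiring (shallow features querying deep features) are assumptions for illustration only, not the paper's actual implementation.

```python
# Minimal sketch of a CANet-style encoder-decoder, assuming a MobileNetV2
# backbone split into shallow/deep stages and a cross-attention decoder.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import mobilenet_v2


class CrossAttention(nn.Module):
    """Cross-attention: shallow features query long-range context from deep
    features (the query/key assignment here is an assumption)."""
    def __init__(self, q_dim, kv_dim, dim=128, heads=4):
        super().__init__()
        self.q_proj = nn.Conv2d(q_dim, dim, 1)
        self.kv_proj = nn.Conv2d(kv_dim, dim, 1)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Conv2d(dim, dim, 1)

    def forward(self, low, high):
        q = self.q_proj(low)                       # B x D x Hq x Wq
        kv = self.kv_proj(high)                    # B x D x Hk x Wk
        B, D, Hq, Wq = q.shape
        q_seq = q.flatten(2).transpose(1, 2)       # B x (Hq*Wq) x D
        kv_seq = kv.flatten(2).transpose(1, 2)     # B x (Hk*Wk) x D
        ctx, _ = self.attn(q_seq, kv_seq, kv_seq)  # long-range context
        ctx = ctx.transpose(1, 2).reshape(B, D, Hq, Wq)
        return self.out(ctx)


class CANetSketch(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = mobilenet_v2(weights=None).features
        self.low_stage = backbone[:4]    # stride-4, 24-channel shallow features
        self.high_stage = backbone[4:]   # stride-32, 1280-channel deep features
        self.cross_attn = CrossAttention(q_dim=24, kv_dim=1280, dim=128)
        # Low-level fusion branch: keeps high-resolution detail for edges.
        self.low_branch = nn.Sequential(
            nn.Conv2d(24, 48, 1), nn.BatchNorm2d(48), nn.ReLU(inplace=True))
        self.fuse = nn.Sequential(
            nn.Conv2d(128 + 48, 128, 3, padding=1),
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.Conv2d(128, num_classes, 1))

    def forward(self, x):
        low = self.low_stage(x)           # high-resolution shallow features
        high = self.high_stage(low)       # low-resolution semantic features
        ctx = self.cross_attn(low, high)  # boundary-aware long-range context
        fused = torch.cat([ctx, self.low_branch(low)], dim=1)
        logits = self.fuse(fused)
        return F.interpolate(logits, size=x.shape[2:],
                             mode="bilinear", align_corners=False)


if __name__ == "__main__":
    model = CANetSketch(num_classes=2)
    out = model(torch.randn(1, 3, 512, 512))
    print(out.shape)  # torch.Size([1, 2, 512, 512])
```

In this sketch the two-class output corresponds to road versus background; the reported 3.7M-parameter budget of the paper is not guaranteed by these assumed channel widths.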
Details
Pages
13
Citation
Wang, Xuewei, Guangyuan Cao, Xiao Liang, and Shaohua Li, "CANet: A Lightweight Cross-Attention Guided Network for Unstructured Road Segmentation," SAE Technical Paper 2025-01-7309, 2025.
Additional Details
Publisher
SAE International
Published
12/31/2025
Product Code
2025-01-7309
Content Type
Technical Paper
Language
English