Detection & Tracking of Multi-Scenic Lane Based on Segnet-LSTM Semantic Split Network

Journal Article
2021-01-0083
ISSN: 2641-9645, e-ISSN: 2641-9645
Published April 06, 2021 by SAE International in United States
Citation: Ye, M., Tan, G., Tang, J., Feng, J. et al., "Detection & Tracking of Multi-Scenic Lane Based on Segnet-LSTM Semantic Split Network," SAE Int. J. Adv. & Curr. Prac. in Mobility 3(5):2494-2500, 2021, https://doi.org/10.4271/2021-01-0083.
Language: English

Abstract:

Lane detection is an important component of automatic pilot systems and advanced driver assistance systems (ADAS): the stability and precision of lane detection directly determine the precision of vehicle control and lane planning. Traditional machine-vision lane detection approaches suffer from low precision in complicated environments and cannot describe semantic features. Deep-learning lane detection networks such as SCNN, LaneNet, and ENet-SAD, in turn, face an imbalance between segmentation precision and storage usage. This paper proposes a high-efficiency deep-learning SegNet-LSTM semantic segmentation network. The network is composed of an encoding network and a corresponding decoding network. First, the encoder applies convolution and max pooling: it extracts texture details from five consecutive images and stores the positions of the pooling maxima, while lost points are restored by interpolation. The decoder then up-samples and convolves the feature maps and predicts the category of each pixel with a Softmax function. At the same time, segmentation is refined with the long- and short-term memory of the LSTM network, and the final output is a complete image that enables lane detection and tracking both during the day and at night. Experiments suggest that this approach provides higher fitting precision and feature-extraction precision than the Unet-LSTM algorithm: in the daytime its mIoU is 3.724% higher than that of Unet-LSTM and its lane-pixel classification precision is 4.126% higher; at night its mIoU is 5.6% higher and its lane-pixel classification precision is 4.1398% higher.
Moreover, on the basis of its high fitting precision, the algorithm balances storage usage against lane-detection precision, guaranteeing real-time performance and stability in various scenes and assisting automatic pilot systems more efficiently.
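The abstract does not give implementation details, but the mechanism it describes, storing the positions of the pooling maxima in the encoder so the decoder can up-sample to the exact original locations, is the defining feature of SegNet-style networks. A minimal NumPy sketch of that mechanism (function names and the toy input are illustrative, not from the paper) might look like:

```python
import numpy as np

def max_pool_2x2_with_indices(x):
    """2x2 max pooling that also records the flat index of each maximum,
    as a SegNet-style encoder does, so the decoder can restore positions."""
    h, w = x.shape
    ph, pw = h // 2, w // 2
    pooled = np.zeros((ph, pw), dtype=x.dtype)
    indices = np.zeros((ph, pw), dtype=np.int64)
    for i in range(ph):
        for j in range(pw):
            window = x[2 * i:2 * i + 2, 2 * j:2 * j + 2]
            k = int(np.argmax(window))          # 0..3 within the 2x2 window
            pooled[i, j] = window.flat[k]
            di, dj = divmod(k, 2)               # window-local -> global offset
            indices[i, j] = (2 * i + di) * w + (2 * j + dj)
    return pooled, indices

def max_unpool_2x2(pooled, indices, out_shape):
    """SegNet-style unpooling: scatter each pooled value back to its stored
    position; the remaining zeros are filled in by subsequent convolutions."""
    out = np.zeros(out_shape, dtype=pooled.dtype)
    out.flat[indices.ravel()] = pooled.ravel()
    return out

x = np.array([[1., 3., 2., 0.],
              [4., 2., 1., 5.],
              [0., 6., 3., 1.],
              [2., 1., 0., 7.]])
pooled, idx = max_pool_2x2_with_indices(x)
restored = max_unpool_2x2(pooled, idx, x.shape)
```

Because the maxima return to their exact original coordinates rather than a fixed corner of each window, boundary details such as thin lane markings survive the encode/decode round trip better than with plain interpolation-based up-sampling.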