Semantic Segmentation of Lane Lines for Pix2Pix Network Introducing Ghost Module
2025-01-7208
02/21/2025
- Content
- Semantic segmentation of lane lines is crucial for the safe operation of intelligent vehicles: it lets a vehicle interpret the distribution and meaning of scene elements in an image more precisely, but it demands both accuracy and real-time network performance. In this work, a lightweight module is selected and two prior models are improved and fused to build a lane line detection model, whose efficacy is then confirmed experimentally. The paper proposes a lightweight replacement scheme to address the large parameter count and difficult training convergence of generative adversarial network (GAN) models. The overall architecture is the Pix2Pix network, a conditional GAN; the generator's U-Net is pruned and its convolutions are replaced by the Ghost Module, together with a modified downsampling module that strengthens global fusion of lane line features during encoding and decoding. This reduces the loss of lane line features and shrinks the network's parameter size while maintaining accuracy. The model is trained on the VIL-100 dataset, and the experimental results show good performance compared with other algorithms. To evaluate the detection effect further, validation tests are conducted on the CULane dataset and on road images collected online. The results demonstrate improved lane line segmentation and meet expectations.
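The parameter savings the abstract attributes to the Ghost Module can be illustrated with a back-of-the-envelope count. A Ghost Module generates only a fraction of the output channels with an ordinary convolution and produces the remaining "ghost" maps with cheap depthwise operations. The sketch below (plain Python, with the ratio `s` and depthwise kernel size `d` as assumed defaults from the original GhostNet design, not values stated in this paper) compares a standard convolution against a Ghost Module for one hypothetical U-Net encoder stage:

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k, s=2, d=3):
    """Parameters of a Ghost module producing c_out feature maps:
    a primary conv makes c_out // s intrinsic maps, then (s - 1)
    cheap d x d depthwise ops generate the remaining ghost maps."""
    m = c_out // s                 # intrinsic feature maps
    primary = c_in * m * k * k     # ordinary convolution
    cheap = m * (s - 1) * d * d    # depthwise ghost generation
    return primary + cheap

# Hypothetical encoder stage: 128 -> 256 channels, 3 x 3 kernels
std = conv_params(128, 256, 3)     # 294,912 parameters
ghost = ghost_params(128, 256, 3)  # 148,608 parameters
print(std / ghost)                 # ~1.98, close to the ratio s = 2
```

The compression factor approaches `s` because the cheap depthwise term is negligible next to the primary convolution, which is what lets the pruned generator keep its capacity at a fraction of the parameter size.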
- Pages
- 11
- Citation
- Yang, K., and Wang, J., "Semantic Segmentation of Lane Lines for Pix2Pix Network Introducing Ghost Module," SAE Technical Paper 2025-01-7208, 2025, https://doi.org/10.4271/2025-01-7208.