A Framework for Vision-Based Lane Line Detection in Adverse Weather Conditions Using Vehicle-to-Infrastructure (V2I) Communication
ISSN: 0148-7191, e-ISSN: 2688-3627
Published April 02, 2019 by SAE International in United States
Lane line detection is a critical element of Advanced Driver Assistance Systems (ADAS). Although a significant amount of research has been dedicated to the detection and localization of lane lines over the past decade, there is still a gap in the robustness of the implemented systems. A major challenge for existing lane line detection algorithms is coping with bad weather conditions (e.g., rain, snow, fog, and haze). Snow presents an especially challenging environment, in which lane marks and road boundaries can be completely covered; in these scenarios, on-board sensors such as cameras, LiDAR, and radar are of very limited benefit. This research focuses on improving the robustness of lane line detection in adverse weather conditions, especially snow. A framework is proposed that relies on Vehicle-to-Infrastructure (V2I) communication to access reference images stored in the cloud. These reference images were captured at approximately the same geographical location when visibility was clear and weather conditions were good, and they are used to detect and localize the lane lines. The proposed framework then applies image registration techniques to align the sensed image (captured in adverse weather) with the reference image. Once the two images are aligned, the lane line information from the reference image is superimposed on the local map built by the ADAS or autonomous driving system. A real-world experiment is designed to evaluate the lane line localization error of the proposed framework against ground truth data. The measurements and evaluations are based on data gathered from a test vehicle equipped with a monocular camera, forward-looking radar, LiDAR, and GPS/IMU. Initial results show good potential for improving on the state-of-the-art approaches used in today's automotive industry.
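The core of the framework described above is aligning the sensed (adverse-weather) image with the cloud-stored reference image and then transferring the reference image's lane line coordinates into the sensed frame. As a minimal sketch of that registration-and-transfer step, the code below estimates a 2D affine transform by least squares from matched keypoint pairs (in practice, such correspondences would come from a feature matcher such as SIFT) and uses it to map lane line points from the reference image into the sensed image. The function names and the choice of an affine model are illustrative assumptions, not the paper's actual implementation, which may use a more general registration method.

```python
import numpy as np

def estimate_affine(ref_pts, sensed_pts):
    """Least-squares 2D affine transform mapping ref_pts -> sensed_pts.

    ref_pts, sensed_pts: (N, 2) arrays of matched keypoint coordinates
    (assumed to come from a feature matcher; N >= 3 required).
    Returns a 2x3 matrix M so that [x', y'] = M @ [x, y, 1].
    """
    n = ref_pts.shape[0]
    A = np.zeros((2 * n, 6))
    b = sensed_pts.reshape(-1)          # [x1', y1', x2', y2', ...]
    A[0::2, 0:2] = ref_pts              # rows for x': a*x + b*y + c
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = ref_pts              # rows for y': d*x + e*y + f
    A[1::2, 5] = 1.0
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)

def transfer_lane_points(lane_pts, M):
    """Map (N, 2) lane line points from the reference frame into the
    sensed frame using the estimated affine transform M."""
    homo = np.hstack([lane_pts, np.ones((lane_pts.shape[0], 1))])
    return homo @ M.T
```

With a known transform and noise-free correspondences, `estimate_affine` recovers the transform exactly, and `transfer_lane_points` places the reference lane coordinates into the sensed image, where they can be projected onto the vehicle's local map.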
Citation: Horani, M. and Rawashdeh, O., "A Framework for Vision-Based Lane Line Detection in Adverse Weather Conditions Using Vehicle-to-Infrastructure (V2I) Communication," SAE Technical Paper 2019-01-0684, 2019, https://doi.org/10.4271/2019-01-0684.