Rain-Adaptive Intensity-Driven Object Detection for Autonomous Vehicles
Technical Paper
2020-01-0091
ISSN: 0148-7191, e-ISSN: 2688-3627
Language: English
Abstract
Deep learning based approaches for object detection are heavily dependent on the nature of the training data, especially for vehicles driving in cluttered urban environments. Consequently, the performance of Convolutional Neural Network (CNN) architectures designed and trained on data captured under clear weather and favorable conditions can degrade significantly when tested under cloudy and rainy conditions. This naturally becomes a major safety issue for emerging autonomous vehicle platforms that rely on CNN based object detection. Furthermore, despite noticeable progress in the development of advanced visual deraining algorithms, such algorithms still have inherent limitations for improving the performance of state-of-the-art object detectors. In this paper, we address this problem area by making the following contributions. We systematically study and quantify the influence of a wide range of rain intensities on the performance of a popular deep learning based object detector trained with clear visual data. We show that even low rain intensities can significantly degrade the performance of a detector trained on clear visuals. Subsequently, we propose a Rain-Adaptive Intensity-Driven (RAID) deep learning framework for object detection under a variety of rain intensities. Controlled experiments based on rain simulations, seamlessly integrated with real visual data captured by moving vehicles in truly cluttered urban environments, show the superiority of the proposed RAID framework over state-of-the-art deraining methods used in conjunction with popular deep learning based object detectors.
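For readers who want to set up the kind of controlled study summarized above, the following is a minimal sketch (not the authors' implementation) of the evaluation protocol: synthetic rain streaks are composited at a chosen intensity onto clear-weather frames, a detector trained on clear data is run on the result, and detection quality is recorded per intensity. The simple additive rain model, the `detector` and `metric` callables, and all parameter values are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
import cv2


def add_rain(image, intensity, streak_length=15, angle_deg=-10.0, seed=0):
    """Composite simple synthetic rain streaks onto an image.

    image:      HxWx3 uint8 frame captured in clear weather.
    intensity:  fraction of pixels seeded with a rain drop (e.g. 0.001-0.02);
                larger values emulate heavier rain.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]

    # Sparse noise layer: each seeded pixel becomes one rain drop.
    drops = (rng.random((h, w)) < intensity).astype(np.float32)

    # Motion-blur the drops along a slanted direction to form streaks.
    kernel = np.zeros((streak_length, streak_length), np.float32)
    kernel[streak_length // 2, :] = 1.0 / streak_length
    center = (streak_length / 2.0, streak_length / 2.0)
    rotation = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
    kernel = cv2.warpAffine(kernel, rotation, (streak_length, streak_length))
    streaks = cv2.filter2D(drops, -1, kernel)

    # Additive composition of bright streaks, clipped to the valid range.
    rainy = image.astype(np.float32) + 255.0 * streaks[..., None]
    return np.clip(rainy, 0.0, 255.0).astype(np.uint8)


def evaluate_over_intensities(frames, ground_truth, detector, metric, intensities):
    """Measure detection quality as a function of simulated rain intensity.

    detector(image)                  -> list of (class_id, score, box) tuples
    metric(detections, ground_truth) -> scalar score such as mAP
    Both callables are user-supplied; the detector is assumed to have been
    trained on clear-weather data only.
    """
    scores = {}
    for intensity in intensities:
        detections = [detector(add_rain(frame, intensity)) for frame in frames]
        scores[intensity] = metric(detections, ground_truth)
    return scores
```

Sweeping `intensities` from very light to heavy rain and plotting the resulting scores reproduces, in spirit, the degradation study described in the abstract; a rain-adaptive detector such as RAID would then be compared against the clear-trained baseline on the same rainy frames.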
Citation
Hnewa, M. and Radha, H., "Rain-Adaptive Intensity-Driven Object Detection for Autonomous Vehicles," SAE Technical Paper 2020-01-0091, 2020, https://doi.org/10.4271/2020-01-0091.