Analysis of Illumination Condition Effect on Vehicle Detection in Photo-Realistic Virtual World
Technical Paper
2017-01-1998
ISSN: 0148-7191, e-ISSN: 2688-3627
Language: English
Abstract
Intelligent driving, aimed at collision avoidance and self-navigation, relies mainly on environmental sensing via radar, lidar, and/or cameras. While each sensor has its own unique pros and cons, cameras are especially good at object detection, recognition, and tracking. However, unpredictable environmental illumination can cause missed or false detections. To investigate the influence of illumination conditions on detection algorithms, we reproduced various illumination intensities in a photo-realistic virtual world, which leverages recent progress in computer graphics, and evaluated vehicle detection performance there. In the virtual world, the environmental illumination is controlled precisely, from low to high, to simulate different illumination conditions in driving scenarios (relative luminous intensity from 0.01 to 400). Sedans of different colors were modeled in the virtual world and used for the detection task. Faster R-CNN and You Only Look Once (YOLO), object detection neural networks with high accuracy and efficiency, were chosen for the experiments. The results show that: (1) a vehicle under excessively high illumination can hardly be detected; and (2) as the illumination intensity is adjusted from 0.01 to 400, the detection confidences of red and blue cars are higher than those of other colors, and their confidence deviations are also small, which means they are robust to variations in illumination. This work can provide insights not only for future autonomous vehicle design, but also for future on-board camera design.
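The evaluation loop described in the abstract (sweep the relative luminous intensity from 0.01 to 400, record each car color's detection confidence, then compare means and deviations) can be sketched as below. This is a hypothetical illustration, not the paper's code: `detect_confidence` is a stand-in stub for rendering a frame at a given intensity and running Faster R-CNN or YOLO on it, and the numeric values inside it are invented placeholders.

```python
import statistics

# Assumed sweep of relative luminous intensity, per the abstract's 0.01-400 range.
ILLUMINATION_LEVELS = [0.01, 0.1, 1, 10, 50, 100, 200, 400]
CAR_COLORS = ["red", "blue", "white", "black", "silver"]

def detect_confidence(color, intensity):
    """Stub for a detector's top 'car' confidence on one rendered frame.

    A real pipeline would render the virtual scene at `intensity`,
    run Faster R-CNN or YOLO, and return the maximum class score.
    The values below are placeholders, not measured results.
    """
    base = {"red": 0.95, "blue": 0.94, "white": 0.85,
            "black": 0.80, "silver": 0.82}[color]
    if intensity < 0.05 or intensity > 300:  # too dark or washed out
        return base * 0.3
    return base

def summarize(colors=CAR_COLORS, levels=ILLUMINATION_LEVELS):
    """Return {color: (mean confidence, std deviation)} over the sweep.

    A small mean deviation indicates robustness to illumination change,
    which is the comparison the paper draws between car colors.
    """
    out = {}
    for color in colors:
        scores = [detect_confidence(color, i) for i in levels]
        out[color] = (statistics.mean(scores), statistics.pstdev(scores))
    return out

if __name__ == "__main__":
    for color, (mean, dev) in summarize().items():
        print(f"{color:>6}: mean={mean:.3f}, dev={dev:.3f}")
```

Swapping the stub for a real detector call turns this into the paper's experiment: only `detect_confidence` touches the renderer and network, so the sweep and summary logic stay unchanged.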
Citation
Yang, S., Deng, W., Liu, Z., and Wang, Y., "Analysis of Illumination Condition Effect on Vehicle Detection in Photo-Realistic Virtual World," SAE Technical Paper 2017-01-1998, 2017, https://doi.org/10.4271/2017-01-1998.
References
- Singhvi, A. and Russell, K., "Inside the Self-Driving Tesla Fatal Accident," The New York Times, 2016.
- Girshick, R., Donahue, J., Darrell, T. et al., "Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, 580-587.
- Girshick, R., "Fast R-CNN," Proceedings of the IEEE International Conference on Computer Vision, 2015, 1440-1448.
- Ren, S., He, K., Girshick, R. et al., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks," Advances in Neural Information Processing Systems, 2015, 91-99.
- Dai, J., Li, Y., He, K., and Sun, J., "R-FCN: Object Detection via Region-Based Fully Convolutional Networks," arXiv preprint arXiv:1605.06409, 2016.
- He, K., Gkioxari, G., Dollár, P. et al., "Mask R-CNN," arXiv preprint arXiv:1703.06870, 2017.
- Redmon, J. and Farhadi, A., "YOLO9000: Better, Faster, Stronger," arXiv preprint arXiv:1612.08242, 2016.
- Liu, W., Anguelov, D., Erhan, D. et al., "SSD: Single Shot MultiBox Detector," European Conference on Computer Vision, Springer International Publishing, 2016, 21-37.
- Mikolajczyk, K. and Schmid, C., "A Performance Evaluation of Local Descriptors," IEEE Transactions on Pattern Analysis and Machine Intelligence 27(10):1615-1630, 2005.
- Schmid, C., Mohr, R., and Bauckhage, C., "Evaluation of Interest Point Detectors," International Journal of Computer Vision 37(2):151-172, 2000.
- Winder, S.A.J. and Brown, M., "Learning Local Image Descriptors," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2007, 1-8.
- Winder, S., Hua, G., and Brown, M., "Picking the Best DAISY," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009, 178-185.
- Marin, J., Vázquez, D., Gerónimo, D. et al., "Learning Appearance in Virtual Scenarios for Pedestrian Detection," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010, 137-144.
- Xu, J., Vázquez, D., López, A.M. et al., "Learning a Part-Based Pedestrian Detector in a Virtual World," IEEE Transactions on Intelligent Transportation Systems 15(5):2121-2131, 2014.
- Hattori, H., Naresh Boddeti, V., Kitani, K.M. et al., "Learning Scene-Specific Pedestrian Detectors Without Real Data," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, 3819-3827.
- Cordts, M., Omran, M., Ramos, S. et al., "The Cityscapes Dataset for Semantic Urban Scene Understanding," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, 3213-3223.
- Ros, G., Sellart, L., Materzynska, J. et al., "The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, 3234-3243.
- Karis, B., "Real Shading in Unreal Engine 4," Proc. ACM SIGGRAPH Courses, 2013, 22.