Boosted Deep Neural Network with Weighted Output Layers
Technical Paper
2017-01-1997
ISSN: 0148-7191, e-ISSN: 2688-3627
Language: English
Abstract
Vision-based driving environment perception is a current research hotspot in the field of automated driving, and it has made great progress thanks to continuous breakthroughs in deep neural network research. Deep neural networks have achieved remarkable success in a wide variety of image recognition tasks, such as pedestrian detection and vehicle identification, and have been commercialized successfully in intelligent monitoring systems. Nevertheless, driving environment perception places higher demands on the generalization performance of deep neural networks, which calls for further study of their design and training methods.
In this paper, we present a new boosted deep neural network that improves generalization performance while keeping the computational budget constant. First, the most representative methods for improving the generalization performance of deep neural networks are reviewed. Next, we analyze the merits and drawbacks of these methods under limited training samples and computational resources. We then describe a new boosted deep neural network with weighted output layers. On one hand, several output layers constitute sequential classifiers that boost the final performance of the proposed network; on the other hand, computation is saved by sharing part of the network structure among the classifiers. The proposed model therefore improves generalization performance while avoiding excessive growth in computation. Finally, experiments confirm the effectiveness of our model.
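The abstract's core idea, several classifier heads that share part of one trunk and are combined by per-head weights, can be illustrated with a minimal NumPy forward-pass sketch. This is not the authors' implementation: the layer sizes, the tap points, and the fixed weights `alphas` are all assumptions for illustration (the paper presumably learns or derives these weights via boosting).

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Shared trunk: two hidden layers whose computation is reused by the heads.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 16))

# Three output heads tapped at intermediate depths. In the paper these form
# sequential classifiers; here they are plain linear layers (an assumption).
heads = [rng.normal(size=(16, 3)) for _ in range(3)]

# Hypothetical per-head weights; a boosting procedure would set these from
# each head's training error. They are fixed here only for the sketch.
alphas = np.array([0.2, 0.3, 0.5])

def forward(x):
    h1 = relu(x @ W1)
    h2 = relu(h1 @ W2)
    taps = [h1, h2, h2]  # heads branch off the shared trunk, saving compute
    # Final prediction: weighted sum of the heads' logits.
    return sum(a * (t @ Wh) for a, t, Wh in zip(alphas, taps, heads))

x = rng.normal(size=(4, 8))   # batch of 4 feature vectors
out = forward(x)              # logits over 3 classes, shape (4, 3)
```

The compute saving comes from the `taps` list: every head reuses activations already produced by the shared trunk, so adding a head costs only one extra output layer rather than a full extra network.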
Authors
Topic
Citation
Hua, C., "Boosted Deep Neural Network with Weighted Output Layers," SAE Technical Paper 2017-01-1997, 2017, https://doi.org/10.4271/2017-01-1997.
Data Sets - Support Documents
Title | Description | Download |
---|---|---|
Unnamed Dataset 1 | | |