Development of an Autonomous Vehicle Control Strategy Using a Single Camera and Deep Neural Networks

2018-01-0035

04/03/2018

Event
WCX World Congress Experience
Abstract
Autonomous vehicle development has benefited from sanctioned competitions dating back to the original 2004 DARPA Grand Challenge. Since these competitions, fully autonomous vehicles have moved much closer to significant real-world use, with the majority of research focused on reliability, safety, and cost reduction. Our research details the challenges experienced at the 2017 Self Racing Cars event, where a team of international Udacity students worked together over a six-week period, from team selection to race day. The team’s goal was to provide real-time vehicle control of steering, braking, and throttle through an end-to-end deep neural network. Multiple architectures were tested and used, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs). We began our work by modifying a Udacity driving simulator to collect data and develop training models, which we implemented and trained on a laptop GPU. Then, in the two days between car delivery and the start of the competition, a customized neural network using Keras and TensorFlow was developed. The deep learning algorithm predicted car steering angles using a single front-facing camera. Training and deployment on the vehicle were completed using two GTX 1070s, since a cloud GPU computing instance was neither available nor feasible. Using the proposed methods and working within the competition’s strict requirements, we completed several semi-autonomous laps and the team remained competitive. The results of the competition indicated that autonomous vehicle command and control can be achieved in a limited form using a single camera and a short engineering development timeline. Because this approach lacks robustness and reliability, a semantic segmentation network was also developed, using feature extraction from the YOLOv2 network and the CamVid dataset with a correction for the unbalanced occurrence of the different classes. Currently, 31 classes can be reliably detected and classified, allowing for a more complex and robust decision-making architecture.
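
To make the end-to-end approach concrete, below is a minimal sketch of a steering-angle regression model in Keras/TensorFlow. The abstract states only that a single front-facing camera image is mapped directly to a steering angle; the cropping values and layer sizes here are assumptions (loosely following NVIDIA's PilotNet), not the authors' published architecture.

```python
# Hypothetical sketch of an end-to-end steering model in Keras/TensorFlow.
# Layer sizes and crop values are illustrative assumptions, not the paper's design.
from tensorflow.keras import layers, models

def build_steering_model(input_shape=(160, 320, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Normalize pixel values to roughly [-0.5, 0.5]
        layers.Lambda(lambda x: x / 255.0 - 0.5),
        # Crop sky and hood regions (assumed crop values)
        layers.Cropping2D(cropping=((60, 20), (0, 0))),
        layers.Conv2D(24, (5, 5), strides=(2, 2), activation='relu'),
        layers.Conv2D(36, (5, 5), strides=(2, 2), activation='relu'),
        layers.Conv2D(48, (5, 5), strides=(2, 2), activation='relu'),
        layers.Conv2D(64, (3, 3), activation='relu'),
        layers.Conv2D(64, (3, 3), activation='relu'),
        layers.Flatten(),
        layers.Dense(100, activation='relu'),
        layers.Dense(50, activation='relu'),
        layers.Dense(10, activation='relu'),
        layers.Dense(1)  # single output: predicted steering angle
    ])
    model.compile(optimizer='adam', loss='mse')
    return model
```

The abstract also mentions a correction for the unbalanced occurrence of classes in the CamVid segmentation data but does not state its form. One common choice, shown here purely as an assumption, is median frequency balancing, which weights each class by the median class frequency divided by that class's own frequency.

```python
# Hypothetical class-weight correction for unbalanced segmentation classes
# using median frequency balancing; one common scheme, not necessarily the paper's.
import numpy as np

def median_frequency_weights(label_images, num_classes=31):
    """Compute per-class weights as median(freq) / freq(class)."""
    pixel_counts = np.zeros(num_classes)   # pixels labeled with each class
    image_pixels = np.zeros(num_classes)   # total pixels in images containing the class
    for labels in label_images:            # each `labels` is an HxW array of class ids
        for c in np.unique(labels):
            pixel_counts[c] += np.sum(labels == c)
            image_pixels[c] += labels.size
    freq = pixel_counts / np.maximum(image_pixels, 1)
    median = np.median(freq[freq > 0])
    # Rare classes get weights above 1, frequent classes below 1
    return np.where(freq > 0, median / np.maximum(freq, 1e-12), 0.0)
```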
Details
DOI
https://doi.org/10.4271/2018-01-0035
Pages
8
Citation
Navarro, A., Joerdening, J., Khalil, R., Brown, A. et al., "Development of an Autonomous Vehicle Control Strategy Using a Single Camera and Deep Neural Networks," SAE Technical Paper 2018-01-0035, 2018, https://doi.org/10.4271/2018-01-0035.
Additional Details
Publisher
SAE International
Published
Apr 3, 2018
Product Code
2018-01-0035
Content Type
Technical Paper
Language
English