Lane Keeping Assist for an Autonomous Vehicle Based on Deep Reinforcement Learning
ISSN: 0148-7191, e-ISSN: 2688-3627
Published April 14, 2020 by SAE International in United States
Lane keeping assist (LKA) is an autonomous driving technique that keeps a vehicle travelling along the desired lane centerline by adjusting the front steering angle. Reinforcement learning (RL) is a branch of machine learning in which an agent is not told how to act but instead learns from interaction with its environment, which frees the designer from hand-coding complex control policies. It has, however, not yet been successfully applied to autonomous driving. In this paper, two control strategies based on different deep reinforcement learning (DRL) algorithms are proposed for the lane keeping assist scenario: a deep Q-network (DQN) algorithm with a discrete action space and a deep deterministic policy gradient (DDPG) algorithm with a continuous action space. Deep neural networks representing the control policy are designed in MATLAB/Simulink, and the environment as well as the vehicle dynamics are also modelled in Simulink. By integrating the proposed control method with a vehicle dynamics model, the lane keeping assist simulation is performed. Experimental results demonstrate that the vehicle travels along the centerline of the path and that the controller reaches a steady state after a short time, validating the effectiveness of the proposed control method.
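The abstract contrasts a discrete-action agent (DQN) with a continuous-action agent (DDPG) for lane keeping. As an illustrative sketch only: the paper implements deep neural network policies in MATLAB/Simulink, but the core discrete-action idea, i.e. learning which steering angle drives the lateral offset toward zero from a distance-to-centerline reward, can be shown with a tabular Q-learning loop on a toy lateral model. All dynamics parameters, the reward shape, and the action set below are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Toy stand-in for the lane-keeping environment: the state is the vehicle's
# lateral offset from the centerline, the action is a discrete steering angle,
# and the reward penalizes distance from the centerline. These parameters are
# illustrative assumptions, not the paper's Simulink model.
rng = np.random.default_rng(0)

N_STATES = 21                  # discretized lateral offset over [-1 m, +1 m]
ACTIONS = [-0.05, 0.0, 0.05]   # discrete steering angles [rad]
DT, SPEED = 0.1, 10.0          # step time [s], longitudinal speed [m/s]

def step(offset, steer):
    """Simplified kinematic lateral update: steering shifts the offset."""
    new_offset = np.clip(offset + SPEED * steer * DT, -1.0, 1.0)
    reward = -abs(new_offset)   # closer to the centerline -> higher reward
    return new_offset, reward

def to_state(offset):
    """Map a continuous offset to a discrete state index."""
    return int(round((offset + 1.0) / 2.0 * (N_STATES - 1)))

# Tabular Q-learning (a DQN replaces this table with a neural network).
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.2

for episode in range(500):
    offset = rng.uniform(-1.0, 1.0)
    for _ in range(100):
        s = to_state(offset)
        # Epsilon-greedy exploration over the discrete steering actions.
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
        offset, r = step(offset, ACTIONS[a])
        s2 = to_state(offset)
        # Standard Q-learning temporal-difference update.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])

# The greedy policy should steer back toward the centerline: a negative
# steering angle when right of center, a positive one when left of center.
print(ACTIONS[int(np.argmax(Q[to_state(0.8)]))])
print(ACTIONS[int(np.argmax(Q[to_state(-0.8)]))])
```

A DDPG agent differs in that the actor network outputs a continuous steering angle directly rather than selecting from a fixed action set, which avoids the coarse steering quantization visible in this sketch.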
Citation: Wang, Q., Zhuang, W., Wang, L., and Ju, F., "Lane Keeping Assist for an Autonomous Vehicle Based on Deep Reinforcement Learning," SAE Technical Paper 2020-01-0728, 2020, https://doi.org/10.4271/2020-01-0728.