A Decision-Making Method for Connected Autonomous Driving Based on Reinforcement Learning
Technical Paper
2020-01-5154
ISSN: 0148-7191, e-ISSN: 2688-3627
Language: English
Abstract
With the development of Intelligent Vehicle Infrastructure Cooperative Systems (IVICS), decision-making for automated vehicles under connected environment conditions has attracted increasing attention. Reliability, efficiency, and generalization performance are the basic requirements for a vehicle decision-making system. Therefore, this paper proposes a decision-making method for connected autonomous driving based on the Wasserstein Generative Adversarial Nets-Deep Deterministic Policy Gradient (WGAIL-DDPG) algorithm. The key component of the reinforcement learning (RL) model, the reward function, is designed from the perspective of vehicle serviceability, covering safety, ride comfort, and handling stability. To reduce the complexity of the proposed model, an imitation learning strategy is introduced to improve the RL training process. Meanwhile, a model training strategy based on cloud computing effectively addresses the insufficient computing resources of the vehicle-mounted system. Test results show that the proposed method improves the efficiency of the RL training process, delivers reliable decision-making performance, and exhibits excellent generalization capability.
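The abstract describes a reward function built from three serviceability terms: safety, ride comfort, and handling stability. The paper's actual formulation and weights are not given here, so the following is a minimal illustrative sketch of how such a multi-objective reward might be composed; the signal choices (time-to-collision, longitudinal jerk, lateral acceleration), weights, and thresholds are all assumptions for illustration, not the authors' values.

```python
def driving_reward(ttc, jerk, lat_acc,
                   w_safe=1.0, w_comfort=0.5, w_stable=0.5,
                   ttc_min=2.0, jerk_max=3.0, lat_acc_max=4.0):
    """Illustrative multi-objective reward for an RL driving policy.

    ttc     : time-to-collision to the lead vehicle [s]     (safety)
    jerk    : longitudinal jerk magnitude [m/s^3]           (ride comfort)
    lat_acc : lateral acceleration magnitude [m/s^2]        (handling stability)

    Returns a scalar reward in [-(w_safe + w_comfort + w_stable), 0]:
    zero when all terms are within their comfortable/safe ranges,
    increasingly negative as any term violates its threshold.
    """
    # Safety: penalize time-to-collision below the minimum threshold.
    r_safe = -w_safe * max(0.0, (ttc_min - ttc) / ttc_min)
    # Ride comfort: penalize jerk, saturating at the discomfort limit.
    r_comfort = -w_comfort * min(1.0, jerk / jerk_max)
    # Handling stability: penalize high lateral acceleration, saturating.
    r_stable = -w_stable * min(1.0, lat_acc / lat_acc_max)
    return r_safe + r_comfort + r_stable
```

In a DDPG-style setup this scalar would be returned by the environment at each step; the imitation-learning stage the abstract mentions would then warm-start the policy from expert demonstrations before RL fine-tuning against this reward.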
Citation
Zhang, M., Wan, X., Lv, X., and Wu, Z., "A Decision-Making Method for Connected Autonomous Driving Based on Reinforcement Learning," SAE Technical Paper 2020-01-5154, 2020, https://doi.org/10.4271/2020-01-5154.