Reward Function Design via Human Knowledge Graph and Inverse Reinforcement Learning for Intelligent Driving
Technical Paper
2021-01-0180
ISSN: 0148-7191, e-ISSN: 2688-3627
This content contains downloadable datasets
Annotation ability available
Event: SAE WCX Digital Summit
Language: English
Abstract
Motivated by the application of artificial intelligence technology to the automobile industry, reinforcement learning is becoming increasingly popular in the intelligent driving research community. The reward function is one of the critical factors affecting reinforcement learning, and its design principles depend heavily on the characteristics of the agent. The agent studied in this paper performs perception, decision-making, and motion control, with the aim of assisting or substituting for human drivers in the near future. Accordingly, this paper analyzes the characteristics of skilled human driving behavior based on the six-layer model of driving scenarios and encodes them as a human knowledge graph. Furthermore, for highway pilot driving, expert demonstration data is created, and the reward function is learned via inverse reinforcement learning. The reward function design method proposed in this paper has been verified in the Unity ML-Agents environment. The results show that, compared with the traditional reward function design method, the driving policy trained with the newly designed reward function better meets human driving expectations.
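The abstract describes learning a reward function from expert demonstrations via inverse reinforcement learning (IRL). The paper itself is behind the access wall, so the algorithm it uses is not shown here; as an illustration only, the sketch below implements a linear maximum-entropy-style IRL update, where reward weights are pushed toward the expert's feature expectations and away from the current policy's. The driving-state features named in the comments (lane-center offset, speed deviation, headway gap) are hypothetical examples, not taken from the paper.

```python
import numpy as np

# Hypothetical driving-state features (illustrative, not from the paper):
# e.g. [lane-center offset, speed deviation, headway gap], one row per timestep.

def feature_expectations(trajectories):
    """Average feature vector over all states in a set of trajectories."""
    return np.mean(np.vstack(trajectories), axis=0)

def irl_reward_weights(expert_trajs, policy_trajs, lr=0.1, iters=100):
    """Gradient-ascent sketch of linear IRL: move the reward weights
    toward the expert's feature expectations and away from the current
    policy's. In practice the policy would be re-optimized against the
    updated reward each iteration; here it is held fixed for brevity."""
    mu_expert = feature_expectations(expert_trajs)
    w = np.zeros_like(mu_expert)
    for _ in range(iters):
        mu_policy = feature_expectations(policy_trajs)
        w += lr * (mu_expert - mu_policy)
        w /= max(np.linalg.norm(w), 1e-8)  # keep weights bounded
    return w

def reward(state_features, w):
    """Learned linear reward for a single state."""
    return float(np.dot(w, state_features))
```

Under this sketch, states whose features resemble the expert's receive higher reward, which is then used as the training signal for the reinforcement-learning agent (in this paper's case, inside Unity ML-Agents).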
Citation
Guo, R., Hong, Z., and Xue, X., "Reward Function Design via Human Knowledge Graph and Inverse Reinforcement Learning for Intelligent Driving," SAE Technical Paper 2021-01-0180, 2021, https://doi.org/10.4271/2021-01-0180.Data Sets - Support Documents