A Real-Time Intelligent Speed Optimization Planner Using Reinforcement Learning
Technical Paper
2021-01-0434
ISSN: 0148-7191, e-ISSN: 2688-3627
Event:
SAE WCX Digital Summit
Language:
English
Abstract
As connectivity and sensing technologies mature, automated vehicles can predict future driving situations and use this information to drive more energy-efficiently than human-driven vehicles. However, information beyond the limited connectivity and sensing range is difficult to predict and exploit, which limits the energy-saving potential of energy-efficient driving. We therefore combine a conventional speed optimization planner, developed in our previous work, with reinforcement learning to propose a real-time intelligent speed optimization planner for connected and automated vehicles. We briefly summarize the conventional speed optimization planner with limited information, based on closed-form energy-optimal solutions, and present the multiple parameters that determine its reference speed trajectories. We then use a deep reinforcement learning (DRL) algorithm, specifically deep Q-learning, to find a policy for adjusting these parameters in real time to dynamically changing situations, in order to realize the full potential of energy-efficient driving. Being model-free, the DRL algorithm can learn the optimal policy from the system's experience by iteratively interacting with different driving scenarios, without requiring any increase in the limited connectivity and sensing range. The training process for the parameter-adaptation policy exploits a high-fidelity simulation framework that simulates multiple vehicles with full powertrain models and the interactions between vehicles and their environment. We consider intersection-approaching scenarios with a single traffic light under different signal phase and timing (SPaT) setups. Results show that the learned policy enables the proposed intelligent speed optimization planner to adjust the parameters in a piecewise-constant manner, yielding additional energy savings without increasing total travel time compared to the conventional speed optimization planner.
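To illustrate the structure the abstract describes, the toy sketch below shows an agent learning a policy that applies piecewise-constant adjustments to a planner parameter. Everything here is hypothetical: the paper uses a deep Q-network trained in a high-fidelity vehicle simulator, whereas this sketch substitutes a tabular Q-learning agent, a single made-up speed-scaling parameter, a discretized state (distance-to-light bin and signal phase), and an invented reward. It only demonstrates the parameter-adaptation loop, not the authors' method.

```python
import random

# Hypothetical discrete actions: decrease / hold / increase the planner's
# speed-scaling parameter in piecewise-constant steps.
ACTIONS = [-0.1, 0.0, 0.1]

def toy_reward(scale, dist_bin, phase):
    # Invented reward (not from the paper): prefer a reduced target speed
    # when approaching a red light, and nominal speed on green.
    if phase == 'red':
        return -abs(scale - 0.8) - 0.01 * dist_bin
    return -abs(scale - 1.0)

def greedy(q, dist_bin, phase):
    # Index of the highest-valued action in the given discretized state.
    return max(range(len(ACTIONS)),
               key=lambda a: q.get((dist_bin, phase, a), 0.0))

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {}  # (dist_bin, phase, action index) -> Q-value
    for _ in range(episodes):
        scale = 1.0
        for dist_bin in range(5, 0, -1):       # approach the intersection
            phase = rng.choice(['red', 'green'])
            # Epsilon-greedy exploration over parameter adjustments.
            a = (rng.randrange(len(ACTIONS)) if rng.random() < eps
                 else greedy(q, dist_bin, phase))
            scale = min(1.2, max(0.6, scale + ACTIONS[a]))
            r = toy_reward(scale, dist_bin, phase)
            nxt = max(q.get((dist_bin - 1, phase, i), 0.0)
                      for i in range(len(ACTIONS)))
            old = q.get((dist_bin, phase, a), 0.0)
            # Standard Q-learning temporal-difference update.
            q[(dist_bin, phase, a)] = old + alpha * (r + gamma * nxt - old)
    return q

q = train()
# Greedy adjustment when far from a red light: the learned policy should
# favor reducing the speed-scaling parameter rather than increasing it.
best = greedy(q, 5, 'red')
print(ACTIONS[best])
```

In the paper, the tabular lookup is replaced by a deep Q-network so the policy generalizes over continuous states, and the toy reward is replaced by simulated energy consumption and travel time; the outer structure, a policy mapping driving context to piecewise-constant parameter adjustments, is the same.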
Citation
Lee, W., Han, J., Zhang, Y., Karbowski, D. et al., "A Real-Time Intelligent Speed Optimization Planner Using Reinforcement Learning," SAE Technical Paper 2021-01-0434, 2021, https://doi.org/10.4271/2021-01-0434.Data Sets - Support Documents
References
- Vahidi, A. and Sciarretta, A., "Energy Saving Potentials of Connected and Automated Vehicles," Transportation Research Part C: Emerging Technologies 95:822-843, Oct. 2018.
- Guanetti, J., Kim, Y., and Borrelli, F., "Control of Connected and Automated Vehicles: State of the Art and Future Challenges," Annual Reviews in Control 45:18-40, 2018.
- Yu, K., Yang, J., and Yamaguchi, D., "Model Predictive Control for Hybrid Vehicle Ecological Driving Using Traffic Signal and Road Slope Information," Control Theory and Technology 13(1):17-28, Feb. 2015.
- Dollar, R.A. and Vahidi, A., "Efficient and Collision-Free Anticipative Cruise Control in Randomly Mixed Strings," IEEE Transactions on Intelligent Vehicles 3(4):439-452, Dec. 2018.
- Kim, N., Lee, D., Zheng, C., Shin, C. et al., "Realization of PMP-Based Control for Hybrid Electric Vehicles in a Backward-Looking Simulation," International Journal of Automotive Technology 15(4):625-635, Jun. 2014.
- Lee, W., Jeoung, H., Park, D., and Kim, N., "An Adaptive Concept of PMP-Based Control for Saving Operating Costs of Extended-Range Electric Vehicles," IEEE Transactions on Vehicular Technology 68(12):11505-11512, Dec. 2019.
- Han, J., Vahidi, A., and Sciarretta, A., "Fundamentals of Energy Efficient Driving for Combustion Engine and Electric Vehicles: An Optimal Control Perspective," Automatica 103:558-572, May 2019.
- Malikopoulos, A.A., Cassandras, C.G., and Zhang, Y.J., "A Decentralized Energy-Optimal Control Framework for Connected Automated Vehicles at Signal-Free Intersections," Automatica 93:244-256, Jul. 2018.
- Han, J., Sciarretta, A., Ojeda, L.L., De Nunzio, G., and Thibault, L., "Safe- and Eco-Driving Control for Connected and Automated Electric Vehicles Using Analytical State-Constrained Optimal Solution," IEEE Transactions on Intelligent Vehicles 3(2):163-172, Jun. 2018.
- Lee, H., Song, C., Kim, N., and Cha, S.W., "Comparative Analysis of Energy Management Strategies for HEV: Dynamic Programming and Reinforcement Learning," IEEE Access 8:67112-67123, 2020.
- Lee, H., Kang, C., Park, Y., Kim, N., and Cha, S.W., "Online Data-Driven Energy Management of a Hybrid Electric Vehicle Using Model-Based Q-Learning," IEEE Access 8:84444-84454, 2020.
- Liu, X., Liu, Y., Chen, Y., and Hanzo, L., "Enhancing the Fuel-Economy of V2I-Assisted Autonomous Driving: A Reinforcement Learning Approach," IEEE Transactions on Vehicular Technology 69(8):8329-8342, Aug. 2020.
- Walraven, E., Spaan, M.T., and Bakker, B., "Traffic Flow Optimization: A Reinforcement Learning Approach," Engineering Applications of Artificial Intelligence 52:203-212, 2016.
- Xu, B., Rathod, D., Zhang, D., Yebi, A. et al., "Parametric Study on Reinforcement Learning Optimized Energy Management Strategy for a Hybrid Electric Vehicle," Applied Energy 259:114200, 2020.
- Han, J., Karbowski, D., and Kim, N., "Closed-Form Solutions for a Real-Time Energy-Optimal and Collision-Free Speed Planner with Limited Information," in 2020 American Control Conference (ACC), Denver, CO, USA, 268-275, 2020, doi:10.23919/ACC45564.2020.9147382.
- Kim, N., Karbowski, D., and Rousseau, A., "A Modeling Framework for Connectivity and Automation Co-simulation," SAE Technical Paper 2018-01-0607, Apr. 2018, https://doi.org/10.4271/2018-01-0607.
- Han, J., Karbowski, D., Kim, N., and Rousseau, A., "Human Driver Modeling Based on Analytical Optimal Solutions: Stopping Behaviors at the Intersections," in ASME 2019 Dynamic Systems and Control Conference, Park City, Utah, USA, Oct. 2019.
- Argonne National Laboratory, 2017, www.autonomie.net.
- Goodfellow, I., Bengio, Y., and Courville, A., Deep Learning (Cambridge, MA: MIT Press, 2016), http://www.deeplearningbook.org.
- Cashman, D., Patterson, G., Mosca, A., and Chang, R., "RNNbow: Visualizing Learning via Backpropagation Gradients in Recurrent Neural Networks," in Proc. Workshop on Visual Analytics for Deep Learning, 2017.
- Han, A., Lee, W., Karbowski, D., Rousseau, A., and Kim, N., "Fine-Tuning a Real-Time Speed Planner for Eco-Driving of Connected and Automated Vehicles," presented at the IEEE Vehicle Power and Propulsion Conference (VPPC), Gijón, Spain, Oct. 2020.