A Study of Using a Reinforcement Learning Method to Improve Fuel Consumption of a Connected Vehicle with Signal Phase and Timing Data

2020-01-0888

04/14/2020

Event
WCX SAE World Congress Experience

Authors
Phan, A., and Yoon, H.

Abstract
Connected and automated vehicles (CAVs) promise to reshape two facets of the mobility industry: the transportation experience and the driving experience. The connected features of a vehicle use communication protocols to provide awareness of the surrounding world, while the automated features use technology to minimize driver dependency. As a subset of connected technologies, vehicle-to-infrastructure (V2I) technologies provide vehicles with real-time traffic light information, known as Signal Phase and Timing (SPaT) data. In this paper, vehicle and SPaT data are combined with a reinforcement learning (RL) method in an effort to minimize the vehicle's energy consumption. Specifically, this paper explores the implementation of the deep deterministic policy gradient (DDPG) algorithm. As an off-policy approach, DDPG estimates the best available action-value for each state independently of the action the behavior policy actually performed. In this research, SPaT data collected from dedicated short-range communication (DSRC) hardware installed at 16 real traffic lights is used on a simulated road modeled after a road in Tuscaloosa, Alabama. The vehicle is trained using DDPG with the SPaT data, which then determines the optimal action to take in order to minimize energy consumption at each traffic light.
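The off-policy DDPG update the abstract refers to can be sketched as follows. This is a minimal illustrative sketch using linear function approximators and hypothetical state/action dimensions (e.g. speed and distance to the signal as state, commanded acceleration as action); the paper's actual network architectures, reward design, and SPaT-based state encoding are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions and hyperparameters, chosen only for illustration.
STATE_DIM = 4    # e.g. speed, distance to light, phase, time to change (assumed)
ACTION_DIM = 1   # e.g. commanded acceleration (assumed)
GAMMA, TAU, LR = 0.99, 0.01, 1e-3

# Linear "networks": actor maps state -> action, critic maps (state, action) -> Q.
actor_w = rng.normal(scale=0.1, size=(STATE_DIM, ACTION_DIM))
critic_w = rng.normal(scale=0.1, size=(STATE_DIM + ACTION_DIM,))
actor_target = actor_w.copy()     # slowly-tracking target copies
critic_target = critic_w.copy()

def act(w, s):
    return np.tanh(s @ w)                         # bounded continuous action

def q_value(w, s, a):
    return np.concatenate([s, a], axis=-1) @ w    # scalar Q per sample

def ddpg_update(batch):
    """One gradient step on a batch of off-policy transitions (s, a, r, s')."""
    global actor_w, critic_w
    s, a, r, s2 = batch
    # Critic target: r + gamma * Q_target(s', actor_target(s')).
    # Off-policy: it does not depend on the action actually taken at s'.
    y = r + GAMMA * q_value(critic_target, s2, act(actor_target, s2))
    # Critic step: reduce squared TD error (exact gradient for a linear critic).
    q = q_value(critic_w, s, a)
    feats = np.concatenate([s, a], axis=-1)
    critic_w += LR * feats.T @ (y - q) / len(s)
    # Actor step: ascend Q(s, actor(s)) via the chain rule through tanh.
    a_pi = act(actor_w, s)
    dq_da = critic_w[STATE_DIM:]                  # dQ/da for the linear critic
    actor_w += LR * s.T @ ((1.0 - a_pi ** 2) * dq_da) / len(s)
    # Polyak (soft) updates keep the targets slowly tracking the online weights.
    actor_target[:] = TAU * actor_w + (1 - TAU) * actor_target
    critic_target[:] = TAU * critic_w + (1 - TAU) * critic_target

# One update on a random batch; in practice these would be replay-buffer samples.
batch = (rng.normal(size=(32, STATE_DIM)), rng.normal(size=(32, ACTION_DIM)),
         rng.normal(size=(32,)), rng.normal(size=(32, STATE_DIM)))
ddpg_update(batch)
```

Because the critic target is built from the target actor's action rather than the logged action, transitions gathered under any past policy remain usable, which is what makes the approach off-policy.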
Details
DOI
https://doi.org/10.4271/2020-01-0888
Pages
6
Citation
Phan, A., and Yoon, H., "A Study of Using a Reinforcement Learning Method to Improve Fuel Consumption of a Connected Vehicle with Signal Phase and Timing Data," SAE Technical Paper 2020-01-0888, 2020, https://doi.org/10.4271/2020-01-0888.
Additional Details
Publisher
SAE International
Content Type
Technical Paper
Language
English