A Decentralized Multi-agent Energy Management Strategy Based on a Look-Ahead Reinforcement Learning Approach

Journal Article
14-11-02-0012
ISSN: 2691-3747, e-ISSN: 2691-3755
Published November 05, 2021 by SAE International in United States
Citation: Khalatbarisoltani, A., Kandidayeni, M., Boulon, L., and Hu, X., "A Decentralized Multi-agent Energy Management Strategy Based on a Look-Ahead Reinforcement Learning Approach," SAE Int. J. Elec. Veh. 11(2):151-164, 2022, https://doi.org/10.4271/14-11-02-0012.
Language: English

Abstract:

An energy management strategy (EMS) plays an essential role in improving the efficiency and lifetime of the powertrain components in a hybrid fuel cell vehicle (HFCV). The EMS of an intelligent HFCV relies on advanced data-driven techniques to efficiently distribute the power flow among power sources with heterogeneous energetic characteristics. Decentralized EMSs provide higher modularity (plug-and-play) and reliability than centralized data-driven strategies. Modularity is the property that allows new components to be integrated into a powertrain system without reconfiguring the overall controller. Hence, this article puts forward a decentralized reinforcement learning (Dec-RL) framework for designing an EMS for a heavy-duty HFCV. The studied powertrain is composed of two parallel fuel cell systems (FCSs) and a battery pack. The contribution of the suggested multi-agent approach lies in the development of a fully decentralized learning strategy composed of several connected local modules. The performance of the proposed approach is investigated through several simulations and experimental tests. The results indicate the advantage of the established Dec-RL control scheme in terms of convergence speed and optimization performance.
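To make the decentralized multi-agent idea concrete, the sketch below shows one possible way such an architecture could be structured: each FCS is handled by its own local learning module that selects a discrete power setpoint from a shared demand signal, while the battery covers the residual power. This is a minimal illustrative example only; the agent design, discretization, reward weights, and dynamics are assumptions for demonstration and do not reproduce the authors' Dec-RL method.

```python
# Minimal sketch of a decentralized multi-agent EMS (hypothetical setting):
# two FCS agents learn independently with tabular Q-learning, and the battery
# absorbs the mismatch between demand and total FCS power. All parameters and
# the cost function are illustrative assumptions, not the paper's implementation.
import random

POWER_LEVELS = [0.0, 5.0, 10.0, 15.0, 20.0]   # candidate FCS power setpoints [kW]
DEMAND_BINS = [10, 20, 30, 40]                # coarse discretization of demand [kW]
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1        # learning rate, discount, exploration


def bin_demand(p_dem):
    """Map a continuous power demand to the nearest discrete state index."""
    return min(range(len(DEMAND_BINS)), key=lambda i: abs(DEMAND_BINS[i] - p_dem))


class FCSAgent:
    """Local module: learns its own power contribution from the shared demand state."""

    def __init__(self):
        self.q = {(s, a): 0.0
                  for s in range(len(DEMAND_BINS))
                  for a in range(len(POWER_LEVELS))}

    def act(self, state):
        # Epsilon-greedy action selection over discrete power levels.
        if random.random() < EPSILON:
            return random.randrange(len(POWER_LEVELS))
        return max(range(len(POWER_LEVELS)), key=lambda a: self.q[(state, a)])

    def update(self, s, a, r, s_next):
        best_next = max(self.q[(s_next, b)] for b in range(len(POWER_LEVELS)))
        self.q[(s, a)] += ALPHA * (r + GAMMA * best_next - self.q[(s, a)])


def reward(p_fcs1, p_fcs2, p_dem):
    """Hypothetical shared cost: quadratic FCS usage term plus battery stress term."""
    p_batt = p_dem - (p_fcs1 + p_fcs2)          # battery covers the mismatch
    return -(0.02 * (p_fcs1 ** 2 + p_fcs2 ** 2) + 0.05 * p_batt ** 2)


agents = [FCSAgent(), FCSAgent()]               # one local module per FCS
demand_profile = [random.choice(DEMAND_BINS) for _ in range(5000)]

for t in range(len(demand_profile) - 1):
    s, s_next = bin_demand(demand_profile[t]), bin_demand(demand_profile[t + 1])
    actions = [agent.act(s) for agent in agents]  # each module decides locally
    p1, p2 = (POWER_LEVELS[a] for a in actions)
    r = reward(p1, p2, demand_profile[t])         # shared reward couples the modules
    for agent, a in zip(agents, actions):
        agent.update(s, a, r, s_next)
```

Because each agent keeps only its own value table and receives just the shared demand state and reward, a new power source could, in principle, be added as another local module without retraining a monolithic centralized policy, which is the plug-and-play property the abstract refers to.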