An Improved Dueling Double Deep Q Network Algorithm and Its Application to the Optimized Path Planning for Unmanned Ground Vehicle
2023-01-7065
12/20/2023
- Content
- The traditional Double Deep Q-Network (DDQN) algorithm suffers from slow convergence and instability in complex environments. It is also prone to becoming trapped in local optima and may fail to discover the optimal strategy, so an Unmanned Ground Vehicle (UGV) guided by it cannot find the optimal path. To address these issues, this study presents an Improved Dueling Double Deep Q Network (ID3QN) algorithm, which adopts a dynamic ε-greedy strategy, prioritized experience replay (PER), and a Dueling DQN structure. The dynamic ε-greedy strategy allows the UGV to balance insufficient exploration against over-exploitation; prioritized experience replay samples high-priority experience examples from the replay buffer; and the Dueling DQN structure effectively manages the relationship between state values and advantage values. Experimental results show that the ID3QN method outperforms the DDQN approach in terms of stability and convergence rate, and obtains a better path in UGV path planning.
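- Two of the components named in the abstract can be illustrated compactly. The sketch below is an assumption-based illustration, not the paper's implementation: the decay constants in `dynamic_epsilon` are hypothetical placeholders, and `dueling_q` shows the standard dueling aggregation Q(s, a) = V(s) + (A(s, a) − mean_a A(s, a)) that a Dueling DQN head commonly uses.

```python
import math

# Hypothetical dynamic epsilon-greedy schedule: epsilon decays exponentially
# from eps_start toward eps_end as training steps accumulate, so the agent
# explores heavily early on and shifts toward exploitation later.
# (eps_start, eps_end, decay are illustrative values, not from the paper.)
def dynamic_epsilon(step, eps_start=1.0, eps_end=0.05, decay=500.0):
    return eps_end + (eps_start - eps_end) * math.exp(-step / decay)

# Standard dueling aggregation: the state value V(s) is combined with
# mean-centered advantages A(s, a), separating "how good is this state"
# from "how much better is each action than average".
def dueling_q(value, advantages):
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]
```

  Mean-centering the advantages is the usual identifiability fix: without it, a constant could shift freely between V and A while leaving Q unchanged.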
- Pages
- 7
- Citation
- He, Z., Pang, H., Bai, Z., Zheng, L. et al., "An Improved Dueling Double Deep Q Network Algorithm and Its Application to the Optimized Path Planning for Unmanned Ground Vehicle," SAE Technical Paper 2023-01-7065, 2023, https://doi.org/10.4271/2023-01-7065.