Vehicle Following Hybrid Control Algorithm Based on DRL and PID in Intelligent Network Environment
2022-01-7113
12/22/2022
- Content
- Deep reinforcement learning (DRL) has not yet been widely adopted in engineering practice because reinforcement learning relies on trial-and-error exploration, which is difficult to carry out safely in a real physical environment and impractical on real vehicles. By analyzing the motion state of the vehicle in car-following mode, a hybrid algorithm that combines traditional longitudinal motion control with DRL improves both the safety of RL in the real physical environment and the poor adaptability of the traditional longitudinal control algorithm. In this paper, the longitudinal motion of an unmanned vehicle is taken as the research object, and a PID algorithm is combined with the Deep Deterministic Policy Gradient (DDPG) algorithm to control the vehicle's longitudinal motion. The results show that the hybrid longitudinal control algorithm outperforms either a PID algorithm or a DDPG algorithm alone in vehicle-following control. The strategy establishes a relationship between longitudinal control and the states of both the ego vehicle and the front vehicle, and it accounts for the randomness of the front vehicle's motion during iterative learning, improving safety, comfort, and following performance. (An illustrative sketch of such a hybrid controller is given after the citation below.)
- Pages
- 11
- Citation
- Hu, B., Chen, J., Lin, Y., and Tan, S., "Vehicle Following Hybrid Control Algorithm Based on DRL and PID in Intelligent Network Environment," SAE Technical Paper 2022-01-7113, 2022, https://doi.org/10.4271/2022-01-7113.
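
As a rough illustration of the idea summarized in the abstract, the sketch below blends a PID spacing controller with a placeholder DDPG-style actor for longitudinal (car-following) control. The state layout, PID gains, blending weight `alpha`, and acceleration limits are assumptions for demonstration only, not the authors' implementation; the actor is a stub standing in for a trained DDPG policy.

```python
# Minimal hybrid PID + DDPG-style longitudinal controller sketch (assumptions only).

from dataclasses import dataclass


@dataclass
class PIDGains:
    kp: float = 0.8   # proportional gain (assumed)
    ki: float = 0.05  # integral gain (assumed)
    kd: float = 0.3   # derivative gain (assumed)


class PIDController:
    """PID on the gap error between actual and desired following distance."""

    def __init__(self, gains: PIDGains, dt: float):
        self.g = gains
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, gap_error: float) -> float:
        self.integral += gap_error * self.dt
        derivative = (gap_error - self.prev_error) / self.dt
        self.prev_error = gap_error
        return self.g.kp * gap_error + self.g.ki * self.integral + self.g.kd * derivative


def ddpg_actor(state: dict) -> float:
    """Placeholder for a trained DDPG actor.

    In the paper, the actor maps the ego- and front-vehicle states to a
    longitudinal command; here it returns 0.0 (no learned correction) so the
    sketch runs without a trained model.
    """
    return 0.0


def hybrid_command(state: dict, pid: PIDController, alpha: float = 0.5) -> float:
    """Blend the PID output with the DDPG action (blending rule is assumed)."""
    # Negative gap error (gap smaller than desired) pushes the command toward braking.
    gap_error = state["gap"] - state["desired_gap"]
    u_pid = pid.step(gap_error)
    u_rl = ddpg_actor(state)
    u = (1.0 - alpha) * u_pid + alpha * u_rl
    # Clip to an assumed comfortable acceleration range [-3, 2] m/s^2.
    return max(-3.0, min(2.0, u))


if __name__ == "__main__":
    pid = PIDController(PIDGains(), dt=0.1)
    state = {"gap": 18.0, "desired_gap": 20.0, "ego_speed": 15.0, "lead_speed": 14.0}
    print(f"longitudinal acceleration command: {hybrid_command(state, pid):.3f} m/s^2")
```

In this sketch the learned policy contributes a correction on top of a conventional PID law, so the controller degrades to plain PID when the actor output is zero; the actual interaction between the PID and DDPG components in the paper may differ.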