Optimization of Variable Approach Lanes Using DDPG Under Real-Time Traffic Conditions
2025-99-0115
To be published on 11/11/2025
- Content
- To better match lane allocation to fluctuating traffic demand at intersections, this study proposes a deep reinforcement learning method for optimizing variable approach lanes. We build on the DDPG algorithm and introduce a feature-weight adjustment mechanism that adapts in real time to key traffic indicators such as vehicle flow, average delay, and peak delay, improving the model's flexibility and its ability to handle diverse traffic conditions. To make the continuous output actions easier to map onto discrete lane functions, we revise the sigmoid function used for discretization, and we design the reward function to keep lane-function changes smooth and stable. We evaluate the method at a SUMO-simulated intersection, where it outperforms both fixed-lane strategies and the standard DDPG model: it reduces delay, shortens queue lengths, and increases intersection throughput, demonstrating its value in realistic settings. (A minimal sketch of the discretization and reward ideas follows the citation below.)
- Citation
- Zhang, W., and Zhang, F., "Optimization of Variable Approach Lanes Using DDPG Under Real-Time Traffic Conditions," SAE Technical Paper 2025-99-0115, 2025.
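The abstract names three mechanisms: real-time feature-weight adjustment, a revised sigmoid for action discretization, and a smoothness-oriented reward. The paper's exact formulas are not reproduced in the abstract, so the following is a minimal sketch under stated assumptions: a steepened sigmoid (the slope `k` is a hypothetical parameter) squashes each continuous actor output toward 0/1 before thresholding, delay weights shift toward whichever indicator is currently worse, and a fixed switch penalty discourages frequent lane-function changes. All function names (`squashed_sigmoid`, `discretize_action`, `update_feature_weights`, `weighted_reward`) and parameter values are illustrative, not the authors' implementation.

```python
import numpy as np

def squashed_sigmoid(x, k=6.0):
    """Steepened sigmoid: pushes continuous actor outputs toward the
    0/1 extremes before thresholding. The slope k is a hypothetical
    stand-in for the paper's revised sigmoid."""
    return 1.0 / (1.0 + np.exp(-k * np.asarray(x, dtype=float)))

def discretize_action(actor_output):
    """Map DDPG actor outputs (one per variable lane) to binary lane
    functions, e.g. 0 = through, 1 = left-turn."""
    return (squashed_sigmoid(actor_output) >= 0.5).astype(int)

def update_feature_weights(avg_delay, peak_delay, eps=1e-6):
    """Illustrative real-time weight adjustment: shift weight toward
    whichever delay indicator is currently worse, so the reward
    emphasizes the dominant source of congestion."""
    d = np.array([avg_delay, peak_delay], dtype=float)
    return d / (d.sum() + eps)

def weighted_reward(flow, avg_delay, peak_delay, switched,
                    flow_weight=1.0, switch_penalty=0.1):
    """Reward sketch: throughput is rewarded, delays are penalized
    with adaptive weights, and a small fixed penalty on lane-function
    switches keeps changes smooth, as the abstract describes."""
    w_avg, w_peak = update_feature_weights(avg_delay, peak_delay)
    return (flow_weight * flow
            - w_avg * avg_delay - w_peak * peak_delay
            - switch_penalty * float(switched))

# Toy usage: two variable lanes, raw actor outputs in [-1, 1].
if __name__ == "__main__":
    action = discretize_action([-0.3, 0.7])  # e.g. array([0, 1])
    r = weighted_reward(flow=42.0, avg_delay=18.5,
                        peak_delay=55.0, switched=True)
    print(action, round(r, 2))
```

A hard threshold after a steep sigmoid is one simple way to obtain discrete lane assignments while keeping the actor's output differentiable during training; in a real SUMO setup, the flow and delay indicators would typically be read each control step through SUMO's TraCI interface.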