Real-time Parameter Optimization for Eco-driving Control in Connected and Automated Vehicles Using Reinforcement Learning

2026-01-0041

04/07/2025

Abstract
This paper introduces a novel methodology to enhance the energy efficiency of eco-driving controllers in Connected and Automated Vehicles (CAVs) by leveraging Reinforcement Learning (RL) techniques for real-time parameter optimization. Traditional eco-driving strategies rely on fixed control parameters, which limit adaptability across diverse traffic and road conditions. To address this, we apply continuous action space RL algorithms, specifically Deep Deterministic Policy Gradient (DDPG) and Proximal Policy Optimization (PPO), to dynamically tune four key parameters within a model predictive control framework that is grounded in Pontryagin’s Maximum Principle (PMP). These parameters influence acceleration, braking, cruising, and intersection-approach behaviors, making them critical for achieving optimal eco-driving performance. Our study employs Argonne National Laboratory’s RoadRunner simulator, a Simulink-based environment designed for high-fidelity CAV analysis, incorporating realistic traffic signals, road gradients, and vehicle interactions. RL agents are trained to interpret vehicle states, road attributes, and traffic light information to adjust control parameters in real time. This integration enables the controller to anticipate and respond to dynamic driving scenarios, thereby improving both energy efficiency and operational robustness. Simulation experiments across multiple driving scenarios demonstrate that the RL-enhanced eco-driving controller achieves substantial energy savings without compromising travel time. On average, our approach surpasses a baseline eco-driving controller without RL by 12% and outperforms a high-fidelity human driver model by 24.2% in terms of energy consumption reduction. These results highlight the potential of continuous action space RL to advance real-time eco-driving control in CAVs. Overall, this work provides a pathway toward more intelligent, adaptive, and sustainable vehicle control systems that can accelerate the deployment of energy-efficient mobility solutions.
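To make the parameter-tuning loop concrete, the sketch below shows how a continuous action space agent (PPO from Stable-Baselines3; DDPG would be used analogously) could be trained to output four controller parameters from vehicle, road, and traffic-signal observations. This is an illustrative sketch only, not the paper's implementation: the environment uses placeholder dynamics and a dummy reward in place of the RoadRunner/Simulink co-simulation, and the state features, parameter bounds, and reward terms are assumptions.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class EcoDrivingParamEnv(gym.Env):
    """Toy environment: the agent selects four eco-driving controller
    parameters each step from vehicle/road/signal observations.
    The study wraps the RoadRunner simulator; here a random placeholder
    stands in for vehicle dynamics and energy accounting."""

    def __init__(self):
        super().__init__()
        # Observation: e.g. speed, road grade, distance to next signal,
        # signal phase/timing (illustrative, not the paper's exact state).
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(6,), dtype=np.float32)
        # Action: four continuous controller parameters, normalized to [-1, 1]
        # and rescaled inside the PMP-based controller (bounds are hypothetical).
        self.action_space = spaces.Box(
            low=-1.0, high=1.0, shape=(4,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self._t = 0
        obs = self.np_random.standard_normal(6).astype(np.float32)
        return obs, {}

    def step(self, action):
        self._t += 1
        # Placeholder: a real implementation would pass `action` to the
        # eco-driving controller, advance the simulator one step, and
        # compute reward from energy-consumption and travel-time terms.
        obs = self.np_random.standard_normal(6).astype(np.float32)
        reward = -float(np.sum(np.square(action)))  # dummy energy penalty
        terminated = self._t >= 200
        return obs, reward, terminated, False, {}


if __name__ == "__main__":
    from stable_baselines3 import PPO  # DDPG is available in the same library

    env = EcoDrivingParamEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=10_000)

    obs, _ = env.reset()
    params, _ = model.predict(obs, deterministic=True)
    print("tuned controller parameters:", params)
```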
Citation
Zhang, Yaozhong et al., "Real-time Parameter Optimization for Eco-driving Control in Connected and Automated Vehicles Using Reinforcement Learning," SAE Technical Paper 2026-01-0041, 2025.
Additional Details
Publisher: SAE International
Published: Apr 7, 2025
Product Code: 2026-01-0041
Content Type: Technical Paper
Language: English