With the advancement of intelligent driving technology, smart vehicles must not only make accurate and safe driving decisions but also behave in a human-like manner to gain broader public acceptance. Developing vehicle behavior models with greater human-likeness has therefore become a significant industry focus. However, existing vehicle behavior models often struggle to balance human-likeness and interpretability. Although some researchers use inverse reinforcement learning (IRL) to model vehicle behavior, achieving both human-likeness and a degree of interpretability, two challenges persist: reward functions are difficult to design, and background vehicle models exhibit low human-likeness. This study addresses these issues in highway scenarios without on-ramps, focusing on car-following and lane-changing behaviors in the CitySim dataset. IRL is employed to build a vehicle behavior model with improved human-likeness, using a linear reward function to capture drivers' decision-making motives.
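As an illustration of this structure, the sketch below shows a linear reward R(s) = θᵀφ(s) whose weight vector θ an IRL procedure would recover from demonstrated trajectories. The features (speed deviation, time headway, lateral acceleration) and the weight values are hypothetical placeholders, not the study's actual feature set.

```python
import numpy as np

def reward_features(state):
    """Map a driving state to a feature vector phi(s).

    These features are illustrative placeholders, not the exact
    feature combination selected in the study.
    """
    speed_dev = abs(state["speed"] - state["desired_speed"])
    # Time headway to the lead vehicle; guard against near-zero speed.
    headway = state["gap"] / max(state["speed"], 0.1)
    lat_acc = abs(state["lateral_acceleration"])
    return np.array([speed_dev, headway, lat_acc])

def linear_reward(state, theta):
    """Linear reward R(s) = theta . phi(s); IRL fits theta so that
    demonstrated human trajectories score highly."""
    return float(theta @ reward_features(state))

# Example: evaluate a candidate weight vector on one state.
theta = np.array([-0.5, 0.3, -0.8])  # hypothetical weights
state = {"speed": 28.0, "desired_speed": 30.0,
         "gap": 45.0, "lateral_acceleration": 0.4}
print(linear_reward(state, theta))
```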
Building on prior research, the study further explores various feature combinations for the reward function and introduces new features. The final combination reduced planning errors by 12.6% on the training set and 14.4% on the test set compared with the baseline method. Additionally, the study enhances background vehicle modeling
methods based on the Intelligent Driver Model (IDM) and the Minimizing Overall Braking Induced by Lane changes (MOBIL) model by adding traffic-flow and patience correction terms. The improved background vehicle model reduced test-set errors by 4.3%, demonstrating greater human-likeness and making it better suited to simulation environments.
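For context, the sketch below gives the standard IDM acceleration law and the MOBIL incentive test, marking where traffic-flow and patience corrections could enter. The parameter values are common literature defaults, and the correction terms (k_flow, k_patience, density, waiting_time) are assumed illustrative forms, since the exact formulation of the study's correction terms is not specified here.

```python
import math

def idm_acceleration(v, v_lead, gap,
                     v0=33.0, T=1.5, a_max=1.4, b=2.0, s0=2.0, delta=4):
    """Standard IDM: free-flow term minus an interaction term based on
    the desired gap s*. Parameter values are common literature defaults."""
    s_star = s0 + v * T + v * (v - v_lead) / (2 * math.sqrt(a_max * b))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / max(gap, 0.1)) ** 2)

def mobil_lane_change(acc_gain_ego, dacc_new_follower, dacc_old_follower,
                      new_follower_acc,
                      politeness=0.3, base_threshold=0.1, b_safe=4.0,
                      density=0.0, waiting_time=0.0,
                      k_flow=0.05, k_patience=0.02):
    """MOBIL incentive test with hypothetical correction terms.

    The safety and incentive criteria follow standard MOBIL; the
    traffic-flow (k_flow * density) and patience (k_patience *
    waiting_time) terms are assumed forms, not the study's exact ones.
    """
    # Safety criterion: the new follower must not brake harder than b_safe.
    if new_follower_acc < -b_safe:
        return False
    # Assumed corrections: denser traffic raises the effective threshold,
    # while a long wait behind a slow leader lowers it.
    threshold = base_threshold + k_flow * density - k_patience * waiting_time
    incentive = acc_gain_ego + politeness * (dacc_new_follower + dacc_old_follower)
    return incentive > threshold

# Example: one car-following step and one lane-change decision.
a = idm_acceleration(v=25.0, v_lead=22.0, gap=30.0)
change = mobil_lane_change(acc_gain_ego=0.6, dacc_new_follower=-0.2,
                           dacc_old_follower=0.1, new_follower_acc=-1.0,
                           density=0.08, waiting_time=12.0)
print(a, change)
```

Under this assumed form, denser traffic makes lane changes more conservative while prolonged following of a slow leader makes them more likely, which is one plausible way to encode flow-dependent caution and driver impatience in a simulation environment.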