Mathematical Modeling of Trust Calibration for Human–Automation Safety

2026-01-0530

April 7, 2025

Authors: He Wen, Adil Mounir

Abstract
Trust between humans and automated vehicles is increasingly recognized as a pivotal factor in achieving safety in advanced vehicles and mobility environments. Poorly calibrated trust—either over-trust leading to complacency or under-trust leading to disengagement—has been identified in numerous incidents involving advanced driver assistance systems and autonomous functions. Yet, most current approaches to managing trust rely on surveys, heuristics, or qualitative assessments, providing little predictive capability for safety engineers and system designers. This paper presents a formal mathematical framework for trust calibration, modeling trust as a dynamic, evolving construct grounded in probabilistic reasoning. We represent trust as a Beta-distributed belief that updates over time through Bayesian inference, with parameters adjusted by observed successes and failures, contextual cues, and system feedback signals. The model also incorporates forgetting and drift functions, allowing trust to degrade or recover realistically in response to uncertainty and changing conditions. Together, these features provide a predictive and interpretable mechanism for quantifying trust trajectories during human–automation interaction. To demonstrate feasibility, the model is applied to simulated driver–automation interaction scenarios in which takeover readiness, compliance with system recommendations, and reliance behaviors are measured. Results show that the framework captures the nonlinear evolution of trust, successfully predicting both the escalation of over-trust after repeated positive outcomes and the erosion of trust after failures or warning cues. Furthermore, thresholds can be identified where interventions—such as adaptive alerts or transparency enhancements—should be triggered to prevent unsafe reliance patterns. 
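The conjugate Beta update with forgetting described above can be written out as follows. This is a sketch under stated assumptions: the forgetting factor λ and the binary outcome variable s_t are illustrative symbols, not notation taken from the paper.

```latex
% Trust as a Beta-distributed belief over automation reliability
T_t \sim \mathrm{Beta}(\alpha_t, \beta_t), \qquad
\mathbb{E}[T_t] = \frac{\alpha_t}{\alpha_t + \beta_t}

% Conjugate update after observing outcome s_t \in \{0, 1\}
% (s_t = 1 success, s_t = 0 failure), with forgetting factor 0 < \lambda \le 1
\alpha_{t+1} = \lambda\,\alpha_t + s_t, \qquad
\beta_{t+1} = \lambda\,\beta_t + (1 - s_t)
```

Discounting the pseudo-counts by λ before each update lets old evidence fade, so trust can drift back toward uncertainty rather than saturating, which is one simple way to realize the degradation and recovery behavior the abstract describes.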
By advancing trust calibration from descriptive characterization to predictive modeling, this work offers a novel pathway to integrate trust management into advanced safety technologies. Embedding mathematically grounded trust models into vehicle systems holds promise for safer, more resilient human–automation partnerships and provides actionable tools for engineers designing next-generation mobility solutions.
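The dynamics sketched in the abstract can be illustrated with a minimal Beta-Bernoulli belief with exponential forgetting. This is an assumption-laden sketch, not the paper's actual model: the class name, parameter names, and the discounted count update are all hypothetical.

```python
class BetaTrustModel:
    """Beta-Bernoulli trust belief with exponential forgetting.

    Illustrative sketch only: the discounted update rule and default
    parameters are assumptions, not the paper's formulation.
    """

    def __init__(self, alpha: float = 1.0, beta: float = 1.0,
                 forgetting: float = 0.95):
        self.alpha = alpha            # pseudo-count of automation successes
        self.beta = beta              # pseudo-count of automation failures
        self.forgetting = forgetting  # discount factor: old evidence fades

    def observe(self, success: bool) -> None:
        # Discount past evidence so trust can drift, then apply the
        # standard conjugate Beta-Bernoulli count update.
        self.alpha = self.forgetting * self.alpha + (1.0 if success else 0.0)
        self.beta = self.forgetting * self.beta + (0.0 if success else 1.0)

    @property
    def trust(self) -> float:
        # Posterior mean of the Beta belief: expected reliability.
        return self.alpha / (self.alpha + self.beta)


model = BetaTrustModel()
for _ in range(10):
    model.observe(True)               # repeated positive outcomes
print(f"after 10 successes: {model.trust:.3f}")
model.observe(False)                  # a single observed failure
print(f"after one failure:  {model.trust:.3f}")
```

Running the example shows the two qualitative behaviors the abstract highlights: trust escalates nonlinearly under repeated successes and drops sharply after a single failure. A designer could trigger an intervention (e.g., an adaptive alert) whenever the posterior mean crosses a chosen over-trust threshold.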
Citation
Wen, He and Adil Mounir, "Mathematical Modeling of Trust Calibration for Human–Automation Safety," SAE Technical Paper 2026-01-0530, 2025.
Additional Details
Publisher: SAE International
Content Type: Technical Paper
Language: English