Mathematical Modeling of Trust Calibration for Human–Automation Safety

2026-01-0530

April 7, 2026

Abstract
Trust calibration is vital for safe human–automation interaction but remains largely qualitative. This study develops quantitative frameworks that model trust as a function of automation reliability. Four progressive models (binary, linear, triangular, and logistic) formalize the calibrated trust zone, the region where human reliance aligns with system performance. The framework corrects major misconceptions: that trust is purely qualitative, that low-trust, low-reliability states are acceptable, and that overtrust and distrust pose equal risk. It establishes a minimum reliability threshold for meaningful trust and identifies distrust as the safer default in high-risk contexts. A case study of 32 AI applications plotted in the trust–reliability space supports the analysis, revealing, among other observations, a consistent distrust tendency in which reliability exceeds user confidence. By quantifying trust through reliability, the study reframes trust as a controllable safety variable, enabling predictive calibration and adaptive, trust-aware safety architectures for reliable human–AI collaboration.
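The four model shapes and the calibration zone named in the abstract can be sketched as simple functions of reliability. This is a minimal illustration only: the paper's actual equations are not given here, so the functional forms, the threshold `R_MIN`, the logistic steepness `K`, the triangular peak, and the calibration-band width are all assumed values, not the authors' parameters.

```python
import math

R_MIN = 0.5   # assumed minimum reliability threshold for meaningful trust
K = 10.0      # assumed steepness of the logistic model

def trust_binary(r: float) -> float:
    """Binary model: no trust below the threshold, full trust above it."""
    return 1.0 if r >= R_MIN else 0.0

def trust_linear(r: float) -> float:
    """Linear model: trust tracks reliability one-to-one."""
    return r

def trust_triangular(r: float) -> float:
    """Triangular model: trust peaks at an interior reliability value
    (assumed peak at r = 0.8) and falls off toward the extremes."""
    peak = 0.8
    return r / peak if r <= peak else (1.0 - r) / (1.0 - peak)

def trust_logistic(r: float) -> float:
    """Logistic model: smooth S-curve centered on the threshold."""
    return 1.0 / (1.0 + math.exp(-K * (r - R_MIN)))

def calibration_state(trust: float, reliability: float, band: float = 0.1) -> str:
    """Classify a (trust, reliability) point: 'calibrated' inside an assumed
    tolerance band, 'overtrust' when trust exceeds reliability, and
    'distrust' when trust falls short of reliability."""
    gap = trust - reliability
    if abs(gap) <= band:
        return "calibrated"
    return "overtrust" if gap > 0 else "distrust"
```

Under this sketch, a point where reliability far exceeds stated trust (e.g. trust 0.3 at reliability 0.9) classifies as distrust, the tendency the case study reports for the 32 applications.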
Citation
Wen, H. and Mounir, A., "Mathematical Modeling of Trust Calibration for Human–Automation Safety," WCX SAE World Congress Experience, Detroit, Michigan, United States, April 14, 2026, https://doi.org/10.4271/2026-01-0530.
Additional Details
Publisher
Published
April 7, 2026
Product Code
2026-01-0530
Content Type
Technical Paper
Language
English