Culture Clashes: LLM Support in the Engineering of Safety-Critical Systems

SAE Technical Paper 2026-01-0110

Published April 7, 2026

Abstract
The proven usefulness of large language models (LLMs) as tools for software development and the recent rapid increase in their capabilities have made it possible and attractive to extend their scope of application to almost all tasks in the engineering of complex and even safety-critical systems. While these tools promise substantial efficiency gains and improved engineering productivity, they remain prone to errors, and the generated artifacts may not meet the stringent quality requirements for safety-critical systems. In this paper, we systematically analyze potential applications of LLMs throughout the engineering lifecycle of safety-critical systems and identify associated risks as well as practical approaches to risk mitigation. We classify LLM-supported use cases according to LLM autonomy, impact, and artifact observability, and compare the corresponding mitigation strategies with established approaches used for traditional engineering automation. In addition, we examine the cultural and psychological aspects influencing trust in LLM-based engineering tools and the risks of both over-reliance and unwarranted rejection. Our analysis shows that LLMs can provide substantial benefits as engineering support tools, but they also represent a significant source of development risk if applied without appropriate safeguards. Based on these findings, we propose guidelines for responsibly using LLM-based tools in the engineering of safety-critical systems.
Citation
Thomas, C. and Wagner, M., "Culture Clashes: LLM Support in the Engineering of Safety-Critical Systems," WCX SAE World Congress Experience, Detroit, Michigan, United States, April 14, 2026, https://doi.org/10.4271/2026-01-0110.
Additional Details
Publisher: SAE International
Published: April 7, 2026
Product Code: 2026-01-0110
Content Type: Technical Paper
Language: English