Culture Clashes: LLM Support in the Engineering of Safety-Critical Systems

2026-01-0110

To be published on April 7, 2026

Authors
Carsten Thomas, Michael Wagner
Abstract
The proven usefulness of large language models (LLMs) as software development tools and the recent rapid growth of their capabilities have made it possible and attractive to extend their scope of application to almost all tasks in the engineering of complex and even safety-critical systems. However economically attractive their application may be, LLMs are prone to errors, and the engineering artifacts they generate may not meet the stringent quality requirements of safety-critical systems. In this paper, we systematically analyze the potential uses of LLMs in the engineering lifecycle of safety-critical systems and identify the associated risks and approaches to mitigating them. We classify these uses and compare them with approaches to mitigating the risks of traditional design automation. In addition, we address the cultural and psychological aspects of the trust placed in LLMs as engineering tools and of the reliance on their results. Our analysis shows that, despite their current popularity as engineering support tools, LLMs also represent a significant source of error and a development risk when used without appropriate engineering-culture safeguards. On this basis, we propose requirements and guidelines for the use of LLMs as support tools in the engineering of safety-critical systems.
Citation
Thomas, Carsten, and Michael Wagner, "Culture Clashes: LLM Support in the Engineering of Safety-Critical Systems," SAE Technical Paper 2026-01-0110, 2026.
Additional Details
Publisher: SAE International
Published: To be published on Apr 7, 2026
Product Code: 2026-01-0110
Content Type: Technical Paper
Language: English