Open Source LLM Performance in Automating Embedded C Software Quality Improvements for Automotive

2026-01-0103

04/07/2025

Abstract
The study presented in this paper explored the potential of five open-source Large Language Models (LLMs) with parameter counts between 30 billion and 50 billion to automate improvements in code quality and developer productivity. The evaluated models (CodeLlama, Command-R, Deepseek R1, Nemotron, and QwQ) were assessed on their ability to refactor a large, complex automotive mechatronic C function. The assessment focused on adherence to a provided set of code quality standards and on successful compilation of the refactored function within its original code module. The evaluation also compared the impact of parameter count, hyperparameter tuning, model architecture, and fine-tuning. This comparison revealed that larger models generally outperformed smaller ones against the provided quality standards, and that hyperparameter tuning yielded a moderate improvement in performance. Model architecture and fine-tuning had less predictable effects, suggesting that further exploration is required. Furthermore, all models struggled to apply certain formatting rules from the provided standards while excelling at others, and they often produced code containing critical logical defects such as uninitialized variable use, excessive placeholders, and missing logic. This paper provides insights into the patterns and behaviors observed in the study, highlighting the strengths and weaknesses demonstrated by these open-source LLMs.
Citation
Struck, Daniel and Samanth Kumaraswamy, "Open Source LLM Performance in Automating Embedded C Software Quality Improvements for Automotive," SAE Technical Paper 2026-01-0103, 2025.
Additional Details
Publisher:
Published: Apr 7, 2025
Product Code: 2026-01-0103
Content Type: Technical Paper
Language: English