In the pursuit of advancing autonomous vehicles (AVs), data-driven algorithms
have become pivotal in replacing human perception and decision-making. While
deep neural networks (DNNs) hold promise for perception tasks, the potential for
catastrophic consequences due to algorithmic flaws is concerning. A well-known
incident in 2016, in which a Tesla Autopilot system misidentified a white truck as a
cloud, underscores these risks and security vulnerabilities. In this article, we
present a novel threat analysis and risk assessment (TARA) of AV data
storage, delving into potential threats and damage scenarios. Specifically, we
focus on DNN parameter manipulation attacks, evaluating their impact on three
distinct algorithms for traffic sign classification and lane assist. Our
comprehensive tests and simulations reveal that even a single bit-flip of a DNN
parameter can severely degrade classification accuracy to less than 10%, posing
significant risks to the overall performance and safety of AVs. Additionally, we
identify critical parameters based on bit position, layer position, and
bit-flipping direction, offering essential insights for developing robust
security measures in autonomous vehicle systems.
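To illustrate why a single bit-flip in a stored DNN parameter can be so damaging, the short sketch below (an illustrative example, not part of the reported experiments; the flip_bit helper is hypothetical) flips one bit in the IEEE-754 encoding of a float32 weight. Flipping a high exponent bit changes the weight's magnitude by many orders of magnitude, while flipping a low mantissa bit is nearly harmless, consistent with the observation that criticality depends on bit position.

    import numpy as np

    def flip_bit(value, bit):
        """Flip one bit (0 = mantissa LSB, 30 = exponent MSB, 31 = sign) of a float32 value."""
        buf = np.array([value], dtype=np.float32)
        buf.view(np.uint32)[0] ^= np.uint32(1 << bit)   # XOR the chosen bit in the raw 32-bit encoding
        return float(buf[0])

    w = 0.0123                     # a typical small DNN weight
    print(flip_bit(w, 30))         # exponent MSB flipped: ~4e36, downstream activations saturate
    print(flip_bit(w, 3))          # low mantissa bit flipped: still ~0.0123, effectively benign

Under these assumptions, a flip in the exponent's most significant bit inflates the weight by roughly a factor of 2^128, which is one intuitive reason a single well-placed fault can collapse classification accuracy.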