Toward Explainability in Urban Motion Prediction—Survey and Outlook

Abstract
With the influx of artificial intelligence (AI) models aiding the development of autonomous driving (AD), it has become increasingly important to analyze and categorize aspects of their operation. Alongside the high predictive power innate to AI solutions, the safety requirements inherent to automotive systems and the transparency demands imposed by legislation create a natural need for explainable and predictable models. In this work, we explore the strategies that reveal the inner workings of these models at various component levels, focusing on those adopted at the modeling stage. Specifically, we highlight and review the use of explainability in state-of-the-art AI-based scenario understanding and motion prediction methods, which represent an integral part of any AD system. We structure the discussion along three key axes inherent to any AI solution: the data, the model architecture, and the loss optimization. For each axis, we outline the general methodologies for introducing explainability and review practical realizations of each. We conclude the article by identifying several strategies that we believe are yet to be fully explored, such as physics-inspired machine learning methods, neural network pretraining, graph neural networks designed using domain-specific priors, and end-to-end trainable networks based on differentiable kinematic models.
Details
DOI
https://doi.org/10.4271/12-08-01-0009
Pages
15
Citation
Okanovic, I., Stolz, M., and Hillbrand, B., "Toward Explainability in Urban Motion Prediction—Survey and Outlook," SAE Int. J. CAV 8(1), 2025, https://doi.org/10.4271/12-08-01-0009.
Additional Details
Publisher
SAE International
Published
Aug 24
Product Code
12-08-01-0009
Content Type
Journal Article
Language
English