Building Responsibility in AI: Transparent AI for Highly Automated Vehicle Systems

2021-01-0195

04/06/2021

Event
SAE WCX Digital Summit
Abstract
Replacing a human driver is an extraordinarily complex task. While machine learning (ML) and its subset, deep learning (DL), are fueling breakthroughs in everything from consumer mobile applications to image and gesture recognition, significant challenges remain. The majority of artificial intelligence (AI) learning applications, particularly with respect to Highly Automated Vehicles (HAVs) and their ecosystem, have remained opaque: genuine "black boxes." Data is loaded into one side of the ML system and results come out the other, with little to no understanding of how the decision was arrived at.
To be accurate, these AI systems require vast amounts of data to process, and the sheer computational complexity of building these DL-based AI models also slows progress in accuracy and limits the practicality of deploying DL at scale. In addition, training times and forensic decision investigations, often measured in days and sometimes in weeks or months, slow implementation and make traditional agile approaches, with their definition of done, almost impossible to follow.
Recent breakthroughs have allowed ML systems in an HAV implementation context to determine reasonable solutions in very constrained, fixed scenarios. However, these systems are typically very complex and largely incapable of explaining how or why they arrived at a given solution. Without this knowledge and reasoning, intervention and proof of compliance during HAV development, validation, verification, and production applications are near impossible. To cut the development and forensic time it takes to create and understand DL models with high precision, decisions must be understood and reasoning applied.
While significant breakthroughs have been made in Explainable AI (XAI) through DL technologies such as recursive methods, and in Cognitive AI (CAI) through user interfaces (UIs), they all commonly fail at "transparency": the ability to access the logic behind a decision made by an ML system. Transparency is a requirement for establishing trust in high-risk, high-human-cost applications such as an HAV. This paper will outline how a solution based on Knowledge Representation and Reasoning (KRR) creates a "holistic AI" approach that provides both knowledge of how an HAV machine learning system arrives at its decisions and the rationale behind them, offering new insight into what would typically be a blind process. This "Transparent AI" solution will be explored through an algorithmic approach and then demonstrated through a software implementation within Baidu's Apollo model framework.
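The abstract does not detail the paper's KRR mechanism. As a loose illustration of the general idea only, and not the author's implementation, the sketch below shows a minimal forward-chaining rule engine that records which rules fired and why, so every decision carries a human-readable rationale. All names (Rule, KnowledgeBase, the example facts and rules) are hypothetical and assumed for illustration.

```python
# Hypothetical sketch of a KRR-style decision trace for an HAV scenario.
# Not the paper's implementation; rules and facts are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Rule:
    """A single if-then rule: if all premises hold, assert the conclusion."""
    name: str
    premises: frozenset
    conclusion: str


@dataclass
class KnowledgeBase:
    """Naive forward-chaining engine that records why each fact was derived."""
    rules: list
    facts: set = field(default_factory=set)
    trace: list = field(default_factory=list)  # human-readable rationale

    def assert_fact(self, fact, reason="observed"):
        if fact not in self.facts:
            self.facts.add(fact)
            self.trace.append(f"{fact} <- {reason}")

    def infer(self):
        """Fire rules until no new facts can be derived."""
        changed = True
        while changed:
            changed = False
            for rule in self.rules:
                if rule.premises <= self.facts and rule.conclusion not in self.facts:
                    reason = f"rule '{rule.name}' ({' & '.join(sorted(rule.premises))})"
                    self.assert_fact(rule.conclusion, reason)
                    changed = True


# Toy rules for one fixed scenario: yield and brake for a pedestrian.
rules = [
    Rule("yield_to_pedestrian",
         frozenset({"pedestrian_in_crosswalk", "ego_approaching_crosswalk"}),
         "must_yield"),
    Rule("brake_when_yielding",
         frozenset({"must_yield", "ego_speed_above_creep"}),
         "apply_brake"),
]

kb = KnowledgeBase(rules)
kb.assert_fact("pedestrian_in_crosswalk", "perception module output")
kb.assert_fact("ego_approaching_crosswalk", "localization + map")
kb.assert_fact("ego_speed_above_creep", "chassis speed sensor")
kb.infer()

print("Brake decision:", "apply_brake" in kb.facts)
for step in kb.trace:  # the transparent rationale behind the decision
    print(" ", step)
```

The point of the sketch is only that a symbolic knowledge base makes the decision path inspectable after the fact, which is the property the abstract calls transparency; how the paper integrates such reasoning with DL perception inside Apollo is covered in the full text.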
Details
DOI
https://doi.org/10.4271/2021-01-0195
Pages
19
Citation
Minarcin, M., "Building Responsibility in AI: Transparent AI for Highly Automated Vehicle Systems," SAE Technical Paper 2021-01-0195, 2021, https://doi.org/10.4271/2021-01-0195.
Additional Details
Publisher
SAE International
Published
Apr 6, 2021
Product Code
2021-01-0195
Content Type
Technical Paper
Language
English