With the widespread development of automated driving systems (ADS), it is
imperative that standardized testing methodologies be established to assure their
safety and functionality. Scenario testing evaluates the behavior of an ADS-equipped
subject vehicle (SV) in predefined driving scenarios. This paper compares four
modes of performing such tests: closed-course testing with real actors,
modes of performing such tests: closed-course testing with real actors,
closed-course testing with surrogate actors, simulation testing, and
closed-course testing with mixed reality. In a collaboration between the
Waterloo Intelligent Systems Engineering (WISE) Lab and AAA, six automated
driving scenario tests were executed on a closed course, in simulation, and in
mixed reality. These tests involved the University of Waterloo’s automated
vehicle, dubbed the “UW Moose”, as the SV, as well as pedestrians, other
vehicles, and road debris. Drawing on both the data collected and the experience
gained from executing these test scenarios, the paper reports on the advantages and
disadvantages of the four scenario testing modes and compares them against eight
criteria. It also identifies several possible implementations of mixed-reality
scenario testing, including different strategies for data mixing. The paper
closes with twelve recommendations for choosing among the four modes.