
Using Multiple Photographs and USGS LiDAR to Improve Photogrammetric Accuracy

Journal Article
ISSN: 2327-5626, e-ISSN: 2327-5634
Published April 03, 2018 by SAE International in United States
Citation: Terpstra, T., Dickinson, J., and Hashemian, A., "Using Multiple Photographs and USGS LiDAR to Improve Photogrammetric Accuracy," SAE Int. J. Trans. Safety 6(3):193-216, 2018.
Language: English


The accident reconstruction community relies on photogrammetry for taking measurements from photographs. Camera matching, a close-range photogrammetry method, is a particularly useful tool for locating accident scene evidence after time has passed and the evidence is no longer physically visible. In this method, objects within the accident scene that have remained unchanged are used as references for locating evidence that is no longer physically present at the scene, such as tire marks, gouge marks, and vehicle points of rest. Roadway lines, edges of pavement, sidewalks, signs, posts, buildings, and other structures are recognizable scene features that, if unchanged between the time of the accident and the time of analysis, are beneficial to the photogrammetric process. Where these scene features are limited or absent, achieving accurate photogrammetric solutions can be challenging. Off-road incidents, snow-covered roadways, rural areas, and unpaved roadways are examples where available scene features may be limited. Other factors, such as the number of photographs, the specific vantage points of the photographs, and occlusion of recognizable features within them, can further limit the number of common features available for camera matching. In these instances, camera matching solutions can be improved by extending the 3D environment to include objects visible in the distance, such as mountains, valleys, and other notable landmarks that typically fall outside the scope of 3D scene mapping. This article demonstrates a method for obtaining this elevation data and using it in combination with 3D scene mapping for camera matching photogrammetry. Photogrammetric solutions based on limited scene features are compared to solutions based on the same limited scene features with the addition of digital elevation models.
Solution accuracies from both scenarios are then individually evaluated to demonstrate the improvements gained through the use of elevation models. In this study, incorporating digital elevation modeling at a site with limited scene features yields a 74% improvement in the accuracy of evidence located through camera matching photogrammetry. For further evaluation, the camera match solutions were compared in combined solutions, where information obtained from one camera match was used to inform the next. This was done both for the scenario with digital elevation models and for the scenario without. The results demonstrate how the number of available photographs can influence the overall accuracy of photogrammetry solutions.
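The article does not include source code, but the core data-preparation step it describes can be illustrated: converting USGS digital elevation data into 3D terrain vertices that extend a camera-matching scene beyond the mapped area. The sketch below is a minimal, hypothetical illustration (not the authors' implementation), assuming the DEM has been exported in the common Esri ASCII grid (.asc) format; the sample grid values are fabricated for demonstration.

```python
# Hedged sketch: parse an Esri ASCII grid DEM export and emit (x, y, z)
# vertices usable as distant terrain geometry in a camera-matching scene.
# Header keywords (ncols, nrows, xllcorner, yllcorner, cellsize,
# NODATA_value) follow the standard .asc layout; sample data is fabricated.

def parse_ascii_grid(text):
    """Split an .asc file into a header dict and rows of elevation values."""
    lines = text.strip().splitlines()
    header = {}
    data_start = 0
    for i, line in enumerate(lines):
        parts = line.split()
        # Header lines are "keyword value"; data lines are all numeric.
        if len(parts) == 2 and not parts[0].lstrip('-').replace('.', '').isdigit():
            header[parts[0].lower()] = float(parts[1])
        else:
            data_start = i
            break
    rows = [[float(v) for v in line.split()] for line in lines[data_start:]]
    return header, rows

def grid_to_vertices(header, rows):
    """Return (x, y, z) vertices; .asc rows run north to south."""
    cell = header['cellsize']
    x0 = header['xllcorner']
    y0 = header['yllcorner']
    nrows = int(header['nrows'])
    nodata = header.get('nodata_value')
    verts = []
    for r, row in enumerate(rows):
        y = y0 + (nrows - 1 - r) * cell   # top row is the northernmost
        for c, z in enumerate(row):
            if nodata is not None and z == nodata:
                continue                   # skip cells with no elevation data
            verts.append((x0 + c * cell, y, z))
    return verts

# Fabricated 3x2 sample grid (UTM-style coordinates, 10 m cells).
sample = """ncols 3
nrows 2
xllcorner 500000
yllcorner 4300000
cellsize 10
NODATA_value -9999
101.5 102.0 103.2
100.1 -9999 101.8"""

header, rows = parse_ascii_grid(sample)
verts = grid_to_vertices(header, rows)
# One NODATA cell is skipped, leaving 5 vertices.
```

The resulting vertex list can be triangulated or imported into a 3D environment as the distant landmark geometry (ridgelines, valleys) that the article uses to constrain camera-match solutions when near-field scene features are scarce.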