Browse Topic: Photogrammetry

Items (114)
Photogrammetry is a commonly used type of analysis in accident reconstruction. It allows the location of physical evidence, as shown in photographs and video, and the position and orientation of vehicles, other road users, and objects to be quantified. Lens distortion is an important consideration when using photogrammetry. Failure to account for lens distortion can result in inaccurate spatial measurements, particularly when elements of interest are located toward the edges and corners of images. Depending on whether the camera properties are known or unknown, various methods for removing lens distortion are commonly used in photogrammetric analysis. However, many of these methods assume that the lens distortion results from a spherical lens (or, more rarely, another known lens type) and that the image has not been altered algorithmically by the camera. Today, several cameras on the market algorithmically alter images before saving them. These camera systems use
Pittman, Kathleen; Mockensturm, Eric; Buckman, Taylor; White, Kirsten
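Spherical-lens distortion of the kind discussed above is most often parameterized with the Brown radial model. The sketch below inverts that model for a normalized image point by fixed-point iteration; the coefficients `k1` and `k2` are hypothetical example values, not taken from any particular camera.

```python
def undistort_point(xd, yd, k1, k2, iterations=20):
    """Iteratively invert the Brown radial distortion model
    x_d = x_u * (1 + k1*r^2 + k2*r^4) for a normalized image point.
    k1 and k2 are hypothetical radial coefficients."""
    xu, yu = xd, yd  # initial guess: undistorted == distorted
    for _ in range(iterations):
        r2 = xu * xu + yu * yu
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / factor, yd / factor
    return xu, yu

# Distort a known point with example barrel-distortion coefficients,
# then recover it to check the inversion.
k1, k2 = -0.25, 0.08
xu_true, yu_true = 0.4, 0.3
r2 = xu_true ** 2 + yu_true ** 2
factor = 1.0 + k1 * r2 + k2 * r2 ** 2
xd, yd = xu_true * factor, yu_true * factor
xu, yu = undistort_point(xd, yd, k1, k2)
```

Note this only models lens-induced radial distortion; as the abstract points out, it does not account for images that a camera has altered algorithmically before saving.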
Shadow positions can be useful in determining the time of day that a photograph was taken and determining the position, size, and orientation of an object casting a shadow in a scene. Astronomical equations can predict the location of the sun relative to the earth, and therefore the position of shadows cast by objects, based on the location’s latitude and longitude as well as the date and time. 3D computer software packages include these calculations as part of their built-in sun systems. In this paper, the authors examine the sun system in the 3D modeling software 3ds Max to determine its accuracy for use in accident reconstruction. A parking lot was scanned using a FARO LiDAR scanner to create a point cloud of the environment. A camera was then set up on a tripod in the environment, and photographs were taken at various times throughout the day from the same location. This environment was 3D modeled in 3ds Max based on the point cloud, and the sun system in 3ds Max was configured using the
Barreiro, Evan; Erickson, Michael; Smith, Connor; Carter, Neal; Hashemian, Alireza
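The astronomical relationships these sun systems implement can be illustrated with a simplified solar-elevation calculation (approximate declination formula plus solar hour angle). This is a rough sketch accurate to about a degree; production sun systems also handle the equation of time, atmospheric refraction, and time zones, which is exactly why validation studies like the one above are useful.

```python
import math

def solar_elevation_deg(latitude_deg, day_of_year, solar_hour):
    """Approximate solar elevation angle for a given latitude,
    day of year (1-365), and local solar time in hours."""
    # Approximate solar declination in degrees
    decl = -23.44 * math.cos(2 * math.pi * (day_of_year + 10) / 365)
    # Hour angle: the sun moves 15 degrees per hour from solar noon
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, dec, ha = map(math.radians, (latitude_deg, decl, hour_angle))
    sin_el = (math.sin(lat) * math.sin(dec)
              + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(sin_el))

# Summer solstice (day 172), solar noon, 40 deg N latitude:
# elevation should be near 90 - 40 + 23.44 degrees
el = solar_elevation_deg(40.0, 172, 12.0)
```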
Tesla Model 3 and Model Y vehicles come equipped with a standard dashcam feature with the ability to record video in multiple directions. Front, side, and rear views were readily available via direct USB download. Additional types of front and side views were indirectly available via privacy requests with Tesla. Prior research explored neither the four most readily available camera views across multiple vehicles nor field camera calibration techniques particularly useful for future software and hardware changes. Moving GPS-instrumented vehicles were captured traveling approximately 7.2 kph to 20.4 kph across the front, side, and rear views available via direct USB download. Reverse projection photogrammetry projects and video timing data successfully measured vehicle speeds with an average error of 2.45% across 25 tests. Previously researched front and rear camera calibration parameters were reaffirmed despite software changes, and additional parameters for the side cameras
Jorgensen, Michael; Swinford, Scott; Imada, Kevin; Farhat, Ali
Camera matching photogrammetry is widely used in the field of accident reconstruction for mapping accident scenes, modeling vehicle damage from post collision photographs, analyzing sight lines, and video tracking. A critical aspect of camera matching photogrammetry is determining the focal length and Field of View (FOV) of the photograph being analyzed. The intent of this research is to analyze the accuracy of the metadata reported focal length and FOV. The FOV from photographs captured by over 20 different cameras of various makes, models, sensor sizes, and focal lengths will be measured using a controlled and repeatable testing methodology. The difference in measured FOV versus reported FOV will be presented and analyzed. This research will provide analysts with a dataset showing the possible error in metadata reported FOV. Analysts should consider the metadata reported FOV as a starting point for photogrammetric analysis and understand that the FOV calculated from the image
Smith, Connor A.; Erickson, Michael; Hashemian, Alireza
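The relationship between focal length, sensor size, and FOV that underlies this comparison is the pinhole-camera formula FOV = 2·atan(w / 2f). A minimal sketch:

```python
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm):
    """Pinhole-model horizontal field of view in degrees.
    Real lenses deviate from this ideal (distortion, focus breathing),
    which is one reason a measured FOV can differ from the FOV
    derived from metadata-reported focal length."""
    return 2.0 * math.degrees(
        math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# Full-frame sensor (36 mm wide) with a 50 mm lens
fov = horizontal_fov_deg(50.0, 36.0)
```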
This paper introduces a method to solve the instantaneous speed and acceleration of a vehicle from one or more sources of video evidence by using optimization to determine the best fit speed profile that tracks the measured path of a vehicle through a scene. Mathematical optimization is the process of seeking the variables that drive an objective function to some optimal value, usually a minimum, subject to constraints on the variables. In the video analysis problem, the analyst is seeking a speed profile that tracks measured vehicle positions over time. Measured positions and observations in the video constrain the vehicle’s motion and can be used to determine the vehicle’s instantaneous speed and acceleration. The variables are the vehicle’s initial speed and an unknown number of periods of approximately constant acceleration. Optimization can be used to determine the speed profile that minimizes the total error between the vehicle’s calculated distance traveled at each measured
Snyder, Sean; Callahan, Michael; Wilhelm, Christopher; Johnk, Chris; Lowi, Alvin; Bretting, Gerald
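As a toy version of the optimization described above, a single constant-acceleration segment d(t) = v0·t + ½·a·t² can be fit to measured distances in closed form via the normal equations. The paper's method is more general (multiple acceleration segments found by an optimizer subject to constraints); this sketch only shows the core idea of fitting a speed profile to position-versus-time data.

```python
def fit_speed_profile(times, distances):
    """Least-squares fit of d(t) = v0*t + 0.5*a*t^2 to measured
    (time, distance) pairs; returns (v0, a). Single constant-
    acceleration segment only -- a minimal illustration."""
    # Sums needed for the 2x2 normal equations
    s_tt = sum(t * t for t in times)
    s_t3 = sum(t ** 3 for t in times)
    s_t4 = sum(t ** 4 for t in times)
    s_dt = sum(d * t for d, t in zip(distances, times))
    s_dt2 = sum(d * t * t for d, t in zip(distances, times))
    # Solve [[s_tt, 0.5*s_t3], [s_t3, 0.5*s_t4]] @ [v0, a] = [s_dt, s_dt2]
    a11, a12 = s_tt, 0.5 * s_t3
    a21, a22 = s_t3, 0.5 * s_t4
    det = a11 * a22 - a12 * a21
    v0 = (s_dt * a22 - a12 * s_dt2) / det
    a = (a11 * s_dt2 - s_dt * a21) / det
    return v0, a

# Synthetic data: v0 = 10 m/s, a = 2 m/s^2
times = [0.5, 1.0, 1.5, 2.0]
distances = [10.0 * t + 0.5 * 2.0 * t * t for t in times]
v0, a = fit_speed_profile(times, distances)
```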
Testing aircraft antennas is challenging since optimal tests are made after antenna installation. Aircraft are often taken to anechoic antenna test facilities which create long lead times, transportation hassle, and very high costs. Portable alternatives exist but often have compromised testing fidelity. Innovators at the NASA Glenn Research Center have developed the PLGRM system, which allows an installed antenna to be characterized in an aircraft hangar. All PLGRM components can be packed onto pallets, shipped, and easily operated.
Video of an event recorded from a moving camera contains information not only useful for reconstructing the locations and timing of an event, but also the velocity of the camera attached to the moving object or vehicle. Determining the velocity of a video camera recording from a moving vehicle is useful for determining the vehicle’s velocity and can be compared with speeds calculated through other reconstruction methods, or to data from vehicle speed monitoring devices. After tracking the video, the positions and speeds of other objects within the video can also be determined. Video tracking analysis traditionally has required a site inspection to map the three-dimensional environment. In instances where there have been significant site changes, where there is limited or no site access, and where budgeting and timing constraints exist, a three-dimensional environment can be created using publicly available aerial imagery and aerial LiDAR. This paper presents a methodology for creating
Terpstra, Toby; McDonough, Sean; Helms, Ethan; Beier, Steven; Hessell, David
Creating a 3-dimensional environment using imagery from small unmanned aerial systems (sUAS, or unmanned aerial vehicles -UAVs, or colloquially, drones) has grown in popularity recently in accident reconstruction. In this process, ground control points are placed at an accident scene and an sUAS is flown over an accident site and a series of overlapping, high resolution images are taken of the site. Those images and ground control points are then loaded onto a computer and processed using photogrammetric software to create a 3-dimensional point cloud or mesh of the site, which then can be used as a tool for recreating an accident scene. Many software packages have been created to perform these tasks, and in this paper, the authors examine RealityCapture, a newer photogrammetric software, to evaluate its accuracy for use in accident reconstruction. It is the authors’ experience that RealityCapture may at times produce point clouds with less noise than other software packages. To do
Barreiro, Evan; Carter, Neal
Shadow positions can be useful in determining the time of day that a photograph was taken and determining the position, size, and orientation of an object casting a shadow in a scene. Astronomical equations can predict the location of the sun relative to the earth, and therefore the position of shadows cast by objects, based on the location’s latitude and longitude as well as the date and time. 3D computer software packages have begun to include these calculations as part of their built-in sun systems. In this paper, the authors examine the sun system in the 3D modeling software Blender to determine its accuracy for use in accident reconstruction. A parking lot was scanned using a FARO LiDAR scanner to create a point cloud of the environment. A camera was then set up on a tripod in the environment, and photographs were taken at various times throughout the day from the same location in the environment. This environment was then 3D modeled in Blender based on the point cloud, and the sun system
Barreiro, Evan; Carter, Neal; Hashemian, Alireza
Optical Image Stabilization (OIS) is a technology used in cameras and camcorders to reduce blur and shaky images or videos caused by unintentional camera movements. The primary goal of OIS is to counteract motion and maintain the stability of the image being captured, resulting in clearer, sharper, and more stable photos and videos. PhotoModeler, a photogrammetry software, advises users to turn off OIS on their cameras. Since the iPhone 7, OIS has become standard on all iPhones and cannot be deactivated. When calibrating an iPhone camera for photogrammetry, the OIS affects the calibration project's marking residual. In photogrammetry and 3D modeling terminology, "marking residual" typically refers to the difference, measured in pixels, between the observed image points and the corresponding points predicted by the photogrammetric process. In other words, it represents the error between the actual image measurements and the values calculated by the photogrammetric algorithm. Because of
Neal, Joseph; Leipold, Tara; Petroskey, Karla
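As described above, a marking residual is the pixel distance between an observed image point and the point predicted by the photogrammetric solution. A common single-number summary is the RMS of those residuals; the function below is an illustrative sketch, not part of PhotoModeler's API.

```python
import math

def rms_marking_residual(observed, predicted):
    """Root-mean-square marking residual in pixels, given lists of
    observed and predicted 2D image points (same order)."""
    sq = [(ox - px) ** 2 + (oy - py) ** 2
          for (ox, oy), (px, py) in zip(observed, predicted)]
    return math.sqrt(sum(sq) / len(sq))

# Two marked points: one predicted perfectly, one 5 px off
# (a 3-4-5 triangle), giving RMS = sqrt((0 + 25) / 2)
r = rms_marking_residual([(0, 0), (3, 4)], [(0, 0), (0, 0)])
```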
The 3D crush model can be obtained by any suitable photogrammetry method using this image set and is intended to graphically represent in photographs the shape and orientation of the damaged surface(s) relative to the undamaged, or least damaged, portion of the vehicle. The procedure is intended to provide an image set sufficient to determine, with the use of photogrammetric methodologies, the 3D location of points on the crushed surface of the damaged vehicle. Measurement of the exterior damaged surface(s) on a vehicle is a necessary step in quantifying the deformation caused by a collision and the energy dissipated by the deformation process. The energy analysis is sometimes called a crush analysis. Evaluation of the energy dissipated is useful in reconstructing the change in the velocity of the vehicles (delta-V) involved in a collision. This guideline is intended for use by investigators who do not have photogrammetry expertise, special equipment or training and may be constrained
Crash Data Collection and Analysis Standards Committee
A new spatial calibration procedure has been introduced for infrared optical systems developed for cases where camera systems are required to be focused at distances beyond 100 meters.
Army Combat Capabilities Development Command Armaments Center, Picatinny Arsenal, NJ
All commercially available camera systems have lenses (and internal geometries) that cannot perfectly refract light waves and refocus them onto a two-dimensional (2D) image sensor. This means that all digital images contain elements of distortion and thus are not a true representation of the real world. Expensive high-fidelity lenses may have little measurable distortion, but if sufficient distortion is present, it will adversely affect photogrammetric measurements made from the images produced by these systems. This is true regardless of the type of camera system, whether it be a daylight camera, infrared (IR) camera, or camera sensitive to another part of the electromagnetic spectrum. The most common examples of large
In the scope of development or certification processes for flight under known icing conditions, aircraft have to be tested in icing wind tunnels under relevant conditions. The documentation of these tests has to be performed at a high level of detail. The generated data is used to prove the functionality of the systems, to develop new systems, and for scientific purposes, for example the development or validation of numerical tools for ice accretion simulation. One way of documenting the resulting ice geometry is the application of an optical 3D scanning or reconstruction method. This work investigates and reviews optical methods for three-dimensional reconstructions of objects and the application of these methods in ice accretion documentation with respect to their potential for time-resolved measurement. Laboratory tests are performed for time-of-flight reconstruction of ice geometries and the application of optical photogrammetry with and without a multi-light approach. The results
Neubauer, Thomas; Kozomara, David; Puffing, Reinhard; Teufl, Luca
Aircraft icing is an important subject for investigation due to its critical effects on flight performance. Ice accretion analysis is commonly carried out using computational tools, from which parameters such as the mean ice shape and roughness characteristics can be obtained, as these parameters have a strong effect on the physics of aerodynamics and ice accretion. Hence, the accurate digitization of a generated ice shape through ice measurement techniques is of crucial importance. This study aimed to validate the use of photogrammetry for measurement of ice geometries and roughness on UAV airfoils, by comparing it with the cast-and-mold method. Two test cases, one mixed ice and one rime ice, were analyzed, each with three subcases varying in the number of photographs used. For test case 1, mixed ice, the photogrammetry method resulted in an underestimation of mean ice height by 0.5 mm in the smooth zone and an overestimation by 0.2 mm and 0.6 mm on the pressure and suction sides
Baghel, Anadika Paul; Sotomayor-Zakharov, Denis; Knop, Inken; Ortwein, Hans-Peter
A Centrifugal Pendulum Vibration Absorber (CPVA) is used to absorb torsional vibrations caused by the rotational irregularity of the engine. It is increasingly used in modern powertrains. In research on the dynamic characteristics of the CPVA, it is necessary to obtain the real motion of the pendulum to validate the fit of the mathematical model. The usual method is to install an angle sensor to measure the movement of the pendulum. On the one hand, installing the sensor affects the pendulum's movement to some extent, so the measurement results do not match the actual motion. On the other hand, the motion of the pendulum is not only rotation about the rotational axis of the CPVA rotor but also translation relative to it. As a result, it is difficult to obtain accurate motion from the angle sensor alone. We propose a non-contact centrifugal pendulum motion measurement method. A high-speed camera is used to photograph the motion of the CPVA
Li, Weijun; Wu, Guangqiang; Zhang, Yi
Recent Tesla models contain four integrated onboard cameras that serve the Autopilot and Self-Driving Capabilities of the vehicle and act as a dashcam by recording footage to a local USB drive. The purpose of this study is to analyze the footage recorded by the integrated cameras and determine its suitability for speed determinations of both the host vehicle and surrounding vehicles through photogrammetry analyses. The front and rear cameras of the test vehicle (2019 Tesla Model 3) were calibrated for focal length and lens distortion characteristics. Two types of tests were performed to determine host vehicle speed: constant-speed and acceleration. Several frames from each test were analyzed. The distance between camera locations was used to gather vehicle speed through a time distance analysis. These speeds were compared to those gathered via the onboard GPS instrumentation. Two additional types of tests were performed to determine surrounding vehicle speeds: a vehicle approaching
Molnar, Benjamin T.; Peck, Louis R.
NASA researchers have developed a compact, cost-effective imaging system using a co-linear, high-intensity LED illumination unit to minimize window reflections for background-oriented schlieren (BOS) and machine vision measurements. The imaging system tested in NASA wind tunnels can reduce or eliminate shadows that occur when using many existing BOS and photogrammetric measurement systems; these shadows occur in existing systems for a variety of reasons, including the severe back-reflections from wind tunnel viewing port windows and variations in the refractive index of the imaged volume.
The rising popularity of electric scooters has brought a surge in electric scooter crashes. Crash reconstructionists increasingly have access to global positioning system (GPS) data for micromobile vehicle trips, and GPS devices can produce a wealth of data about cyclists’, scooterists’, and other riders’ road paths and route usage. However, prior research has demonstrated that GPS positional accuracy is less reliable for more nuanced roadway positioning, such as which lane a vehicle occupies, as well as within-lane movements, such as acceleration and deceleration. This limitation presents a challenge for crash reconstructionists who may have access to GPS data and require second-by-second positional accuracy to determine such nuanced maneuvers and vehicle positioning in their analysis. The purpose of this study was to explore the positional accuracy of five GPS units for a micromobile vehicle during three different ride conditions: acceleration, deceleration, and constant speed. The same
Engleman, Krystina; Vega, Henry; Suway, Jeffrey; Desai, Elvis
Aerial photoscanning is a software-based photogrammetry method for obtaining three-dimensional site data. Ground Control Points (GCPs) are commonly used as part of this process. These control points are traditionally placed within the site and then captured in aerial photographs from a drone. They are used to establish scale and orientation throughout the resulting point cloud. There are different types of GCPs, and their positions are established or documented using different technologies. Some systems include satellite-based Global Positioning System (GPS) sensors which record the position of the control points at the scene. Other methods include mapping in the control point locations using LiDAR based technology such as a total station or a laser scanner. This paper presents a methodology for utilizing publicly available LiDAR data from the United States Geological Survey (USGS) in combination with high-resolution aerial imagery to establish GCPs based on preexisting site landmarks
Terpstra, Toby; Mckelvey, Nathan; King, Eric; Hashemian, Alireza; King, Charles
Photoscanning photogrammetry is a method for obtaining and preserving three-dimensional site data from photographs. This photogrammetric method is commonly associated with small Unmanned Aircraft Systems (sUAS) and is particularly beneficial for large area site documentation. The resulting data is comprised of millions of three-dimensional data points commonly referred to as a point cloud. The accuracy and reliability of these point clouds are dependent on hardware, hardware settings, field documentation methods, software, software settings, and processing methods. Ground control points (GCPs) are commonly used in aerial photoscanning to achieve reliable results. This research examines multiple GCP types, flight patterns, software, hardware, and a ground based real-time kinematic (RTK) system. Multiple documentation and processing methods are examined and accuracies of each are compared for an understanding of how capturing methods will optimize site documentation.
Mckelvey, Nathan; King, Charles; Terpstra, Toby; Hashemian, Alireza; Mitchell, Steven
Accident scene data obtained from photographs and videos are vital to the analysis performed during accident reconstruction. They allow forensic analysts to precisely determine the orientation and location of evidence. For that reason, the term digital evidence was adopted and is commonly used by forensic analysts in conjunction with retro-projection methods as an aid to reconstruct the events leading up to the incident in question. Photogrammetry is a retro-projection method commonly used by analysts to match scene photographs with calibrated control points obtained from three-dimensional point cloud data collected at the subject accident site and visualized on the accident scene images. From this match, the analyst can determine the position of the camera that took the subject image. In general, the point cloud data allows for increased accuracy during the photogrammetry process. Video footage obtained from the scenes can be exported as frames
Morales, Roberto C.; Farias, Edgar
The world is going through the fourth industrial revolution, where digital transformation is one of the global market trends. To maintain competitive advantages and sustainable businesses, an increasing number of companies and organizations are embracing digital transformation processes. These organizations are changing their business and processes and creating new business models with the help of digital technologies, taking all industries and business models to unprecedented heights and, in a certain way, consolidating globalization. For such digital transformation, technologies like IoT (Internet of Things), artificial intelligence, machine learning, neural networks, and others are increasingly common. This paper seeks to define what technical aspects are involved in implementing digitalization in the process of vehicle collision data analysis. In this sense, insurance companies are aware of the changes and are trying to follow the trends and update themselves to provide better
Stano, Pedro Henrique Silva
Traffic cameras, dash-cameras, surveillance cameras, and other video sources increasingly capture critical evidence used in the accident reconstruction process. The iNPUT-ACE Camera Match Overlay tool can utilize photogrammetry to project a two-dimensional video onto a three-dimensional point cloud, enabling measurements to be taken directly from the video. Those measurements are commonly used in, and critical for, the determination of vehicle speed in accident reconstruction. The accuracy of the Camera Match Overlay tool has not yet been thoroughly examined. To validate the use of the tool to measure vehicle speed for accident reconstruction, data were collected from a series of tests involving three traffic cameras, a stationary and moving dash-camera, a stationary and moving cell-phone camera, and a doorbell surveillance camera. Each camera provided unique specifications of quality and focal length to ensure the tool would be tested in a variety of scenarios. Vehicles drove past
Jorgensen, Michael; Swinford, Scott; Jones, Brian
This paper introduces a method for calculating vehicle speed and uncertainty range in speed from video footage. The method considers uncertainty in two areas: the uncertainty in locating the vehicle’s positions and the uncertainty in the time interval between them. An abacus style timing light was built to determine the frame time and uncertainty of time between frames of three different cameras. The first camera had a constant frame rate, the second camera had minor frame rate variability and the third had more significant frame rate variability. Video of an instrumented vehicle traveling at different, but known, speeds was recorded by all three cameras. Photogrammetry was conducted to determine a best fit for the vehicle positions. Deviation from that best fit position that still produced an acceptable range was also explored. Video metadata reported by iNPUT-ACE and Mediainfo was incorporated into the study. When photogrammetry was used to determine a vehicle’s position and speed from
Beauchamp, Gray; Pentecost, David; Koch, Daniel; Hashemian, Alireza; Marr, James; Cordero, Rheana
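The worst-case combination of position and frame-timing uncertainty described above can be sketched as follows; the distances, times, and tolerances are hypothetical illustration values, not figures from the paper.

```python
def speed_range_kph(dist_m, dist_tol_m, dt_s, dt_tol_s):
    """Worst-case speed bounds (km/h) from a traveled distance with
    tolerance +/- dist_tol_m over an elapsed time with tolerance
    +/- dt_tol_s: shortest distance over longest time gives the
    minimum, longest distance over shortest time the maximum."""
    v_min = (dist_m - dist_tol_m) / (dt_s + dt_tol_s) * 3.6
    v_max = (dist_m + dist_tol_m) / (dt_s - dt_tol_s) * 3.6
    return v_min, v_max

# 10 m +/- 0.1 m traveled over 0.500 s +/- 0.002 s
# (nominal speed: 10 / 0.5 * 3.6 = 72 km/h)
v_min, v_max = speed_range_kph(10.0, 0.1, 0.5, 0.002)
```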
Forensic disciplines are called upon to locate evidence from a single camera or static video camera, and both the angle of incidence and resolution can limit the accuracy of single-image photogrammetry. This research compares a baseline of known 3D data points representing evidence locations to evidence locations determined through single-image photogrammetry and evaluates the effect that object resolution (measured in pixels) and angle of incidence have on accuracy. Solutions achieved using an automated process, where a camera match alignment is calculated from common points in the 2D imagery and the 3D environment, were compared to solutions achieved by a more manual method of iteratively adjusting the camera’s position, orientation, and field-of-view until an alignment is achieved. This research independently utilizes both methods to achieve photogrammetry solutions and to locate objects within a 3D environment. Results are compared for a greater understanding of the accuracies that
Terpstra, Toby; Hashemian, Alireza; Gillihan, Robert; King, Eric; Miller, Seth; Neale, William
Photogrammetry is a commonly used and accepted technique within the field of accident reconstruction for taking measurements from photographs. Previous work has shown the accuracy of optimized close-range photogrammetry techniques to be within 2 mm compared to other high accuracy measurement techniques when using a known calibrated camera. This research focuses on the use of inverse camera close-range photogrammetry, where photographs from an unknown camera are used to model a vehicle. Photogrammetry is a measurement technique that utilizes triangulation to take measurements from photographs. The measurements are dependent on the geometry of the camera, such as the sensor size, focal length, lens type, etc. Three types of cameras were tested for accuracy: a high-end commercial camera, a point-and-shoot camera, and a cell phone camera. This study indicates that in a properly conducted inverse photogrammetry project, an analyst can be 95% confident the true position of a point will be
Neal, Joseph; Funk, Charles; Sproule, David
The feasibility of manufacturing autonomous unmanned aerial vehicles at low cost has allowed UAV developers to introduce numerous applications for society. The civil domain is a rapidly developing area that has driven UAV development for civilian applications such as bridge inspection, building monitoring, life or strength estimation of historical structures, and outdoor and indoor mapping of buildings. These autonomous UAVs, carrying high resolution cameras, fly over and around construction sites, buildings, and mines, capture images of various locations and point clouds on all sides of a building, and create a 3D map using photogrammetry techniques. The software auto-generates a report and uploads it to the cloud, where it can be accessed online. Autonomous operation is quite difficult in new environments, which requires SLAM (simultaneous localization and mapping) to operate the UAV between open spaces. This paper describes the technique of mapping a construction site
V, Hariprasad; MS, Yaswanth; V, Sathish; N, Yamuna; V, Karthick Sreenivasan; K, Sivakumar
The aerodynamic effects of Cold Soaked Fuel Frost (CSFF) have become increasingly significant as airworthiness authorities have been asked to allow it during aircraft take-off. The Federal Aviation Administration and the Finnish Transport Safety Agency signed a Research Agreement in aircraft icing research in 2015 and started a research co-operation in frost formation studies, computational fluid dynamics for ground de/anti-icing fluids, and de/anti-icing fluids aerodynamic characteristics. The main effort has been so far on the formation and aerodynamic effects of CSFF. To investigate the effects, a generic high-lift common research wind tunnel model and DLR-F15 airfoil, representing the wing of a modern jet aircraft, was built including a wing tank cooling system. Real frost was generated on the wing in a wind tunnel test section and the frost thickness was measured with an Elcometer gauge. Frost surface geometry was measured with laser scanning and photogrammetry. The aerodynamic effect of
Koivisto, Pekka; Soinne, Erkki; Broeren, Andy; Bond, Thomas
Small unmanned aerial systems have gained prominence in their use as tools for mapping the 3-dimensional characteristics of accident sites. Typically, the process of mapping an accident site involves taking a series of overlapping, high resolution photographs of the site, and using photogrammetric software to create a point cloud or mesh of the site. This process, known as image-based scanning, is explored and analyzed in this paper. A mock accident site was created that included a stopped vehicle, a bicycle, and a ladder. These objects represent items commonly found at accident sites. The accident site was then documented with several different unmanned aerial vehicles at differing altitudes, with differing flight patterns, and with different flight control software. The photographs taken with the unmanned aerial vehicles were then processed with photogrammetry software using different methods to scale and align the point clouds. The point cloud data produced with different vehicle
Carter, Neal; Hashemian, Alireza; Mckelvey, Nathan
This paper explains how photogrammetry and tracking technologies are a highly accurate alternative to instrumented accelerometer sensors for calculating distances between objects or vehicle interior parts and the dummies. Photogrammetry is used to calculate a real-world point’s position from an image. The tracking system uses algorithms to follow points and keep the same center point in each movie frame. A software application combines these two elements to provide the position, velocity, acceleration, and angles of every point in the movie along the three-dimensional axes. The tracking technology can be applied to the analysis of the dummy’s head injury criterion (HIC) against internal structure and objects such as the pole. The use of internal sensors for this kind of analysis offers only a yes/no response, whereas tracking provides the exact distance between the head and the interior components. Using tracking technology, the distance between the dummy’s head and any other structural part
Molina, David Company
The accident reconstruction community relies on photogrammetry for taking measurements from photographs. Camera matching, a close-range photogrammetry method, is a particularly useful tool for locating accident scene evidence after time has passed and the evidence is no longer physically visible. In this method, objects within the accident scene that have remained unchanged are used as a reference for locating evidence that is no longer physically available at the scene such as tire marks, gouge marks, and vehicle points of rest. Roadway lines, edges of pavement, sidewalks, signs, posts, buildings, and other structures are recognizable scene features that, if unchanged between the time of the accident and the time of analysis, are beneficial to the photogrammetric process. In instances where these scene features are limited or do not exist, achieving accurate photogrammetric solutions can be challenging. Off-road incidents, snow-covered roadways, rural areas, and unpaved roadways are examples
Terpstra, Toby; Dickinson, Jordan; Hashemian, Alireza
In an accident reconstruction, vehicle speeds and positions are always of interest. When provided with scene photographs or fixed-location video surveillance footage of the crash itself, close-range photogrammetry methods can be useful for locating physical evidence and determining vehicle speeds and locations. Available 3D modeling software can be used to virtually match photographs or fixed-location video surveillance footage. Dash- or vehicle-mounted camera systems are increasingly being used in light vehicles, commercial vehicles, and locomotives. Suppose video footage from a dash camera mounted to one of the vehicles involved in the accident is provided for an accident reconstruction, but EDR data is unavailable for either of the vehicles involved. The literature to date describes using still photos to locate fixed objects, using video taken from stationary camera locations to determine the speed of moving objects, or using video taken from a moving vehicle to locate fixed objects
Manuel, Emmanuel Jay; Mink, Richard; Kruger, Daniel
Photogrammetry is widely used in the automotive and accident reconstruction communities to extract three-dimensional information from photographs. Prior studies in the literature have demonstrated the accuracy of such methods when photographs contain easily identifiable, distinct points; however, it is often desirable to determine measurements for locations where only a seam, edge, or contour line is available. To exploit such details, an analyst can control the direction in which the epipolar line is projected onto the camera plane by strategic selection of photographs. This process constrains the search for the corresponding 3D point to a straight line that can be projected perpendicular to the seam, edge, or contour line. Thus, the goal of this study was to evaluate the modeling accuracy for cases in which an analyst uses epipolar lines in a workflow. To do so, artificial images were created using a computer-generated camera within a computer-assisted drawing environment to allow for a known
Long, Andrea; Noll, Scott Allen
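The epipolar constraint described in the abstract above is conventionally expressed with the fundamental matrix: a point x in one image maps to the line l' = F x in the other image, and any true correspondence must lie on that line. A minimal sketch, assuming F is already known (for example from a prior camera solution); the example matrix below is an illustrative pure-horizontal-translation stereo geometry, for which epipolar lines are horizontal:

```python
import numpy as np

def epipolar_line(F, pt):
    """Epipolar line l' = F @ x (homogeneous line a*u + b*v + c = 0)
    in the second image for pixel pt = (u, v) in the first image.
    Normalized so that (a, b) is a unit vector, making the residual
    line @ [u2, v2, 1] a signed distance in pixels."""
    x = np.array([pt[0], pt[1], 1.0])
    line = F @ x
    return line / np.linalg.norm(line[:2])
```

Searching for a correspondence along this one line, rather than over the whole image, is what lets a seam or contour crossing be located: the line can be chosen (by photograph selection) to cross the feature roughly perpendicularly.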
Accident reconstructionists typically document scenes, evidence, vehicles, or other objects of interest using 3-dimensional laser scanners. These techniques are well documented and widely utilized and can be extremely accurate. However, when the subject of documentation involves surfaces with intricate, highly reflective, and/or complex geometry (motorcycles, wheelchairs, stairs, etc.), commercially available laser scanners can produce dense stray and scattered points that obscure the subject, resulting in point clouds that may require tedious manual registration and/or optimization. This paper compares FARO Focus laser scanner, Pix4DMapper, and Agisoft Photoscan point cloud data to FARO ARM measurements of vehicles, other transportation devices, and architectural features. It was shown that Pix4DMapper and Agisoft Photoscan produced detailed and accurate point clouds relative to the FARO ARM measurements. Additionally, the input data for Pix4DMapper and
Grimes, Clare; Roescher, Todd; Suway, Jeffrey Aaron; Welcher, Judson
The accuracy of a photogrammetric solution relies on the quality of the photographs and the accuracy of pixel locations within them. A photograph with lens distortion can introduce inaccuracies into a photogrammetric solution. Due to the curved nature of a camera’s lens(es), the light passing through the lens and onto the image sensor can have varying degrees of distortion. There are commercially available software titles that rely on a library of known cameras, lenses, and configurations for removing lens distortion. However, to use these software titles, the camera manufacturer, model, lens, and focal length must be known. This paper presents two methodologies for removing lens distortion when camera- and lens-specific information is not available. The first methodology uses linear objects within the photograph to determine the amount of lens distortion present. This method will be referred to as the straight-line method. The second methodology utilizes
Terpstra, Toby; Miller, Seth; Hashemian, Alireza
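Radial lens distortion of the kind addressed above is commonly modeled with the Brown-Conrady polynomial, and once its coefficients are estimated (for example, by requiring imaged straight lines to become straight), points can be undistorted by inverting the model numerically. A minimal sketch in normalized image coordinates; the two-coefficient model and the fixed-point inversion are illustrative assumptions, not the paper's specific method:

```python
def undistort_point(xd, yd, k1, k2, iters=10):
    """Invert the radial (Brown-Conrady) distortion model
        x_d = x_u * (1 + k1*r^2 + k2*r^4),  r^2 = x_u^2 + y_u^2
    by fixed-point iteration. Coordinates are normalized (centered
    at the principal point and divided by the focal length)."""
    xu, yu = xd, yd                      # initial guess: no distortion
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / scale, yd / scale  # refine using current radius
    return xu, yu
```

For the mild coefficient magnitudes typical of consumer lenses, a few iterations recover the undistorted point to well below a pixel.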
Video- and photo-based photogrammetry software has many applications in the accident reconstruction community, including documentation of vehicles and scene evidence. Photogrammetry software has advanced in ease of use, cost, and effectiveness in determining three-dimensional data points from two-dimensional photographs. Contemporary photogrammetry software packages offer an automated solution capable of generating dense point clouds with millions of 3D data points from multiple images. While alternative modern documentation methods exist, including LiDAR technologies such as 3D scanning, which can collect millions of highly accurate points in just a few minutes, the appeal of automated photogrammetry software as a tool for collecting dimensional data lies in its minimal equipment requirements, low equipment cost, and ease of use. This paper evaluates the accuracy and capabilities of four automated photogrammetry-based software programs to accurately create 3D point clouds, by
Terpstra, Toby; Voitel, Tilo; Hashemian, Alireza
This paper presents a methodology for determining the position and speed of objects such as vehicles, pedestrians, or cyclists that are visible in video footage captured with only one camera. Objects are tracked in the video footage based on the change in the pixels that represent the moving object. Commercially available programs such as PFTrack™ and Adobe After Effects™ contain automated pixel-tracking features that record the position of a pixel over time, two-dimensionally, using the video’s resolution as a Cartesian coordinate system. The coordinate data of the pixel over time can then be transformed into three-dimensional data by ray tracing the pixel coordinates onto three-dimensional geometry of the same scene that is visible in the video footage background. This paper explains the automated process of first tracking pixels in the video footage and then remapping the 2D coordinates onto three-dimensional geometry using previously published projection mapping and photogrammetry
Neale, William T.; Hessel, David; Koch, Daniel
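The remapping step described in the abstract above amounts to back-projecting each tracked pixel as a ray and intersecting that ray with the scene geometry; for evidence on a roadway, the geometry is often simply a ground plane. A minimal sketch for a calibrated camera, assuming the intrinsics K, world-to-camera rotation R, and camera center C are known from a camera-matching solution (the example camera below, looking straight down from 10 m, is illustrative):

```python
import numpy as np

def pixel_to_ground(K, R, C, uv, plane_z=0.0):
    """Back-project pixel uv = (u, v) through a calibrated camera
    (intrinsics K, world-to-camera rotation R, camera center C) and
    intersect the resulting ray with the horizontal plane z = plane_z."""
    # ray direction in world coordinates
    d = R.T @ np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    t = (plane_z - C[2]) / d[2]   # ray parameter where the ray meets the plane
    return C + t * d              # 3D world point on the ground plane
```

Repeating this for the tracked pixel coordinates in each frame yields a 3D trajectory, from which velocity and acceleration follow by differentiation over the frame interval.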
Improvements in computer image processing and identification capability have led to programs that can rapidly perform calculations and model the three-dimensional spatial characteristics of objects simply from photographs or video frames. This process, known as structure-from-motion or image-based scanning, is a photogrammetric technique that analyzes features of photographs or video frames from multiple angles to create dense surface models or point clouds. Concurrently, unmanned aircraft systems have gained widespread popularity due to their reliability, low cost, and relative ease of use. These aircraft systems allow for the capture of video or still photographic footage of subjects from unique perspectives. This paper explores the efficacy of using a point cloud created from unmanned aerial vehicle video footage with traditional single-image photogrammetry methods to recreate physical evidence at a crash scene. The unique aspects of photographs or video taken with unmanned aircraft
Carter, Neal; Hashemian, Alireza; Rose, Nathan A.; Neale, William T.C.
In the field of accident reconstruction, a reconstructionist will often inspect a crash scene months or years after a crash has occurred. With this passage of time, important evidence is sometimes no longer present at the scene (e.g., the vehicles involved in the crash, debris on the roadway, tire marks, gouges, paint marks, etc.). When a scene has not been fully documented with a survey by MAIT or the investigating officers, the reconstructionist may need to rely on police, fire department, security camera, or witness photographs. These photos can be used to locate missing evidence by employing traditional photogrammetric techniques. However, traditional techniques require planar surfaces, matched discrete points, or camera matching at the scene. Sometimes it is not possible to survey discrete points or perform camera matching at the scene due to lack of access (the tops of power poles, elevated bridge features, or objects at a great distance) or for safety reasons (interstate
Coleman, Clay; Tandy, Donald; Colborn, Jason; Ault, Nicholas
Total quality is becoming increasingly important for competitiveness. In order to achieve high quality, the requirements must be continuously compared with the results achieved in the process. This is done by means of measurement parameters and comparative values. Acquiring these data requires appropriate measurement methods, which must be continuously developed in order to measure more precisely and achieve even higher quality. Thus, the achieved product quality can be determined both absolutely and relatively. If deviations from the planned quality parameters occur, the operator is able to intervene immediately. The presented procedure is one of the non-contact (optical) measurement methods using CMMs, 3D scanners, and 3D cameras. It is a combination of stereo photography and photogrammetry. The measurement system is designed modularly from any number of camera-computer units, enabling the serial and parallel interconnection of various
Schumann, Christian-Andreas; Forkel, Eric; Klein, Thomas; Gerlach, Dieter; Mueller, Egon
The testing of materials that ablate as a design function requires detailed time history of the ablation process. The rate at which the surface recedes during testing is a critically important measure of the performance of thermal protection system (TPS) materials like heat shields for aerospace vehicles. Photogrammetric recession measurement (PRM) meets these needs by recording the surface of the ablating model during heating in hyperthermal test facilities (arc-jets), using two high-resolution digital cameras capable of recording simultaneously. The cameras are calibrated to yield three-dimensional object space measurement for each stereo pair of images, producing surface recession data over the portion of the model for which both cameras share a view.
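A calibrated stereo pair like the one described above recovers depth from the disparity between corresponding points in the two images. For the simplest case of a rectified pair, depth follows directly from Z = f·B/d; a minimal sketch (the focal length in pixels, baseline in meters, and pixel coordinates below are illustrative values, not the PRM system's calibration):

```python
def stereo_depth(f_px, baseline_m, u_left, u_right):
    """Depth of a point from a rectified stereo pair:
    Z = f * B / disparity, where f is the focal length in pixels,
    B the camera baseline in meters, and the disparity is the
    horizontal pixel offset between the two matched image points."""
    disparity = u_left - u_right   # pixels
    return f_px * baseline_m / disparity
```

Tracking the same surface point in successive stereo frames and differencing the recovered depths over time gives the recession rate of the ablating surface.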
Achieving and sustaining high levels of quality is an essential prerequisite for successful and sustainable development. Describing existing quality levels requires measurement parameters and comparative values that define the state of the art. Product quality is determined by measurements that provide information for comparison with the planned parameters. The measurement results are used to determine the achieved quality both absolutely and relatively. Products change shape and form throughout the product lifecycle. The more accurately and reliably form and shape can be measured, the better the product quality can be defined. The methods of 3D measurement are divided into contact (mechanical) and non-contact (optical) methods using CMMs, 3D scanners, and 3D cameras. Advanced methods exploit, for example, stereo photography. Accordingly, a scanner technology has been developed based on 3D surface stereo photography. The basis for the mathematical processing is the
Schumann, Christian-Andreas; Mueller, Egon; Gerlach, Dieter; Tittmann, Claudia; Schumann, Martin-Andreas
This paper examines a method for generating a scaled three-dimensional computer model of an accident scene from video footage. This method, which combines the previously published methods of video tracking and camera projection, includes automated mapping of physical evidence through rectification of each frame. Video tracking is a photogrammetric technique for obtaining three-dimensional data from a scene using video and was described in a 2004 publication titled “A Video Tracking Photogrammetry Technique to Survey Roadways for Accident Reconstruction” (SAE 2004-01-1221). That paper described a method for generating a three-dimensional computer model of a roadway by using video of a drive-through of an accident scene and processing this video footage through available video tracking software [1, 2]. The benefit of being able to drive through an accident scene to collect data lies in the speed of such a method, but also in safety, as some accident areas are too heavy with traffic
Neale, William T.; Marr, James; Hessel, David
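The per-frame rectification step mentioned in the abstract above relies on a plane-to-plane homography, which can be estimated from four (or more) image/ground correspondences with the direct linear transform. A minimal sketch, assuming exact correspondences on a planar roadway surface (the point values in the usage note are illustrative):

```python
import numpy as np

def homography_4pt(src, dst):
    """Direct Linear Transform from four point correspondences
    (x, y) -> (X, Y): returns H such that dst ~ H @ src in
    homogeneous coordinates (H is defined up to scale)."""
    A = []
    for (x, y), (X, Y) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y, -X])
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y, -Y])
    # null vector of A (smallest singular vector) holds the 9 entries of H
    _, _, Vt = np.linalg.svd(np.array(A, float))
    return Vt[-1].reshape(3, 3)

def apply_h(H, pt):
    """Map a 2D point through homography H (homogeneous divide)."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]
```

With H estimated for a frame, every pixel on the road plane can be remapped to scaled ground coordinates, which is what allows tire marks and other planar evidence to be mapped frame by frame.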