Camera Alignment System for Passive Safety Crash Tests
Technical Paper
2017-01-1675
ISSN: 0148-7191, e-ISSN: 2688-3627
Language: English
Abstract
The use of high-speed digital cameras to acquire relevant information is now standard practice for laboratories and facilities working in passive safety crash testing. The information recorded by the cameras is used to develop and improve vehicle designs in order to make them safer. Measurements such as velocities, accelerations, and distances are computed from the high-speed images captured during the tests and represent valuable data for post-crash analysis. Placing the cameras in exactly the same position is therefore a key factor in comparing the values extracted from the images of tests carried out within a long-term passive safety project. However, since working with several customers involves a large number of different cars and tests, crash facilities must readapt for every test mode, which makes it difficult to reproduce the correct and precise position of the high-speed cameras throughout the same project. The system developed in this work therefore uses an image processing algorithm to locate the cameras: images are taken from at least four sides of the laboratory with multiple cameras, avoiding occlusion problems, and the binary square markers attached to each high-speed camera housing are recognized. Furthermore, relevant information such as serial numbers, focal lengths, and/or calibration files can be stored in each camera's fiducial marker in order to improve test repeatability.
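The core idea described above, reading the bit pattern of a binary square marker and using the decoded ID as a key into stored camera metadata, can be sketched as follows. This is an illustrative example, not the authors' implementation: the 4×4 bit grid, the `decode_marker` helper, and the `CAMERA_REGISTRY` entries (serial number, focal length, calibration file name) are all hypothetical. Decoding is made rotation-invariant by trying all four orientations of the grid, since the physical orientation of a marker on a camera housing is not known in advance.

```python
# Illustrative sketch of binary square (fiducial) marker decoding.
# Grid size, IDs, and registry contents are assumptions for this example.

def bits_to_id(bits):
    """Pack a flattened list of 0/1 bits into an integer marker ID."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

def rotate(grid):
    """Rotate a square bit grid 90 degrees clockwise."""
    n = len(grid)
    return [[grid[n - 1 - c][r] for c in range(n)] for r in range(n)]

def decode_marker(grid, valid_ids):
    """Try all four orientations of the inner bit grid; return the first
    ID found in the known dictionary, or None if the marker is unknown."""
    for _ in range(4):
        marker_id = bits_to_id([b for row in grid for b in row])
        if marker_id in valid_ids:
            return marker_id
        grid = rotate(grid)
    return None

# Hypothetical metadata keyed by marker ID: serial number, focal length
# (mm), and the calibration file of each high-speed camera housing.
CAMERA_REGISTRY = {
    0b1010_0110_0011_1001: {"serial": "HS-0421", "focal_mm": 35,
                            "calibration": "hs0421_cal.xml"},
}

grid = [[1, 0, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 1],
        [1, 0, 0, 1]]
marker_id = decode_marker(grid, CAMERA_REGISTRY)
print(CAMERA_REGISTRY[marker_id]["serial"])  # HS-0421
```

In practice a marker system of this kind also adds a parity or checksum layer so that bit errors and rotation ambiguities cannot map one marker onto another valid ID; the lookup itself stays a plain dictionary access as above.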
Citation
Mensa, G., Parera, N., and Fornells, A., "Camera Alignment System for Passive Safety Crash Tests," SAE Technical Paper 2017-01-1675, 2017, https://doi.org/10.4271/2017-01-1675.
References
- Abderyim, P., Halabi, O., Fujimoto, T., and Chiba, N., "Accurate and Efficient Drawing Method for Laser Projection," The Journal of the Society for Art and Science 7(4):155-169, 2008.
- Mason, K., "Laser Projection Systems Improve Composite Ply Placement," High-Performance Composites 3(1), 2004.
- Wang, Y., "Distributed Multi-object Tracking with Multi-camera Systems Composed of Overlapping and Non-overlapping Cameras," Electrical Engineering Theses and Dissertations 47, 2013.
- Kato, H. and Billinghurst, M., "Marker Tracking and HMD Calibration for a Video-Based Augmented Reality Conferencing System," International Workshop on Augmented Reality, 85-94, 1999.
- van Krevelen, D.W.F. and Poelman, R., "A Survey of Augmented Reality Technologies, Applications and Limitations," The International Journal of Virtual Reality 9(2):1-20, 2010.
- Bramberger, M., Doblander, A., Maier, A., Rinner, B., and Schwabach, H., "Distributed Embedded Smart Cameras for Surveillance Applications," IEEE Computer 39:68-75, 2006.
- Han, P. and Zhao, G., "L-split Marker for Augmented Reality in Aircraft Assembly," Optical Engineering 55(4):043110, 2016.
- Mikolajczyk, K. and Schmid, C., "Scale & Affine Invariant Interest Point Detectors," International Journal of Computer Vision 60(1):63-86, 2004.
- Rice, A.C., Harle, R.K., and Beresford, A.R., "Analysing Fundamental Properties of Marker-Based Vision System Designs," Pervasive and Mobile Computing, 2006.
- Chang, F., Chen, C.-J., and Lu, C.-J., "A Linear-Time Component-Labeling Algorithm Using Contour Tracing Technique," Computer Vision and Image Understanding, 2004.