Video analysis plays a major role in many forensic fields. Many articles, publications, and presentations have covered the importance and difficulty of properly establishing frame timing. In many cases, the analyst is given video files that do not contain native metadata. In other cases, the files contain video recordings of the surveillance playback monitor, which eliminates all original metadata from the recording. These “video of video” recordings prevent an analyst from determining frame timing using metadata from the original file. However, within many of these video files, timestamp information is visually imprinted onto each frame. Analyses that rely on the timing of events captured in video may benefit from these imprinted timestamps, but for forensic purposes, it is important to establish their accuracy and reliability. The purpose of this research is to examine the accuracy of these timestamps and to establish whether they can be used to determine the timing …
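One way an analyst might sanity-check imprinted timestamps is to fit the timestamp values read from the frames against frame indices and compare the implied frame rate with the file's nominal rate. A minimal sketch in Python; the function name and the frame/timestamp pairs are illustrative, not from the paper:

```python
def estimate_fps(stamps):
    """stamps: (frame_index, imprinted_seconds) pairs taken at the
    frames where the imprinted second rolls over.
    Returns the effective frame rate from a least-squares line fit."""
    n = len(stamps)
    mean_i = sum(i for i, _ in stamps) / n
    mean_t = sum(t for _, t in stamps) / n
    num = sum((i - mean_i) * (t - mean_t) for i, t in stamps)
    den = sum((i - mean_i) ** 2 for i, _ in stamps)
    sec_per_frame = num / den
    return 1.0 / sec_per_frame

# Hypothetical rollover observations from a "video of video" file:
observed = [(0, 0), (15, 1), (30, 2), (44, 3), (59, 4)]
fps = estimate_fps(observed)   # roughly 14.7 frames per second
```

A large gap between the fitted rate and the container's nominal rate, or a poor fit, would flag the imprinted timestamps as unreliable for timing analysis.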
This paper introduces an innovative digital solution for the categorization and analysis of fractures in automotive components, leveraging Artificial Intelligence and Machine Learning (AI/ML) technologies. The proposed system automates the fracture analysis process, improving speed, reliability, and accessibility for users with varying levels of expertise. The platform enables users to upload images of fractured parts, which are then processed by an AI/ML engine. The engine employs an image classification model to identify the type of fracture and a segmentation model to detect and analyze the fracture's direction. The segmentation model accurately predicts cracks in the images, providing detailed insight into the direction and progression of the fractures. Additionally, the solution offers an intuitive interface for stakeholders to review past analyses and upload new images for examination. The AI/ML engine further examines the origin of the fracture, its progression pattern, and the …
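As an illustration of the kind of post-processing such a pipeline performs, the dominant crack direction can be estimated from a binary segmentation mask as the principal axis of the crack pixels. This is a hedged sketch (the paper's actual method is not specified) using image second moments on synthetic mask data:

```python
import math

def crack_direction_deg(pixels):
    """pixels: (row, col) coordinates of crack pixels from a binary
    segmentation mask. Returns the orientation of the principal axis
    in degrees, in [0, 180), via image second moments."""
    n = len(pixels)
    mr = sum(r for r, _ in pixels) / n
    mc = sum(c for _, c in pixels) / n
    mu_rr = sum((r - mr) ** 2 for r, _ in pixels) / n
    mu_cc = sum((c - mc) ** 2 for _, c in pixels) / n
    mu_rc = sum((r - mr) * (c - mc) for r, c in pixels) / n
    # Orientation of the largest eigenvector of the 2x2 covariance matrix.
    angle = 0.5 * math.atan2(2.0 * mu_rc, mu_cc - mu_rr)
    return math.degrees(angle) % 180.0

# Synthetic mask of a crack running diagonally across the image:
mask = [(i, i + (i % 2)) for i in range(50)]
theta = crack_direction_deg(mask)   # close to 45 degrees
```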
Image-based machine learning (ML) methods are increasingly transforming the field of materials science, offering powerful tools for the automatic analysis of microstructures and failure mechanisms. This paper provides an overview of the latest advancements in ML techniques applied to materials microstructure and failure analysis, with a particular focus on the automatic detection of porosity and oxide defects, as well as microstructural features such as dendrite arms and eutectic phases, in aluminum castings. By leveraging image-based data, such as metallographic and fractographic images, ML models can identify patterns that are difficult to detect through conventional methods. The integration of convolutional neural networks (CNNs) and advanced image processing algorithms not only accelerates the analysis process but also improves accuracy by reducing subjectivity in interpretation. Key studies and applications are further reviewed to highlight the benefits, challenges, and future directions of …
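For context on the conventional baseline such ML models are compared against, porosity in a thresholded micrograph can be quantified by connected-component counting. A minimal sketch with a synthetic image; the threshold and data are illustrative, not from the reviewed studies:

```python
def count_pores(img, thresh=50):
    """img: 2-D list of grayscale values; pores are dark (< thresh).
    Returns (pore_count, porosity_fraction) via 4-connected labeling."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    pores, pore_px = 0, 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] < thresh and not seen[r][c]:
                pores += 1                       # new connected component
                stack = [(r, c)]
                seen[r][c] = True
                while stack:                     # flood fill the pore
                    y, x = stack.pop()
                    pore_px += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and img[ny][nx] < thresh and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return pores, pore_px / (rows * cols)

# Synthetic 6x6 micrograph with two dark pores:
img = [[200] * 6 for _ in range(6)]
img[1][1] = img[1][2] = 10   # pore 1 (2 px)
img[4][4] = 10               # pore 2 (1 px)
pores, frac = count_pores(img)
```

A CNN-based detector replaces the fixed threshold with learned features, which is where the reduced subjectivity comes from.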
Light Detection and Ranging (LiDAR) is a promising type of sensor for autonomous driving that uses laser technology to provide perception and accurate distance measurements of obstacles in the vehicle's path. In recent years, there has also been a rise in the implementation of LiDARs in modern and autonomous vehicles to aid self-driving features. However, navigating adverse weather remains one of the biggest challenges to achieving Level 5 full autonomy due to sensor soiling, leading to performance degradation that can pose safety hazards. When driving in rain, raindrops impact the LiDAR sensor assembly and attenuate signals as the light beams undergo reflections and refractions. Consequently, signal detectability, accuracy, and intensity are significantly affected. To date, few studies have performed objective evaluations of LiDAR performance, and most have faced limitations that hindered realistic, controllable, and repeatable testing. Therefore, this …
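The attenuation mechanism described here is often approximated with a Beer-Lambert model, in which an extinction coefficient grows with rain rate and the beam is attenuated over the two-way path. A hedged sketch; the power-law constants k and n are illustrative placeholders, not values from this study:

```python
import math

def two_way_transmittance(range_m, rain_mm_per_h, k=0.4, n=0.6):
    """Fraction of optical power surviving the round trip to a target
    at range_m through rain, via Beer-Lambert: T = exp(-2 * alpha * R).
    alpha (per km) follows a power law in rain rate; k and n are
    illustrative constants, not measured values."""
    alpha_per_km = k * rain_mm_per_h ** n
    return math.exp(-2.0 * alpha_per_km * (range_m / 1000.0))

light = two_way_transmittance(100.0, 5.0)    # drizzle
heavy = two_way_transmittance(100.0, 50.0)   # downpour: far weaker return
```

The model shows why detectability drops fastest for distant targets in heavy rain: the loss compounds exponentially with both range and extinction.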
The rapid development of autonomous vehicles necessitates rigorous testing under diverse environmental conditions to ensure their reliability and safety. One of the most challenging scenarios for both human and machine vision is navigating through rain. This study introduces the Digitrans Rain Testbed, an innovative outdoor rain facility specifically designed to test and evaluate automotive sensors under realistic and controlled rain conditions. The rain plant features a wetted area of 600 square meters and a sprinkled rain volume of 600 cubic meters, providing a comprehensive environment for rigorously assessing the performance of autonomous vehicle sensors. Rain poses a significant challenge due to the complex interaction of light with raindrops, leading to phenomena such as scattering, absorption, and reflection, which can severely impair sensor performance. Our facility replicates various rain intensities and conditions, enabling thorough testing of Radar, Lidar, and Camera …
To meet the requirements of high-precision and stable positioning for autonomous vehicles in complex urban environments, this paper designs and develops a multi-sensor fusion intelligent driving hardware and software system based on BDS, IMU, and LiDAR. The system aims to fill the current gap in hardware platform construction and practical verification within multi-sensor fusion technology. Although multi-sensor fusion positioning algorithms have made significant progress in recent years, their application and validation on real hardware platforms remain limited. To address this issue, the system integrates dual BDS antennas, an IMU, and LiDAR sensors, enhancing signal reception stability through an optimized layout and improving the hardware structure to support real-time data acquisition and processing in complex environments. The system's software design is based on factor graph optimization algorithms, which use the global positioning data provided by BDS to constrain …
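The fusion idea can be illustrated in one dimension: absolute BDS position fixes and tighter relative odometry constraints enter as weighted factors, and the poses are obtained by solving the resulting least-squares problem, which is the linear special case of factor graph optimization. A toy sketch, not the paper's implementation; all values are synthetic:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Factors: (coefficients on poses, measurement, weight = 1/sigma^2).
factors = [
    ({0: 1.0}, 0.0, 1.0),              # BDS fix on pose 0 (sigma = 1 m)
    ({2: 1.0}, 2.2, 1.0),              # BDS fix on pose 2
    ({0: -1.0, 1: 1.0}, 1.0, 100.0),   # odometry pose0 -> pose1 (sigma = 0.1 m)
    ({1: -1.0, 2: 1.0}, 1.0, 100.0),   # odometry pose1 -> pose2
]

# Assemble the normal equations (J^T W J) x = J^T W z and solve.
n_poses = 3
A = [[0.0] * n_poses for _ in range(n_poses)]
b = [0.0] * n_poses
for coeffs, meas, w in factors:
    for i, ai in coeffs.items():
        b[i] += w * ai * meas
        for j, aj in coeffs.items():
            A[i][j] += w * ai * aj
poses = solve(A, b)   # odometry pulls the fused track toward uniform spacing
```

The precise relative factors dominate, so the solved trajectory keeps near-uniform 1 m spacing while the noisy absolute fixes anchor it globally; real systems solve the same structure over thousands of poses with nonlinear factors.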
Intelligent Structural Health Monitoring (SHM) of bridges is a technology that combines advanced sensor technology and professional bridge engineering knowledge with machine vision and other intelligent methods to continuously monitor and evaluate the condition of bridge structures. One application of machine learning in bridge SHM is damage detection and quantification, whereby changes in bridge condition can be analyzed efficiently and accurately, ensuring stable operational performance throughout the bridge's lifecycle. However, although machine vision can effectively identify and quantify existing damage, it still lacks accuracy in predicting future damage trends from real-time data. Such a shortfall may lead to potential safety hazards being addressed late, accelerating damage development and threatening structural safety. To tackle this problem, this study designs a deep …
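A simple classical baseline for the trend-prediction gap identified here is to fit a damage indicator, such as crack width, to an exponential growth law via log-linear regression and extrapolate. A hedged sketch with synthetic monitoring data; the study's own approach is a deep model, not this:

```python
import math

def fit_exp_trend(ts, ws):
    """Fit w(t) = a * exp(b * t) by least squares on log(w)."""
    n = len(ts)
    ys = [math.log(w) for w in ws]
    mt = sum(ts) / n
    my = sum(ys) / n
    b = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
         / sum((t - mt) ** 2 for t in ts))
    a = math.exp(my - b * mt)
    return a, b

# Synthetic crack-width readings (mm) from periodic inspections:
ts = [0, 1, 2, 3, 4]                    # months
ws = [1.00, 1.22, 1.49, 1.82, 2.23]     # roughly exp(0.2 * t)
a, b = fit_exp_trend(ts, ws)
forecast = a * math.exp(b * 10)         # projected width at month 10
```

Crossing a threshold in such a forecast could trigger inspection well before the damage becomes critical, which is the early-warning behavior the deep model aims to improve on.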