Browse Topic: Optics

Items (9,964)
With the growing diversification of modern urban transportation options, such as delivery robots, patrol robots, service robots, E-bikes, and E-scooters, sidewalks have gained newfound importance as critical features of High-Definition (HD) Maps. Since these emerging modes of transportation are designed to operate on sidewalks to ensure public safety, there is an urgent need for efficient and optimal sidewalk routing plans for autonomous driving systems. This paper proposes a sidewalk route planning method using a cost-based A* algorithm and a mini-max-based objective function for optimal routes. The proposed cost-based A* route planning algorithm can generate different routes based on the costs of different terrains (sidewalks and crosswalks), and the objective function can produce an efficient route for different routing scenarios or preferences while considering both travelling distance and safety levels. This paper’s work is meant to fill the gap in efficient route planning for
Bao, Zhibin; Lang, Haoxiang; Lin, Xianke
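The cost-based A* idea above can be sketched in a few lines. The grid labels, terrain costs, and 4-connected moves below are illustrative assumptions, not the paper's calibrated values:

```python
import heapq

# Hypothetical terrain costs: sidewalks are cheap, crosswalks cost more
# to penalize road crossings. None marks an impassable cell.
TERRAIN_COST = {"sidewalk": 1.0, "crosswalk": 3.0}

def a_star(grid, start, goal):
    """Cost-based A* on a grid of terrain labels, 4-connected moves.

    Each move is charged the terrain cost of the destination cell, so
    routes naturally avoid expensive terrain when a detour is cheaper.
    """
    rows, cols = len(grid), len(grid[0])

    def h(n):  # Manhattan distance: admissible since min step cost >= 1
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

    open_set = [(h(start), 0.0, start, [start])]
    best = {start: 0.0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] is not None:
                g2 = g + TERRAIN_COST[grid[r][c]]
                if g2 < best.get((r, c), float("inf")):
                    best[(r, c)] = g2
                    heapq.heappush(
                        open_set, (g2 + h((r, c)), g2, (r, c), path + [(r, c)])
                    )
    return None, float("inf")
```

Varying `TERRAIN_COST` (e.g. raising the crosswalk weight) is what makes the same planner emit different routes for different safety preferences.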
Video analysis plays a major role in many forensic fields. Many articles, publications, and presentations have covered the importance and difficulty in properly establishing frame timing. In many cases, the analyst is given video files that do not contain native metadata. In other cases, the files contain video recordings of the surveillance playback monitor which eliminates all original metadata from the video recording. These “video of video” recordings prevent an analyst from determining frame timing using metadata from the original file. However, within many of these video files, timestamp information is visually imprinted onto each frame. Analyses that rely on timing of events captured in video may benefit from these imprinted timestamps, but for forensic purposes, it is important to establish the accuracy and reliability of these timestamps. The purpose of this research is to examine the accuracy of these timestamps and to establish if they can be used to determine the timing
Molnar, Benjamin; Terpstra, Toby; Voitel, Tilo
This paper explores the integration of two deep learning models that are currently being used for object detection, specifically Mask R-CNN and YOLOX, for two distinct driving environments: urban cityscapes and highway settings. The hypothesis underlying this work is that different methods of object detection will work best in different driving environments, due to the differences in their unique strengths as well as the key differences in those driving environments. Some of these differences in the driving environment include varying traffic densities, diverse object classes, and differing scene complexities, including specific differences such as the types of signs present, the presence or absence of stoplights, and the limited-access nature of highways as compared to city streets. As part of this work, a scene classifier has also been developed to categorize the driving context into the two categories of highway and urban driving, in order to allow the overall object detection
Patel, Krunal; Peters, Diane
Dash cameras (dashcams) can provide collision reconstructionists with quantifiable vehicle position and speed estimates. These estimates are achieved by tracking 2D video features with camera-tracking software to solve for the time history of camera position, and speed can then be calculated from the position-time history. Not all scenes have the same geometric features in quality or abundance. In this study, we compared the vehicle position and derived-speed estimates from dashcam video for different numbers and spatial distributions of tracked features that mimicked the continuum between barren environments and feature-rich environments. We used video from a dashcam mounted in a vehicle undergoing straight-line emergency braking. The surrounding environment had abundant trackable features on both sides of the road, including road markings, streetlights, signs, trees, and buildings. We first created a reference solution using SynthEyes, a 3D camera- and object-tracking program, and
Young, Cole; Ahrens, Matthew; Flynn, Thomas; Siegmund, Gunter P.
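Deriving speed from a camera-tracked position history, as described above, reduces to numerical differentiation. A minimal sketch using central differences over a per-frame 3D position track (the frame rate and metre units are assumptions):

```python
def speeds_from_positions(positions, fps):
    """Estimate speed at each interior frame from a camera-tracking
    solve: `positions` is a list of (x, y, z) camera positions in
    metres, one per frame; returns speeds in m/s via central
    differences, which are less noisy than one-sided differences."""
    dt = 1.0 / fps
    speeds = []
    for i in range(1, len(positions) - 1):
        # velocity components over the two-frame window around frame i
        v = [(positions[i + 1][k] - positions[i - 1][k]) / (2.0 * dt)
             for k in range(3)]
        speeds.append(sum(c * c for c in v) ** 0.5)
    return speeds
```

In practice the position solve itself is noisy, so reconstructionists typically smooth or fit the position-time history before differentiating.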
This paper introduces an innovative digital solution for the categorization and analysis of fractures in automotive components, leveraging Artificial Intelligence and Machine Learning (AI/ML) technologies. The proposed system automates the fracture analysis process, enhancing speed, reliability, and accessibility for users with varying levels of expertise. The platform enables users to upload images of fractured parts, which are then processed by an AI/ML engine. The engine employs an image classification model to identify the type of fracture and a segmentation model to detect and analyze the direction of the fracture. The segmentation model accurately predicts cracks in the images, providing detailed insights into the direction and progression of the fractures. Additionally, the solution offers an intuitive interface for stakeholders to review past analyses and upload new images for examination. The AI/ML engine further examines the origin of the fracture, its progression pattern, and the
Sahoo, Priyabrata; Rawat, Sudhanshu; Garg, Vipin; Naidu, Garima; Sharma, Amit; Narula, Rahul; Bindra, Ritesh; Khera, Pankaj; Goel, Pooja; Mondal, Arup
Vehicle ADAS systems comprise two main functions: driving and parking. The most common form of vehicle damage that goes unnoticed, with an unidentified cause, is parking damage. A vehicle, once parked at a certain location, may be damaged without the user's knowledge. In this work, we developed a solution that not only pre-warns the driver but also prepares the vehicle beforehand if it suspects damage may occur. This eliminates the latency between damage and information capture, detects small damages such as scratches, classifies the type of damage, and informs the user beforehand. This solution differs from existing ones, which inform the user about scratches/damages but are expensive, have high response times, and capture damage information only after the damage has occurred. The solution consists of the following check blocks: Precondition, Sensor Control, and Action Module. The Precondition Module observes the vehicle
Debnath, Sarnab; Patil, Prasad; Belur Subramanya, Sheshagiri; Govinda, Shiva Prasad
Accurate reconstruction of vehicle collisions is essential for understanding incident dynamics and informing safety improvements. Traditionally, vehicle speed from dashcam footage has been approximated by estimating the time duration and distance traveled as the vehicle passes between reference objects. This method limits the resolution of the speed profile to an average speed over given intervals and reduces the ability to determine moments of acceleration or deceleration. A more detailed speed profile can be calculated by solving for the vehicle’s position in each video frame; however, this method is time-consuming and can introduce spatial and temporal error and is often constrained by the availability of external trackable features in the surrounding environment. Motion tracking software, widely used in the visual effects industry to track camera positions, has been adopted by some collision reconstructionists for determining vehicle speed from video. This study examines the
Perera, Nishan; Griffiths, Harrison; Prentice, Greg
Photogrammetry is a commonly used type of analysis in accident reconstruction. It allows the location of physical evidence, as shown in photographs and video, and the position and orientation of vehicles, other road users, and objects to be quantified. Lens distortion is an important consideration when using photogrammetry. Failure to account for lens distortion can result in inaccurate spatial measurements, particularly when elements of interest are located toward the edges and corners of images. Depending on whether the camera properties are known or unknown, various methods for removing lens distortion are commonly used in photogrammetric analysis. However, many of these methods assume that lens distortion is the result of a spherical lens or, more rarely, is solely due to distortion caused by other known lens types and has not been altered algorithmically by the camera. Today, several cameras on the market algorithmically alter images before saving them. These camera systems use
Pittman, Kathleen; Mockensturm, Eric; Buckman, Taylor; White, Kirsten
Blistering in aesthetic parts poses a significant challenge: it affects overall appearance and erodes brand image from the customer's perspective, and blister defects disrupt painting-line efficiency, resulting in increased rework and rejection rates. This paper investigates the causes and effects of blistering, particularly in the context of the internal soundness of aluminum castings, emphasizing the crucial role of Computed Tomography in defect analysis. Computed Tomography is an advanced Non-Destructive Testing technique used to examine the internal soundness of a material. This study follows a structured 7-step QC story approach, from problem identification to standardization, to accurately identify the root cause and implement corrective actions to eliminate the blister defect. The findings reveal a strong link between internal soundness and surface quality. Based on the root cause, changes in the casting process and die design were made to improve internal soundness, leading to reduced
D, Balachandar; Nataraj, Naveenkumar
Mechanical analysis was performed of a non-pneumatic tire, specifically a Michelin Tweel size 18x8.5N10, that can be used up to a speed of 40 km/h. A Parylene-C coating was added to the rubber spoke specimens before performing both microscopic imaging and cyclic tensile testing. Initially, standard ASTM D412 specimens type C and A were cut from the wheel spokes, and then the specimens were subjected to deposition of a nanomaterial. The surfaces of the specimens were prepared in different ways to examine the influence on the material behavior including the stiffness and hysteresis. Microscopic imaging was performed to qualitatively compare the surfaces of the coated and uncoated specimens. Both coated and uncoated spoke specimens of each standard type were then subjected to low-rate cyclic tensile tests up to 500% strain. The results showed that the Parylene-C coating did not affect the maximum stress in the specimens, but did increase the residual strain. Type C specimens also had a
Collings, William; Li, Chengzhi; Schwarz, Jackson; Lakhtakia, Akhlesh; Bakis, Charles; El-Sayegh, Zeinab; El-Gindy, Moustafa
Image-based machine learning (ML) methods are increasingly transforming the field of materials science, offering powerful tools for automatic analysis of microstructures and failure mechanisms. This paper provides an overview of the latest advancements in ML techniques applied to materials microstructure and failure analysis, with a particular focus on the automatic detection of porosity and oxide defects and microstructure features such as dendritic arms and eutectic phase in aluminum casting. By leveraging image-based data, such as metallographic and fractographic images, ML models can identify patterns that are difficult to detect through conventional methods. The integration of convolutional neural networks (CNNs) and advanced image processing algorithms not only accelerates the analysis process but also improves accuracy by reducing subjectivity in interpretation. Key studies and applications are further reviewed to highlight the benefits, challenges, and future directions of
Akbari, Meysam; Wang, Andy; Wang, Qigui; Yan, Cuifen
Lane-keeping is critical for SAE Level 3+ autonomous vehicles, requiring rigorous validation and end-to-end interpretability. All Level 3 vehicles recently approved in the U.S. are equipped with lidar, likely to accelerate active safety. Lidar offers direct distance measurements, enabling rule-based algorithms, in contrast to camera-based methods, which rely on statistical methods for perception. Furthermore, lidar can support a more comprehensive and detailed approach to studying lane-keeping. This paper proposes a module perceiving oncoming vehicle behavior, as part of a larger behavior-tree structure for adaptive lane-keeping using data from a lidar sensor. The complete behavior tree would include road curvature, speed limits, road types (rural, urban, interstate), and the proximity of objects or humans to lane markings. It also accounts for lane-keeping behavior, the type of adjacent and opposing vehicles, lane occlusion, and weather conditions. The algorithm was evaluated using
Soloiu, Valentin; Mehrzed, Shaen; Kroeger, Luke; Pierce, Kody; Sutton, Timothy; Lange, Robin
Off-road vehicles are required to traverse a variety of pavement environments, including asphalt roads, dirt roads, sandy terrains, snowy landscapes, rocky paths, brick roads, and gravel roads, over extended periods while maintaining stable motion. Consequently, the precise identification of pavement types, road unevenness, and other environmental information is crucial for intelligent decision-making and planning, as well as for assessing traversability risks in the autonomous driving functions of off-road vehicles. Compared to traditional perception solutions such as LiDAR and monocular cameras, stereo vision offers advantages like a simple structure, wide field of view, and robust spatial perception. However, its accuracy and computational cost in estimating complex off-road terrain environments still require further optimization. To address this challenge, this paper proposes a terrain environment estimating method for off-road vehicle anticipated driving area based on stereo
Zhao, Jian; Zhang, Xutong; Hou, Jie; Chen, Zhigang; Zheng, Wenbo; Gao, Shang; Zhu, Bing; Chen, Zhicheng
Apple’s mobile phone LiDAR capabilities can be used with multiple software applications to capture the geometry of vehicles and smaller objects. The results from different software have been previously researched and compared to traditional ground-based LiDAR. However, results were inconsistent across software applications, with some software being more accurate and others being less accurate. (Technical Paper 2023-01-0614. Miller, Hashemian, Gillihan, Benes.) This paper builds upon existing research by utilizing the updated LiDAR hardware that Apple has added to its iPhone 15 smartphone lineup. This new hardware, in combination with the software application PolyCam, was used to scan a variety of crashed vehicles. These crashed vehicles were also scanned using FARO 3D scanners and Leica RTC360 scanners, which have been researched extensively for their accuracy. The PolyCam scans were compared to FARO and Leica scans to determine accuracy for point location and scaling. Previous
Miller, Seth Higgins; Stogsdill, Michael; McWhirter, Seth
Deliberate modifications to infrastructure can significantly enhance machine vision recognition of road sections designed for Vulnerable Road Users, such as green bike lanes. This study evaluates how green bike lanes, compared to unpainted lanes, enhance machine vision recognition and vulnerable road user safety by keeping vehicles at a safe distance and preventing encroachment into designated bike lanes. Conducted at the American Center for Mobility, this study utilizes a vehicle equipped with a front-facing camera to assess green bike lane recognition capabilities across various environmental conditions including dry daytime, dry nighttime, rain, fog, and snow. Data collection involved gathering a comprehensive dataset under diverse conditions and generating masks for lane markings to perform comparative analysis for training Advanced Driver Assistance Systems. Quality measurement and statistical analysis are used to evaluate the effectiveness of machine vision recognition using
Ponnuru, Venkata Naga Rithika; Das, Sushanta; Grant, Joseph; Naber, Jeffrey; Bahramgiri, Mojtaba
Shadow positions can be useful in determining the time of day that a photograph was taken and determining the position, size, and orientation of an object casting a shadow in a scene. Astronomical equations can predict the location of the sun relative to the earth, and therefore the position of shadows cast by objects, based on the location’s latitude and longitude as well as the date and time. 3D computer software includes these calculations as a part of their built-in sun systems. In this paper, the authors examine the sun system in the 3D modeling software 3ds Max to determine its accuracy for use in accident reconstruction. A parking lot was scanned using a FARO LiDAR scanner to create a point cloud of the environment. A camera was then set up on a tripod at the environment, and photographs were taken at various times throughout the day from the same location. This environment was 3D modeled in 3ds Max based on the point cloud, and the sun system in 3ds Max was configured using the
Barreiro, Evan; Erickson, Michael; Smith, Connor; Carter, Neal; Hashemian, Alireza
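The astronomical prediction described above can be approximated with the standard declination and hour-angle formulas. This is a rough sketch only: it assumes local *solar* time and omits the equation-of-time, longitude, and refraction corrections that a full sun system such as the one in 3ds Max applies:

```python
from math import sin, cos, asin, radians, degrees, pi

def solar_elevation(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation angle in degrees.

    Uses the common approximations:
      declination  delta ~ -23.44 deg * cos(2*pi/365 * (N + 10))
      hour angle   H = 15 deg * (solar_hour - 12)
      sin(elev)    = sin(lat)sin(delta) + cos(lat)cos(delta)cos(H)
    Accurate to roughly a degree; shadow-matching work needs a
    higher-order algorithm.
    """
    decl = radians(-23.44) * cos(2.0 * pi / 365.0 * (day_of_year + 10))
    hour_angle = radians(15.0 * (solar_hour - 12.0))
    lat = radians(lat_deg)
    return degrees(asin(sin(lat) * sin(decl)
                        + cos(lat) * cos(decl) * cos(hour_angle)))
```

The corresponding azimuth formula completes the shadow direction; together they determine the shadow a vertical object of known height casts at a given date and time.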
In this study, we introduce RGB2BEV-Net, an end-to-end pipeline that extends traditional BEV segmentation models by utilizing raw RGB images with Bird’s Eye View (BEV) generation. While previous work primarily focused on pre-segmented images to generate corresponding BEV maps, our approach expands this by collecting RGB images alongside their affiliated segmentation masks and BEV representations. This enables direct input of RGB camera sensors into the pipeline, reflecting real-world autonomous driving scenarios where RGB cameras are commonly used as sensors, rather than relying on pre-segmented images. Our model processes four RGB images through a segmentation layer before converting them into a segmented BEV, implemented in the PyTorch framework after being adapted from an original implementation that utilized a different framework. This adaptation was necessary to improve compatibility and ensure better integration of the entire system within autonomous vehicle applications. We
Hossain, Sabir; Lin, Xianke
Toyota vehicles equipped with Toyota Safety Sense (TSS) can record detailed information surrounding various driving events, including crashes. Often, this data is employed in accident reconstruction. TSS data is comprised of three main categories: Vehicle Control History (VCH), Freeze Frame Data (FFD), and image records. Because the TSS data resides in multiple Electronic Control Units (ECUs), the data recording is susceptible to catastrophic power loss. In this paper, the effects of a sudden power loss on the VCH, FFD, and images are studied. Events are triggered on a TSS 2.5+ equipped vehicle by driving toward a stationary target. After system activation, a total power loss is induced at various delays after activation. Results show that there is a minimum time required after system initiation in order to obtain full VCH, FFD, and image records. Power losses occurring within this time frame produce incomplete records. Data accuracy is unaffected, even in partial records.
Getz, Charles; DiSogra, Matthew; Spivey, Heath; Johnson, Taylor; Patel, Amit
Hydro-pneumatic suspension is widely used due to its favorable nonlinear stiffness and damping characteristics. However, with the presence of parameter uncertainties and high nonlinearities in the hydro-pneumatic suspension system, the effectiveness of the controller is often suboptimal in practical applications. To mitigate the influence of these issues on the control performance, an adaptive sliding mode control method with an extended state observer (ESO) is proposed. Firstly, a nonlinear mathematical model of hydro-pneumatic suspension, considering seal friction, is established based on the hydraulic principle and the knowledge of fluid mechanics. Secondly, the ESO is designed to estimate the total disturbance caused by the nonlinearities and uncertainties, and it is incorporated into the sliding mode control law, allowing the control law to adapt to the operating state of the suspension system in real time, which mitigates the effect of uncertainties and nonlinearities on the system
Niu, Changsheng; Liu, Xiaoang; Jia, Xing; Gong, Bo; Xu, Bo
The accident reconstruction community frequently uses Terrestrial LiDAR (TLS) to capture accurate 3D images of vehicle accident sites. This paper compares the accuracy, workflow, benefits, and challenges of Unmanned Aerial Vehicle (UAV) LiDAR, or Airborne Laser Scanning (ALS), to TLS. Two roadways with features relevant to accident reconstruction were selected for testing. ALS missions were conducted at an altitude of 175 feet and a velocity of 4 miles per hour at both sites, followed by 3D scanning using TLS. Survey control points were established to minimize error during cloud-to-cloud TLS registration and to ensure accurate alignment of ALS and TLS point clouds. After data capture, the ALS point cloud was analyzed against the TLS point cloud. Approximately 80% of ALS points were within 1.8 inches of the nearest TLS point, with 64.8% at the rural site and 59.7% at the suburban site within 1.2 inches. These findings indicate that UAV-based LiDAR can achieve comparable accuracy to TLS
Foltz, Steven; Terpstra, Toby; Clarson, Julia
Camera matching photogrammetry is widely used in the field of accident reconstruction for mapping accident scenes, modeling vehicle damage from post collision photographs, analyzing sight lines, and video tracking. A critical aspect of camera matching photogrammetry is determining the focal length and Field of View (FOV) of the photograph being analyzed. The intent of this research is to analyze the accuracy of the metadata reported focal length and FOV. The FOV from photographs captured by over 20 different cameras of various makes, models, sensor sizes, and focal lengths will be measured using a controlled and repeatable testing methodology. The difference in measured FOV versus reported FOV will be presented and analyzed. This research will provide analysts with a dataset showing the possible error in metadata reported FOV. Analysts should consider the metadata reported FOV as a starting point for photogrammetric analysis and understand that the FOV calculated from the image
Smith, Connor A.; Erickson, Michael; Hashemian, Alireza
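For context, the nominal FOV an analyst derives from metadata follows the pinhole model; deviations of the real FOV from this value are exactly what the dataset above is meant to quantify. A minimal sketch (sensor width and focal length in millimetres):

```python
from math import atan, degrees

def horizontal_fov_deg(focal_length_mm, sensor_width_mm):
    """Nominal horizontal field of view from the pinhole-camera model:
        FOV = 2 * atan(sensor_width / (2 * focal_length))
    Real cameras deviate from this value due to lens distortion and
    in-camera processing, so it should be treated only as a starting
    point for a camera-matching photogrammetry solve."""
    return degrees(2.0 * atan(sensor_width_mm / (2.0 * focal_length_mm)))
```

The same relation applies vertically and diagonally by swapping in the corresponding sensor dimension.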
Light Detection and Ranging (LiDAR) is a promising type of sensor for autonomous driving that utilizes laser technology to provide perceptions and accurate distance measurements of obstacles in the vehicle path. In recent years, there has also been a rise in the implementation of LiDARs in modern and autonomous vehicles to aid self-driving features. However, navigating adverse weather remains one of the biggest challenges in achieving Level 5 full autonomy due to sensor soiling, leading to performance degradation that can pose safety hazards. When driving in rain, raindrops impact the LiDAR sensor assembly and cause attenuation of signals when the light beams undergo reflections and refractions. Consequently, signal detectability, accuracy, and intensity are significantly affected. To date, limited studies have been able to perform objective evaluations of LiDAR performance, most of which faced limitations that hindered realistic, controllable, and repeatable testing. Therefore, this
Pao, Wing Yi; Li, Long; Agelin-Chaab, Martin; Roy, Langis; Knutzen, Julian; Baltazar, Alexis; Muenker, Klaus; Chakraborty, Anirban; Komar, John
The current leading experimental platform for engine visualization research is the optical engine, which features transparent window components classified into two types: partially visible windows and fully visible windows. Due to structural limitations, fully visible windows cannot be employed under certain complex or extreme operating conditions, leading to the acquisition of only local in-cylinder combustion images and resulting in information loss. This study introduces a method for reconstructing in-cylinder combustion images from local images using deep learning techniques. The experiments were conducted using an optical engine specifically designed for spark-ignition combustion modes, capturing in-cylinder flame images under various conditions with high-speed cameras. The primary focus was on reconstructing the flame edge, with in-cylinder combustion images categorized into three types: images where the flame edge is fully within the partially visible window, partly within the
Wang, Mianheng; Zhang, Yixiao; Du, Haoyu; Xiao, Ma; Mao, Jianshu; Fang, Yuwen
This study outlines a camera-based perspective transformation method for measuring driver direct visibility, which produces 360-degree view maps of the nearest visible ground points. This method is ideal for field data collection due to its portability and minimal space requirements. Compared with ground truth assessments using a physical grid, this method was found to have a high level of accuracy, with all points in front of the vehicle varying less than 0.30 m, and points for the A- and B-pillars varying less than 0.6 m. Points out of the rear window varied up to 2.4 m and were highly sensitive to differences in the chosen pixel due to their greater distance from the camera. Repeatability through trials of multiple measurements per vehicle and reproducibility through measures from multiple data collectors produced highly similar results, with the greatest variations ranging from 0.19 to 1.38 m. Additionally, three different camera lenses were evaluated, resulting in comparable results within
Mueller, Becky; Bragg, Haden; Bird, Teddy
This paper introduces a method to solve the instantaneous speed and acceleration of a vehicle from one or more sources of video evidence by using optimization to determine the best fit speed profile that tracks the measured path of a vehicle through a scene. Mathematical optimization is the process of seeking the variables that drive an objective function to some optimal value, usually a minimum, subject to constraints on the variables. In the video analysis problem, the analyst is seeking a speed profile that tracks measured vehicle positions over time. Measured positions and observations in the video constrain the vehicle’s motion and can be used to determine the vehicle’s instantaneous speed and acceleration. The variables are the vehicle’s initial speed and an unknown number of periods of approximately constant acceleration. Optimization can be used to determine the speed profile that minimizes the total error between the vehicle’s calculated distance traveled at each
Snyder, Sean; Callahan, Michael; Wilhelm, Christopher; Johnk, Chris; Lowi, Alvin; Bretting, Gerald
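The optimization described above can be illustrated with a single acceleration phase, where the least-squares fit is closed-form. The paper's method searches over multiple constant-acceleration periods; this sketch assumes just one, so the fit reduces to solving 2x2 normal equations:

```python
def fit_speed_profile(times, distances):
    """Least-squares fit of initial speed v0 and one constant
    acceleration a to measured distances s_i = v0*t_i + 0.5*a*t_i^2.

    The model is linear in (v0, a) with basis functions t and 0.5*t^2,
    so the normal equations can be solved directly. Multi-phase speed
    profiles need a numerical optimizer instead."""
    S11 = sum(t * t for t in times)            # sum of t^2
    S12 = sum(0.5 * t ** 3 for t in times)     # sum of t * (0.5 t^2)
    S22 = sum(0.25 * t ** 4 for t in times)    # sum of (0.5 t^2)^2
    b1 = sum(s * t for s, t in zip(distances, times))
    b2 = sum(0.5 * s * t * t for s, t in zip(distances, times))
    det = S11 * S22 - S12 * S12
    v0 = (S22 * b1 - S12 * b2) / det
    a = (S11 * b2 - S12 * b1) / det
    return v0, a
```

Extending this to several phases turns the problem into the constrained optimization the abstract describes: the phase boundaries become additional unknowns and the objective is the total position error.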
In direct injected engines the spray formation is important for both combustion performance and emission formation. Thus, being able to compare how the spray formation is affected by changes in nozzle design, injection pressure or fuel formulation is an important area of research for all engine sizes. This becomes especially important for the introduction of new sustainable fuels, or for fuel injection optimization to increase efficiencies and minimize the formation of emissions such as particles. High-speed imaging of the fuel spray using the schlieren technique is well established for this purpose, and the Engine Combustion Network (ECN) has developed multiple guidelines to ensure that a similar experimental approach is used in different laboratories around the world. For the initial image processing, the ECN provides a procedure based on an image-temporal-derivative approach. Many researchers however rely on intensity-based thresholding, preceded by contrast adjustment, background
Sileghem, Victor; Larsson, Tara; Dejaegere, Quinten; Verhelst, Sebastian
Videos from cameras onboard a moving vehicle are increasingly available to collision reconstructionists. The goal of this study was to evaluate the accuracy of speeds, decelerations, and brake onset times calculated from onboard dash cameras (“dashcams”) using a match-moving technique. We equipped a single test vehicle with 5 commercially available dashcams, a 5th wheel, and a brake pedal switch to synchronize the cameras and 5th wheel. The 5th wheel data served as the reference for the vehicle kinematics. We conducted 9 tests involving a constant-speed approach (mean ± standard deviation = 57.6 ± 2.0 km/h) followed by hard braking (0.989 g ± 0.021 g). For each camera and brake test, we extracted the video and calculated the camera’s position in each frame using SynthEyes, a 3D motion tracking and video analysis program. Scale and location for the analyses were based on a 3D laser scan of the test site. From each camera’s position data, we calculated its speed before braking and its
Flynn, Thomas; Ahrens, Matthew; Young, Cole; Siegmund, Gunter P.
The rapid development of autonomous vehicles necessitates rigorous testing under diverse environmental conditions to ensure their reliability and safety. One of the most challenging scenarios for both human and machine vision is navigating through rain. This study introduces the Digitrans Rain Testbed, an innovative outdoor rain facility specifically designed to test and evaluate automotive sensors under realistic and controlled rain conditions. The rain plant features a wetted area of 600 square meters and a sprinkled rain volume of 600 cubic meters, providing a comprehensive environment to rigorously assess the performance of autonomous vehicle sensors. Rain poses a significant challenge due to the complex interaction of light with raindrops, leading to phenomena such as scattering, absorption, and reflection, which can severely impair sensor performance. Our facility replicates various rain intensities and conditions, enabling comprehensive testing of Radar, Lidar, and Camera
Feichtinger, Christoph Simon
Towards the goal of real-time navigation of autonomous robots, the Iterative Closest Point (ICP) based LiDAR odometry methods are a favorable class of Simultaneous Localization and Mapping (SLAM) algorithms for their robustness under all lighting conditions. However, even with the recent methods, the traditional SLAM challenges persist, where odometry drifts under adversarial conditions such as featureless or dynamic environments, as well as rapid robot motion. In this paper, we present a motion-aware continuous-time LiDAR-inertial SLAM framework. We introduce an efficient EKF-ICP sensor fusion solution by loosely coupling poses from the continuous-time ICP and IMU data, designed to improve convergence speed and robustness over existing methods while incorporating a sophisticated motion constraint to maintain accurate localization during rapid motion changes. Our framework is evaluated on the KITTI datasets and artificially motion-induced dataset sequences, demonstrating
Kokenoz, Cigdem; Shaik, Toukheer; Sharma, Abhishek; Pisu, Pierluigi; Li, Bing
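The ICP building block named above can be illustrated in 2D: one iteration pairs each source point with its nearest neighbour, then solves the closed-form least-squares rotation and translation. This is a generic textbook step, not the paper's continuous-time formulation:

```python
from math import atan2, sin, cos

def icp_step(src, dst):
    """One 2D ICP iteration: brute-force nearest-neighbour matching,
    then the closed-form rigid alignment (2D Kabsch/Procrustes).
    Returns (theta, (tx, ty)) mapping src toward dst."""
    # 1. nearest neighbour in dst for each src point
    pairs = [(p, min(dst, key=lambda d: (d[0] - p[0]) ** 2
                                        + (d[1] - p[1]) ** 2))
             for p in src]
    n = float(len(pairs))
    # 2. centroids of the matched sets
    cxs = sum(p[0] for p, _ in pairs) / n
    cys = sum(p[1] for p, _ in pairs) / n
    cxd = sum(q[0] for _, q in pairs) / n
    cyd = sum(q[1] for _, q in pairs) / n
    # 3. optimal rotation from centered cross/dot products
    num = den = 0.0
    for (px, py), (qx, qy) in pairs:
        ax, ay = px - cxs, py - cys
        bx, by = qx - cxd, qy - cyd
        num += ax * by - ay * bx   # cross terms -> sin(theta)
        den += ax * bx + ay * by   # dot terms   -> cos(theta)
    theta = atan2(num, den)
    # 4. translation aligns the rotated source centroid with dst's
    tx = cxd - (cos(theta) * cxs - sin(theta) * cys)
    ty = cyd - (sin(theta) * cxs + cos(theta) * cys)
    return theta, (tx, ty)
```

Full LiDAR odometry repeats this step to convergence per scan in 3D; the paper's contribution is fusing those ICP poses with IMU data in an EKF so the estimate stays stable when a single scan is ambiguous.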
This study experimentally investigates the liquid jet breakup process in a vaporizer of a microturbine combustion chamber under equivalent operating conditions, including temperature and air mass flow rate. A high-speed camera experimental system, coupled with an image processing code, was developed to analyze the jet breakup length. The fuel jet is centrally positioned in a vaporizer with an inner diameter of 8 mm. Airflow enters the vaporizer at controlled pressures, while thermal conditions are maintained between 298 K and 373 K using a PID-controlled heating system. The liquid is supplied through a jet with a 0.4 mm inner diameter, with a range of Reynolds numbers (Re_liq = 2300–3400) and aerodynamic Weber numbers (We_g = 4–10), corresponding to the membrane and/or fiber breakup modes of the liquid jet. Based on the results of jet breakup length, a new model has been developed to complement flow-regime maps at low Weber and Reynolds numbers. The analysis of droplet size distribution
Ha, Nguyen; Quan, Nguyen; Manh, Vu; Pham, Phuong Xuan
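The dimensionless groups quoted above are straightforward to compute. A minimal sketch of both definitions (the property values in the usage below are illustrative, not the paper's fuel or operating conditions):

```python
def jet_numbers(rho_liq, v_liq, d, mu_liq, rho_gas, v_rel, sigma):
    """Liquid-jet Reynolds number and aerodynamic (gas) Weber number,
    the two groups used to place a jet in a breakup-regime map.

      Re_liq = rho_liq * v_liq * d / mu_liq     (inertia vs viscosity)
      We_g   = rho_gas * v_rel^2 * d / sigma    (aero force vs surface tension)

    d is the jet diameter (m), v_rel the gas-liquid relative velocity
    (m/s), sigma the surface tension (N/m); SI units throughout."""
    re_liq = rho_liq * v_liq * d / mu_liq
    we_gas = rho_gas * v_rel ** 2 * d / sigma
    return re_liq, we_gas
```

Sweeping `v_rel` at fixed liquid conditions traces a vertical line through the (Re, We) regime map, which is how the membrane and fiber breakup modes in the study are distinguished.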
This study investigates the ignitability of hydrogen in an optical heavy-duty SI engine. While the ignition energy of hydrogen is exceptionally low, the high load and lean mixtures used in heavy-duty hydrogen engines lead to a high gas density, resulting in a much higher breakdown voltage than in light-duty SI engines. Spark plug wear is a concern, so there is a need to minimise the spark energy while maintaining combustion stability, even at challenging conditions for ignition. This work consists of a two-stage experimental study performed in an optical engine. In the first part, we mapped the combustion stability and frequency of misfires with two different ignition systems: a DC inductive discharge ignition system, and a closed-loop controlled capacitive AC system. The equivalence ratio and dwell time were varied for the inductive system while the capacitive system instead varied spark duration and spark current in addition to equivalence ratio. A key finding was that spark energy
Hallstadius, Peter; Saha, Anupam; Sridhara, Aravind; Andersson, Öivind
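The density dependence of breakdown voltage described above can be illustrated with Paschen's law. This is a rough textbook approximation, not the paper's method; the constants for air and the secondary-emission coefficient are assumed, and the linear extrapolation is known to overestimate far above the Paschen minimum:

```python
import math

def paschen_vb(p_torr, d_cm, A=15.0, B=365.0, gamma=0.01):
    """Paschen breakdown voltage [V] for an air-like gas.
    A [1/(Torr*cm)] and B [V/(Torr*cm)] are textbook constants for air;
    gamma is an assumed secondary-emission coefficient."""
    pd = p_torr * d_cm
    return B * pd / (math.log(A * pd) - math.log(math.log(1.0 + 1.0 / gamma)))

# Same 1 mm gap: atmospheric pressure vs. a ~20 bar in-cylinder gas density
v_atm = paschen_vb(760.0, 0.1)       # ~5 kV
v_engine = paschen_vb(15000.0, 0.1)  # tens of kV
print(f"{v_atm/1e3:.1f} kV vs {v_engine/1e3:.1f} kV")
```

Even as a crude estimate, the order-of-magnitude jump in required voltage shows why high-density lean hydrogen operation stresses the ignition system and the spark plug electrodes.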
LiDAR sensors have become an integral component in the realm of autonomous driving, widely utilized in environmental perception and vehicle navigation. However, in real-world road environments, contaminants such as dust and dirt can severely compromise the cleanliness of LiDAR optical windows, thereby degrading operational performance and affecting the overall environmental perception capabilities of intelligent driving systems. Consequently, maintaining the cleanliness of LiDAR optical windows is crucial for sustaining device performance. Unfortunately, the scarcity of publicly available LiDAR contamination datasets poses a challenge to the research and development of contamination identification algorithms. This paper first introduces a method for acquiring LiDAR contamination datasets: data are collected on open urban roads while different types of contamination, including mud and leaves, are simulated. The constructed dataset meticulously differentiates among the three states with clear labels: no
Wei, Ziyu; Quo, Binyun; Lujia, Ran; Li, Liguang
Vehicle-to-Infrastructure (V2I) cooperation has emerged as a fundamental technology to overcome the limitations of individual ego-vehicle perception. Onboard perception is limited by a lack of information for understanding the environment, a lack of anticipation, performance drops due to occlusions, and the physical limitations of embedded sensors. Cooperative V2I perception extends the perception range of the ego vehicle by receiving information from infrastructure-mounted sensors, such as cameras and LiDAR, which offer a complementary point of view. This technical paper presents a perception pipeline developed for the infrastructure based on images with multiple viewpoints. It is designed to be scalable and has five main components: image acquisition for modifying camera settings and retrieving pixel data, object detection for fast and accurate detection of four-wheeled vehicles, two-wheeled vehicles, and pedestrians, a data fusion module for robust
Picard, Quentin; Morice, Malo; Fadili, Maryem; Pechberti, Steve
This study investigates the nonlinear correlation between laser welding parameters and weld quality, employing machine learning techniques to enhance the predictive accuracy of tensile lap shear strength (TLS) in automotive QP1180 high-strength steel joints. An efficient predictive model was developed by combining three algorithms, random forest (RF), backpropagation neural network (BPNN), and K-nearest neighbors regression (KNN), with Bayesian optimization (BO). The results demonstrated that the RF model optimized by the BO algorithm performed best in predicting the strength of high-strength steel plate-welded joints, with an R² of 0.961. Furthermore, the trained RF model was applied to identify the parameter combination for the maximum TLS value within the selected parameter range through grid search, and its effectiveness was experimentally verified. The model predictions were accurate, with errors controlled within 6.73%. The TLS obtained from the reverse-selected
Han, Jinbang; Ji, Yuxiang; Liu, Yong; Liu, Zhao; Wang, Xianhui; Han, Weijian; Wu, Kun
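The reverse-selection step described above, training a surrogate model and then grid-searching it for the parameter combination with the highest predicted strength, can be sketched with scikit-learn. Synthetic data and a plain random forest stand in for the paper's BO-tuned model; the feature names, ranges, and target function are assumptions for illustration:

```python
import numpy as np
from itertools import product
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for welding experiments: laser power [kW] and welding
# speed [m/min] as features; the target mimics tensile lap shear strength
# with a known optimum near power=4.0, speed=2.5.
power = rng.uniform(2.0, 5.0, 200)
speed = rng.uniform(1.0, 4.0, 200)
tls = 30 - (power - 4.0) ** 2 - 2 * (speed - 2.5) ** 2 + rng.normal(0, 0.3, 200)

X = np.column_stack([power, speed])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, tls)

# Grid search over the parameter range for the maximum predicted TLS,
# mirroring the reverse-selection step in the abstract.
grid = np.array(list(product(np.linspace(2.0, 5.0, 31), np.linspace(1.0, 4.0, 31))))
best = grid[rf.predict(grid).argmax()]
print(f"predicted optimum: power={best[0]:.1f} kW, speed={best[1]:.1f} m/min")
```

On this synthetic surface the grid search recovers the parameter combination near the true optimum, which is the same mechanism used to propose weld settings for experimental verification.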
The tensile and low-cycle fatigue (LCF) properties of Ti6Al4V specimens, manufactured using the selective laser melting (SLM) additive manufacturing (AM) process and subsequently heat-treated in argon, were investigated at elevated temperatures. Specifically, fully reversed strain-controlled tests were performed at 400°C to determine the strain-life response of the material over a range of strain amplitudes of industrial interest. Fatigue test results from this work are compared to those found in the literature for both AM and wrought Ti6Al4V. The LCF response of the material tested here is in-family with the AM data found in the literature. Scanning electron microscopy performed on the fracture surfaces indicates a marked increase in secondary cracking (crack branching) as a function of increased plastic deformation. The material demonstrates equivalent performance compared to wrought Ti6Al4V at RT (room temperature) at 1.4% strain amplitude and better performance when compared to the
Gadwal, Narendra Kumar; Barkey, Mark E.; Hagan, Zach; Amaro, Robert; McDuffie, Jason G.
Roadside perception technology is an essential component of traffic perception technology, primarily relying on various high-performance sensors. Among these, LiDAR stands out as one of the most effective sensors due to its high precision and wide detection range, offering extensive application prospects. This study proposes a voxel density-nearest neighbor background filtering method for roadside LiDAR point cloud data. First, exploiting the relatively static nature of roadside background point clouds, a filtering method combining voxel density and nearest-neighbor analysis is proposed: the point cloud data are voxelized and voxel grid density is used to filter background points; the results are then refined over a sequence of frames by computing the average nearest-neighbor distance of candidate points and comparing it with a distance threshold to complete accurate background filtering. Second, a VGG16-PointPillars model is proposed, incorporating a CNN
Liu, Zhiyuan; Rui, Yikang
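The voxel-density stage of a background filter like the one described can be sketched in a few lines: voxels occupied across many frames are static background, and a new frame's points inside those voxels are discarded. This is a toy illustration only; the paper's nearest-neighbor refinement and VGG16-PointPillars model are omitted, and the voxel size and hit threshold are arbitrary:

```python
import numpy as np
from collections import Counter

def voxel_keys(points, voxel=0.5):
    """Map 3-D points to integer voxel indices."""
    return np.floor(points / voxel).astype(np.int64)

def build_background(frames, voxel=0.5, min_hits=3):
    """Voxels occupied in at least `min_hits` frames count as static background."""
    hits = Counter()
    for pts in frames:
        for key in {tuple(k) for k in voxel_keys(pts, voxel)}:
            hits[key] += 1
    return {k for k, n in hits.items() if n >= min_hits}

def filter_frame(points, background, voxel=0.5):
    """Keep only points outside background voxels (foreground candidates)."""
    keys = voxel_keys(points, voxel)
    mask = np.array([tuple(k) not in background for k in keys])
    return points[mask]

# Toy demo: a static "wall" seen in every frame plus one moving point
wall = np.array([[5.0, 0.0, 1.0], [5.0, 1.0, 1.0]])
frames = [np.vstack([wall, [[x, -3.0, 0.5]]]) for x in (1.0, 2.0, 3.0)]
bg = build_background(frames)
fg = filter_frame(frames[-1], bg)
print(fg)  # only the moving point survives
```

In the toy scene the wall voxels are hit in all three frames and are filtered out, while the moving point occupies a different voxel each frame and passes through as foreground.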
To meet the requirements of high-precision and stable positioning for autonomous driving vehicles in complex urban environments, this paper designs and develops a multi-sensor fusion intelligent driving hardware and software system based on BDS, IMU, and LiDAR. This system aims to fill the current gap in hardware platform construction and practical verification within multi-sensor fusion technology. Although multi-sensor fusion positioning algorithms have made significant progress in recent years, their application and validation on real hardware platforms remain limited. To address this issue, the system integrates BDS dual antennas, IMU, and LiDAR sensors, enhancing signal reception stability through an optimized layout design and improving hardware structure to accommodate real-time data acquisition and processing in complex environments. The system’s software design is based on factor graph optimization algorithms, which use the global positioning data provided by BDS to constrain
Zhan, KaiDi; Gao, Chengfa; Xu, Dawei; Lan, Minyi; Ding, Rongjing
In an attempt to improve its mechanical characteristics in the as-cast condition, the AZ31 Mg alloy was reinforced with various SiC weight percentages (3, 6, and 9 wt.%). To develop lightweight AZ31-SiC composites, a simple and inexpensive technique, the stir casting process, was used. Microstructural analysis of the as-cast samples showed that the SiC particles were distributed rather uniformly, were firmly bonded to the matrix, and exhibited very little porosity. The substantial improvement in tensile, compressive, and hardness characteristics was caused by fragmentation and spreading of the Mg17Al12 phase, while the addition of SiC had only a slight effect on the microstructure in the as-cast state. Surfaces of the AZ31-SiC composites were analyzed using scanning electron microscopy. The study identified the AZ31-SiC composite as a promising material for applications requiring high compressive strength, such as those found in the aviation and automobile
Thillikkani, S.; Kumar, N. Mathan; Francis Luther King, M.; Soundararajan, R.; Kannan, S.
In a complex and ever-changing environment, achieving stable and precise SLAM (Simultaneous Localization and Mapping) presents a significant challenge. Existing SLAM algorithms often exhibit design limitations that restrict their performance to specific scenarios, and they are prone to failure under conditions of perceptual degradation. SLAM systems should maintain high robustness and accurate state estimation across various environments while minimizing the impact of noise, measurement errors, and external disturbances. This paper proposes a three-stage method for registering LiDAR point clouds. First, a multi-sensor factor graph is combined with historical poses and IMU pre-integration to provide an a priori pose estimate, and a new planar-feature extraction method is used to describe and filter the local features of the point cloud. Second, the normal distribution transform (NDT) algorithm is used for coarse registration. Third, feature-to-feature registration is used for
Li, Zhichao; Tong, Panpan; Shi, Weigang; Bi, Xin
Monitoring the safety and structural condition of tunnels is crucial for maintaining critical infrastructure. Traditional inspection methods are inefficient, labor-intensive, and pose safety risks. With its non-contact, high-precision, and high-efficiency features, mobile laser scanning technology has emerged as a vital tool for tunnel monitoring. This paper presents a mobile laser scanning system for tunnel measurement and examines techniques for calculating geometric parameters and processing high-resolution imaging data. Empirical evidence demonstrates that mobile laser scanning offers a reliable solution for evaluating and maintaining tunnel safety.
Lianbi, Yao; Zhang, Kaikun; Duan, Wei; Sun, Haili
Vehicle localization in enclosed environments, such as indoor parking lots, tunnels, and confined areas, presents significant challenges and has garnered considerable research interest. This paper proposes a localization technique based on an onboard binocular camera system, utilizing binocular ranging and spatial intersection algorithms to achieve active localization. The method involves pre-deploying reference points with known coordinates within the experimental space, using binocular ranging to measure the distance between the camera and the reference points, and applying the spatial intersection algorithm to calculate the camera’s center coordinates, thereby completing the localization process. Experimental results demonstrate that the proposed algorithm achieves sub-meter level localization accuracy. Localization accuracy is significantly influenced by the calibration precision of the binocular camera and the number of reference points. Higher calibration precision and a greater
Feifei, Li; Haoping, Qi; Yi, Wei
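Both steps of the localization scheme above, binocular ranging and spatial intersection, reduce to short numeric routines. A sketch with hypothetical reference-point coordinates and camera parameters (the paper's calibration procedure is not reproduced; distances that stereo ranging would supply are simulated from a known ground-truth position):

```python
import numpy as np

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Binocular ranging with the pinhole stereo model: Z = f * B / disparity."""
    return focal_px * baseline_m / disparity_px

def trilaterate(refs, dists):
    """Least-squares position from distances to known reference points.
    Subtracting the first sphere equation from the others linearizes the system."""
    p0, d0 = refs[0], dists[0]
    A = 2.0 * (refs[1:] - p0)
    b = d0**2 - dists[1:]**2 + np.sum(refs[1:]**2, axis=1) - np.sum(p0**2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Hypothetical reference points with known coordinates [m]
refs = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0],
                 [0.0, 4.0, 0.0], [0.0, 0.0, 4.0]])
cam_true = np.array([1.0, 2.0, 0.5])
dists = np.linalg.norm(refs - cam_true, axis=1)  # would come from stereo ranging
est = trilaterate(refs, dists)
print(est)  # recovers the camera centre
```

With noise-free distances the intersection is exact; in practice, calibration error in the stereo pair propagates into the measured distances, which is consistent with the abstract's observation that calibration precision and reference-point count dominate accuracy.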
Technology for lane line semantic segmentation is crucial for ensuring the safe operation of intelligent cars. Semantic segmentation lets intelligent cars comprehend the distribution and meaning of scenes in an image more precisely, but it demands a certain degree of accuracy and real-time network performance. A lightweight module is selected, and two previous models are improved and fused to create the lane line detection model; experiments are then conducted to confirm the model's efficacy. This paper proposes a lightweight replacement scheme to address the large parameter count of the generative adversarial network (GAN) model and its difficult training convergence. The overall network structure is based on the Pix2Pix conditional generative adversarial network, and the generator's U-Net is pruned and replaced by the Ghost Module, which consists of a modified downsampling module that enhances the global fusion
Yang, Kun; Wang, Jian
Intelligent Structural Health Monitoring (SHM) of bridges is a technology that utilizes advanced sensor technology along with professional bridge engineering knowledge, coupled with machine vision and other intelligent methods, for continuously monitoring and evaluating the status of bridge structures. One application of SHM technology for bridges by way of machine learning is damage detection and quantification. In this way, changes in bridge conditions can be analyzed efficiently and accurately, ensuring stable operational performance throughout the lifecycle of the bridge. However, in the field of damage detection, although machine vision can effectively identify and quantify existing damage, it still lacks accuracy for predicting future damage trends from real-time data. Such a shortfall may lead to late addressing of potential safety hazards, causing accelerated damage development and threatening structural safety. To tackle this problem, this study designs a deep
Xu, Weidong; Cai, C.S.; Xiong, Wen; Zhu, Yanjie
This paper presents advanced intelligent monitoring methods aimed at enhancing the quality and durability of asphalt pavement construction. The study focuses on two critical tasks: foreign object detection and the uniform application of tack coat oil. For object recognition, the YOLOv5 algorithm is employed, which provides real-time detection capabilities essential for construction environments where timely decisions are crucial. A meticulously annotated dataset comprising 4,108 images, created with the LabelImg tool, ensures the accurate detection of foreign objects such as leaves and cigarette butts. By utilizing pre-trained weights during model training, the research achieved significant improvements in key performance metrics, including precision and recall rates. In addition to object detection, the study explores color space analysis through the HSV (Hue, Saturation, Value) model to effectively differentiate between coated and uncoated pavement areas following the application of
Hu, Yufan; Fan, Jianwei; Tang, Fanlong; Ma, Tao
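The HSV-based coated/uncoated discrimination described above can be sketched with the standard library's colorsys module: fresh tack coat is much darker than bare pavement, so a threshold on the value (brightness) channel separates the two. The threshold and pixel values below are illustrative assumptions; a real system would tune them on labelled imagery:

```python
import colorsys

def is_coated(rgb, v_max=0.35):
    """Classify a pavement pixel as tack-coated when its HSV value channel
    falls below v_max (assumed threshold): coated areas are much darker."""
    r, g, b = (c / 255.0 for c in rgb)
    _, _, v = colorsys.rgb_to_hsv(r, g, b)
    return v < v_max

def coverage_ratio(pixels, v_max=0.35):
    """Fraction of pixels classified as coated - a simple uniformity metric."""
    return sum(is_coated(p, v_max) for p in pixels) / len(pixels)

# Toy patch: 7 dark (coated) and 3 light (uncoated) RGB pixels
patch = [(40, 38, 35)] * 7 + [(150, 148, 140)] * 3
print(coverage_ratio(patch))  # 0.7
```

Aggregating the per-pixel decision into a coverage ratio per region is one simple way to flag non-uniform tack coat application for re-spraying.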
Modern vehicles' driverless or driver-assisted systems sense the surroundings using a combination of camera, lidar, and other related sensors to form an accurate perception of the driving environment. Machine learning algorithms help form this perception and perform planning and control of the vehicle. Vehicle control, on which safety depends, relies on the trained machine learning models accurately understanding the surroundings by subdividing a camera image into multiple segments or objects. A semantic segmentation system assigns predefined class labels, such as tree, road, and the like, to each pixel of an image. Any security attack on the pixel classification nodes of deep-learning-based segmentation systems results in the failure of driver assistance or autonomous vehicle safety functionalities due to a falsely formed perception. The security compromises on the pixel classification head of
Prashanth, K.Y.; Rohitha, U.M.