Browse Topic: Visibility

Items (849)
India has one of the highest accident rates in the world. Quite a few accidents have been attributed to poor driver visibility. Driver visibility is an important factor that can help mitigate the risk of accidents. The optimal visibility of in-vehicle controls is also essential for improving driver experience. Optimized driver visibility improves driving comfort and gives confidence to the driver, ensuring the safety of drivers and subsequently that of pedestrians. Driver visibility is an important consideration for vehicle occupant packaging and SAE has defined various standards and regulations for the same. These guidelines are defined considering American anthropometry, helping OEMs create global vehicles with uniform checkpoints. However, due to anthropometric differences, a need was felt to capture and analyze Indian-specific eyellipse and eye points. To measure the eye point of the user in a controlled environment, the interiors of a passenger vehicle were simulated using a
P H, Salman; Kalra, Prerita; Rawat, Ashish; Sharma, Deepak; Singh, Ashwinder
Camera matching photogrammetry is widely used in the field of accident reconstruction for mapping accident scenes, modeling vehicle damage from post collision photographs, analyzing sight lines, and video tracking. A critical aspect of camera matching photogrammetry is determining the focal length and Field of View (FOV) of the photograph being analyzed. The intent of this research is to analyze the accuracy of the metadata reported focal length and FOV. The FOV from photographs captured by over 20 different cameras of various makes, models, sensor sizes, and focal lengths will be measured using a controlled and repeatable testing methodology. The difference in measured FOV versus reported FOV will be presented and analyzed. This research will provide analysts with a dataset showing the possible error in metadata reported FOV. Analysts should consider the metadata reported FOV as a starting point for photogrammetric analysis and understand that the FOV calculated from the image
Smith, Connor A.; Erickson, Michael; Hashemian, Alireza
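The analysis above rests on the pinhole relationship between focal length, sensor width, and field of view. A minimal sketch of that relationship (the sensor and focal-length values are illustrative, not the paper's test cameras or methodology):

```python
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Pinhole-camera horizontal field of view from sensor width and focal length."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Illustrative values only: a full-frame sensor (36 mm wide) at a 28 mm focal length.
reported_fov = horizontal_fov_deg(36.0, 28.0)   # FOV implied by the metadata focal length
measured_fov = 63.5                             # hypothetical measured value
print(f"metadata FOV = {reported_fov:.1f} deg, measured = {measured_fov:.1f} deg, "
      f"error = {measured_fov - reported_fov:+.1f} deg")
```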
This study outlines a camera-based perspective transformation method for measuring driver direct visibility, which produces 360-degree view maps of the nearest visible ground points. This method is ideal for field data collection due to its portability and minimal space requirements. Compared with ground truth assessments using a physical grid, this method was found to have a high level of accuracy, with all points in the vehicle front varying less than 0.30 m and varying less than 0.6 m for the A- and B-pillars. Points out of the rear window varied up to 2.4 m and were highly sensitive to differences in the chosen pixel due to their greater distance from the camera. Repeatability through trials of multiple measurements per vehicle and reproducibility through measures from multiple data collectors produced highly similar results, with the greatest variations ranging from 0.19 to 1.38 m. Additionally, three different camera lenses were evaluated, resulting in comparable results within
Mueller, Becky; Bragg, Haden; Bird, Teddy
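The perspective transformation described above can be sketched as a ground-plane homography: pixels picked in the camera image are mapped to ground coordinates fitted from a few known reference points. The pixel/ground correspondences below are hypothetical stand-ins, not the study's calibration grid:

```python
import cv2
import numpy as np

# Hypothetical calibration: four image pixels whose ground positions (meters, vehicle
# coordinates) are known, e.g. from a physical grid laid out around the car.
image_pts  = np.array([[420, 710], [880, 705], [300, 520], [980, 515]], dtype=np.float32)
ground_pts = np.array([[-1.0, 2.0], [1.0, 2.0], [-2.0, 6.0], [2.0, 6.0]], dtype=np.float32)

H, _ = cv2.findHomography(image_pts, ground_pts)

def pixel_to_ground(u: float, v: float) -> tuple[float, float]:
    """Map an image pixel (e.g. the nearest visible ground point along one bearing)
    to ground-plane coordinates using the fitted homography."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

print(pixel_to_ground(640, 600))
```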
Abstract This paper introduces a method to solve the instantaneous speed and acceleration of a vehicle from one or more sources of video evidence by using optimization to determine the best fit speed profile that tracks the measured path of a vehicle through a scene. Mathematical optimization is the process of seeking the variables that drive an objective function to some optimal value, usually a minimum, subject to constraints on the variables. In the video analysis problem, the analyst is seeking a speed profile that tracks measured vehicle positions over time. Measured positions and observations in the video constrain the vehicle’s motion and can be used to determine the vehicle’s instantaneous speed and acceleration. The variables are the vehicle’s initial speed and an unknown number of periods of approximately constant acceleration. Optimization can be used to determine the speed profile that minimizes the total error between the vehicle’s calculated distance traveled at each
Snyder, Sean; Callahan, Michael; Wilhelm, Christopher; Johnk, Chris; Lowi, Alvin; Bretting, Gerald
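A minimal sketch of the optimization described above: fit an initial speed and piecewise-constant accelerations so that the integrated distance matches positions measured from video. The observation times and distances, the single switch time, and the two-segment parameterization are illustrative assumptions, not the authors' solver:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical video-derived observations: frame times (s) and the vehicle's distance
# along its path (m) measured by camera matching.
t_obs = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
d_obs = np.array([0.0, 7.4, 14.5, 21.0, 26.5, 30.9, 34.2])
t_switch = 1.5  # assumed boundary between two constant-acceleration periods

def distance(params, t):
    """Distance traveled under an initial speed and two constant-acceleration segments."""
    v0, a1, a2 = params
    t1 = np.minimum(t, t_switch)
    d1 = v0 * t1 + 0.5 * a1 * t1**2
    t2 = np.maximum(t - t_switch, 0.0)
    v1 = v0 + a1 * t_switch
    return d1 + v1 * t2 + 0.5 * a2 * t2**2

def residuals(params):
    return distance(params, t_obs) - d_obs

fit = least_squares(residuals, x0=[10.0, 0.0, -3.0])
v0, a1, a2 = fit.x
print(f"initial speed {v0:.2f} m/s, accelerations {a1:.2f} and {a2:.2f} m/s^2")
```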
In the Baja race, off-road vehicles must run under a variety of real and complex off-road conditions such as pebble roads, shell pits, rough stone roads, humps, and water puddles. During this high-intensity, high-concentration race, a cab whose ergonomics have not been optimized easily causes visual and handling fatigue, so the driver's attention lapses and safety accidents occur. Moreover, lower back pain, sciatic nerve discomfort, lumbar spine disease, and other occupational ailments are largely caused by an uncomfortable driving posture and poorly matched controls, and these have much to do with unreasonable ergonomic design. To solve these problems, a human body model of the driver is first established, and the BSC racing car model is then built using the 3D modeling software Catia. The ergonomics simulation software Jack is then used to analyze visibility, accessibility, and comfort. Based on the simulation
Liu, Yuzhou; Liu, Silang
Off-road vehicles are required to traverse a variety of pavement environments, including asphalt roads, dirt roads, sandy terrains, snowy landscapes, rocky paths, brick roads, and gravel roads, over extended periods while maintaining stable motion. Consequently, the precise identification of pavement types, road unevenness, and other environmental information is crucial for intelligent decision-making and planning, as well as for assessing traversability risks in the autonomous driving functions of off-road vehicles. Compared to traditional perception solutions such as LiDAR and monocular cameras, stereo vision offers advantages like a simple structure, wide field of view, and robust spatial perception. However, its accuracy and computational cost in estimating complex off-road terrain environments still require further optimization. To address this challenge, this paper proposes a terrain environment estimating method for off-road vehicle anticipated driving area based on stereo
Zhao, Jian; Zhang, Xutong; Hou, Jie; Chen, Zhigang; Zheng, Wenbo; Gao, Shang; Zhu, Bing; Chen, Zhicheng
Videos from cameras onboard a moving vehicle are increasingly available to collision reconstructionists. The goal of this study was to evaluate the accuracy of speeds, decelerations, and brake onset times calculated from onboard dash cameras (“dashcams”) using a match-moving technique. We equipped a single test vehicle with 5 commercially available dashcams, a 5th wheel, and a brake pedal switch to synchronize the cameras and 5th wheel. The 5th wheel data served as the reference for the vehicle kinematics. We conducted 9 tests involving a constant-speed approach (mean ± standard deviation = 57.6 ± 2.0 km/h) followed by hard braking (0.989 g ± 0.021 g). For each camera and brake test, we extracted the video and calculated the camera’s position in each frame using SynthEyes, a 3D motion tracking and video analysis program. Scale and location for the analyses were based on a 3D laser scan of the test site. From each camera’s position data, we calculated its speed before braking and its
Flynn, Thomas; Ahrens, Matthew; Young, Cole; Siegmund, Gunter P.
Background. In 2022, vulnerable road user (VRU) deaths in the United States increased to their highest level in more than 40 years. At the same time, increasing vehicle size and taller front ends may contribute to larger forward blind zones, but little is known about the role that visual occlusion may play in this trend. Goal. Researchers measured the blind zones of six top-selling light-duty vehicle models (one pickup truck, three SUVs, and two passenger cars) across multiple redesign cycles (1997–2023) to determine whether the blind zones were getting larger. Method. To quantify the blind zones, the markerless method developed by the Insurance Institute for Highway Safety was used to calculate the occluded and visible areas at ground level in the forward 180° arc around the driver at ranges of 10 m and 20 m. Results. In the 10-m forward radius nearest the vehicle, outward visibility declined in all six vehicle models measured across time. The SUV models showed up to a 58% reduction
Epstein, Alexander K.; Brodeur, Alyssa; Drake, Juwon; Englin, Eric; Fisher, Donald L.; Zoepf, Stephen; Mueller, Becky C.; Bragg, Haden
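The extent of such a forward blind zone follows from line-of-sight geometry: the sight line from the driver's eye over the hood (or another obstruction) first reaches the ground some distance ahead, and everything nearer is occluded. A sketch under assumed dimensions (not the IIHS markerless method or the measured vehicles):

```python
def nearest_visible_ground_point(eye_height_m, obstruction_height_m, obstruction_dist_m):
    """Distance from the eye point to where the sight line over an obstruction
    (hood edge, pillar base, etc.) first reaches the ground."""
    if obstruction_height_m >= eye_height_m:
        return float("inf")  # obstruction taller than the eye point: ground never visible
    return eye_height_m * obstruction_dist_m / (eye_height_m - obstruction_height_m)

# Illustrative values: eye 1.5 m above ground, hood edge 1.2 m high and 2.0 m ahead.
print(f"{nearest_visible_ground_point(1.5, 1.2, 2.0):.1f} m")  # -> 10.0 m of ground occluded ahead
```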
Secondary crashes, including struck-by incidents are a leading cause of line-of-duty deaths among emergency responders, such as firefighters, law enforcement officers, and emergency medical service providers. The introduction of light-emitting diode (LED) sources and advanced lighting control systems provides a wide range of options for emergency lighting configurations. This study investigated the impact of lighting color, intensity, modulation, and flash rate on driver behavior while traversing a traffic incident scene at night. The impact of retroreflective chevron markings in combination with lighting configurations, as well as the measurement of “moth-to-flame” effects of emergency lighting on drivers was also investigated. This human factors study recruited volunteers to drive a closed course traffic incident scene, at night under various experimental conditions. The simulated traffic incident was designed to replicate a fire apparatus in the center-block position. The incident
D. Bullough, John; Parr, Scott; Hiebner, Emily; Sblendorio, Alec
To improve the accuracy and reliability of short-term prediction of highway visibility level in key scenarios involving short duration and a fast rate of change, this paper proposes a short-term prediction method for highway visibility level based on an attention-mechanism LSTM. Firstly, the XGBoost and SHAP methods are used to analyze the factors affecting highway visibility, rank the importance of the different influencing factors, and select the factors that have a greater impact on visibility as inputs to the visibility level prediction model. Secondly, using LSTM as the foundation network and coupling it with an attention mechanism, a visibility level prediction model is constructed that can dynamically update the correlation between the meteorological feature information at each historical time point and the visibility level at the current prediction time, thereby weighting the importance of information and flexibly capturing important
Ding, Shanshan; Xiong, Zhuozhi; Huang, Xu; Li, Yurong
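A minimal sketch of an attention-weighted LSTM classifier of the general kind described (the feature count, sequence length, and number of visibility levels are illustrative assumptions; the XGBoost/SHAP feature-selection step is omitted):

```python
import torch
import torch.nn as nn

class AttentionLSTM(nn.Module):
    """Minimal attention-LSTM classifier: an LSTM encodes the meteorological sequence,
    attention weights the hidden states, and a linear head predicts the visibility level."""
    def __init__(self, n_features: int, hidden: int = 64, n_levels: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)          # one attention score per time step
        self.head = nn.Linear(hidden, n_levels)

    def forward(self, x):                          # x: (batch, time, n_features)
        h, _ = self.lstm(x)                        # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)     # attention weights over time steps
        context = (w * h).sum(dim=1)               # weighted sum of hidden states
        return self.head(context)                  # logits over visibility levels

model = AttentionLSTM(n_features=6)
dummy = torch.randn(8, 24, 6)                      # 8 samples, 24 hourly steps, 6 features
print(model(dummy).shape)                          # torch.Size([8, 4])
```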
This research aimed to explore the integration of virtual reality technology into the ergonomic testing of automotive interior designs, with the objective of showing that such technology can be used to improve user comfort through controlled simulations. Existing ergonomic testing methods are often limited when it comes to recreating actual driving situations and quickly iterating design improvements. VR can serve as a solution because an ergonomically instrumented simulation gives users a realistic experience of driving; users can be observed during the experience and asked for their feedback. For this research, an interactive VR environment imitating a 10-minute-long trip through traffic and changing road conditions was created. It was populated by ten users, split equally between men and women, aged 20-35, representing the approximate demographics of workers in the automotive production industry. Participants of the research were asked to use
Natrayan, L.; Kaliappan, Seeniappan; Swamy Nadh, V.; Maranan, Ramya; Balaji, V.
Visual perception systems for autonomous vehicles are exposed to a wide variety of complex weather conditions, among which rainfall is one of the conditions with the highest exposure. Therefore, it is necessary to construct a model that can efficiently generate a large number of images with different rainfall intensities to help test visual perception systems under rainfall conditions. However, existing datasets either do not contain multilevel rainfall or consist of synthetic images, making it difficult to support the construction of such a model. In this paper, natural rainfall images of different rainfall intensities were first collected to produce a natural multilevel rain dataset. The dataset includes no rain and three levels (light, medium, and heavy) of rainfall, with 629, 210, 248, and 193 images respectively, totaling 1280 images. The dataset is open source and available online via: https://github.com/raydison/natural-multilevel-rain-dataset-NMRD. Subsequently, a
Liu, Zhenyuan; Jia, Tong; Xing, Xingyu; Wu, Jianfeng; Chen, Junyi
Letter from the Guest Editors
van Schijndel, Margriet; Sciarretta, Antonio; Op den Camp, Olaf; Krosse, Bastiaan
This SAE Recommended Practice establishes three alternate methods for describing and evaluating the truck driver's viewing environment: the Target Evaluation, the Polar Plot and the Horizontal Planar Projection. The Target Evaluation describes the field of view volume around a vehicle, allowing for ray projections, or other geometrically accurate simulations, that demonstrate areas visible or non-visible to the driver. The Target Evaluation method may also be conducted manually, with appropriate physical layouts, in lieu of CAD methods. The Polar Plot presents the entire available field of view in an angular format, onto which items of interest may be plotted, whereas the Horizontal Planar Projection presents the field of view at a given elevation chosen for evaluation. These methods are based on the Three Dimensional Reference System described in SAE J182a. This document relates to the driver's exterior visibility environment and was developed for the heavy truck industry (Class B
Truck and Bus Human Factors Committee
Driving at night presents a myriad of challenges, with one of the most significant being visibility, especially on curved roads. Despite the fact that only a quarter of driving occurs at night, research indicates that over half of driving accidents happen during this period. This alarming statistic underscores the urgent need for improved illumination solutions, particularly on curved roads, to enhance driver visibility and consequently, safety. Conventional headlamp systems, while effective in many scenarios, often fall short in adequately illuminating curved roads, thereby exacerbating the risk of accidents during nighttime driving. In response to this critical issue, considerable efforts have been directed towards the development of alternative technologies, chief among them being Adaptive Front Lighting Systems (AFS). The primary objective of this endeavor is to design and construct a prototype AFS that can seamlessly integrate into existing fixed headlamp systems. Throughout the
T, Karthi; G, Manikandan; P C, Murugan; S, Sakthivel; N, Vinu; P, Dineshkumar
Sensata Technologies' booth at this year's IAA Transportation tradeshow included two of the company's Precor radar sensors. The PreView STA79 is a heavy-duty vehicle side-monitoring system launched in May 2024 and designed to comply with Europe-wide blind spot monitoring legislation introduced in June 2024. The PreView Sentry 79 is a front- and rear-monitoring system. Both systems operate on the 79-GHz band as the nomenclature suggests. PreView STA79 can cover up to three vehicle zones: a configurable center zone, which can monitor the length of the vehicle, and two further zones that can be independently set to align with individual customer needs. The system offers a 180-degree field of view to eliminate blind spots along the vehicle sides and a built-in measurement unit that will increase the alert level when turning toward an object even when the turn indicator is not used. The system also features trailer mitigation to reduce false positive alerts on the trailer when turning. The
Kendall, John
The scope of this SAE Aerospace Information Report (AIR) is to discuss factors affecting visibility of aircraft navigation and anticollision lights, enabling those concerned with their use to have a better technical understanding of such factors, and to aid in exercising appropriate judgment in the many possible flight eventualities.
A-20B Exterior Lighting Committee
This study aims to elucidate the impact of A-pillar blind spots on drivers’ visibility of pedestrians during left and right turns at an intersection. An experiment was conducted using a sedan and a truck, with a professional test driver participating. The driver was instructed to maintain sole focus on a designated pedestrian model from the moment it was first sighted during each drive. The experimental results revealed how the blind spots caused by A-pillars occur and clarified the relationship between the pedestrian visible trajectory distance and specific vehicle windows. The results indicated that the shortest trajectory distance over which a pedestrian remained visible in the sedan was 17.6 m for a far-side pedestrian model during a right turn, where visibility was exclusively through the windshield. For the truck, this distance was 20.9 m for a near-side pedestrian model during a left turn, with visibility through the windshield of 9.5 m (45.5% of 20.9 m) and through the
Matsui, Yasuhiro; Oikawa, Shoko
Most humans rely heavily on our visual abilities to function in the world—we are optically oriented. In the broadest sense, “optics” refers to the study of sight and light. At its foundation, Radiant’s business is all about optics: measuring light and the properties of light in relation to the human eye. Photometry is the science of light according to our visual perception. Colorimetry is the science of color: how our eyes interpret different wavelengths of light.
Deep learning algorithms are being widely used in autonomous driving (AD) and advanced driver assistance systems (ADAS) due to their impressive capabilities in visual perception of the environment of a car. However, the reliability of these algorithms is known to be challenging due to their data-driven and black-box nature. This holds especially true when it comes to accurate and reliable perception of objects in edge case scenarios. So far, the focus has been on normal driving situations and there is little research on evaluating these systems in a safety-critical context like pre-crash scenarios. This article describes a project that addresses this problem and provides a publicly available dataset along with key performance indicators (KPIs) for evaluating visual perception systems under pre-crash conditions.
Bakker, Jörg
A total of 93 tests were conducted in daytime conditions to evaluate the effect on the Time to Collision (TTC), emergency braking, and avoidance rates of the Forward Collision Warning (FCW) and Automatic Emergency Braking (AEB) provided by a 2022 Tesla Model 3 against a 4ActivePA adult static pedestrian target. Variables that were evaluated included the vehicle speed on approach, pedestrian offsets, pedestrian clothing, and user-selected FCW settings. As a part of the Tesla’s Collision Avoidance Assist™, these user-selected FCW settings change the timing of the issuance of the visual and/or audible warning provided. This testing evaluated the Tesla at speeds of 25 and 35 miles per hour (mph) versus a stationary pedestrian target in early, medium, and late FCW settings. Testing was also conducted with a 50% pedestrian offset and 75% offset conditions relative to the right side of the Tesla. The pedestrian target was clothed with and without a reflective safety vest to account for
Harrington, Shawn; Nagarajan, Sundar Raman; Lau, James
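Against a stationary pedestrian target, time to collision reduces to the remaining range divided by the closing speed; a sketch with illustrative numbers, not the reported test data:

```python
def time_to_collision_s(range_m: float, vehicle_speed_mph: float) -> float:
    """TTC against a stationary target: remaining range divided by closing speed."""
    closing_speed_ms = vehicle_speed_mph * 0.44704   # mph -> m/s
    return range_m / closing_speed_ms

# Illustrative: an FCW issued with 30 m remaining on a 35 mph approach.
print(f"TTC = {time_to_collision_s(30.0, 35.0):.2f} s")   # ~1.92 s
```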
ISO 26262-1:2018 defines the fault tolerant time interval (FTTI) as the minimum time span from the occurrence of a fault within an electrical / electronic system to a possible occurrence of a hazardous event. FTTI provides a time limit within which compliant vehicle safety mechanisms must detect and react to faults capable of posing risk of harm to persons. This makes FTTI a vital safety characteristic for system design. Common automotive industry practice accommodates recording fault times of occurrence definitively. However, current practice for defining the time of hazardous event onset relies upon subjective judgements. This paper presents a novel method to define hazardous event onset more objectively. The method introduces the Streetscope Collision Hazard Measure (SHM™) and a refined approach to hazardous event classification. SHM inputs kinematic factors such as proximity, relative speed, and acceleration as well as environmental characteristics like traffic patterns
Jones, Darren; Gangadhar, Pavankumar; McGrail, Randall; Pati, Sudipta; Antonsson, Erik; Patel, Ravi
The prediction of agents' future trajectories is a crucial task in supporting advanced driver-assistance systems (ADAS) and plays a vital role in ensuring safe decisions for autonomous driving (AD). Currently, prevailing trajectory prediction methods heavily rely on high-definition maps (HD maps) as a source of prior knowledge. While HD maps enhance the accuracy of trajectory prediction by providing information about the surrounding environment, their widespread use is limited due to their high cost and legal restrictions. Furthermore, due to object occlusion, limited field of view, and other factors, the historical trajectory of the target agent is often incomplete. This limitation significantly reduces the accuracy of trajectory prediction. Therefore, this paper proposes ETSA-Pred, a mapless trajectory prediction model that incorporates enhanced temporal modeling and spatial self-attention. The novel enhanced temporal modeling is based on neural controlled differential equations (NCDEs
Wei, Zhao; Wu, Xiaodong
Ergonomics plays an important role in automobile design to achieve optimal compatibility between occupants and vehicle components. The overall goal is to ensure that the vehicle design accommodates the target customer group, who come in varied sizes, preferences and tastes. Headroom is one such metric that not only influences accommodation rate but also conveys a visual perception on how spacious the vehicle is. An adequate headroom is necessary for a good seating comfort and a relaxed driving experience. Headroom is intensely discussed in magazine tests and one of the key deciding factors in purchasing a car. SAE J1100 defines a set of measurements and standard procedures for motor vehicle dimensions. H61, W27, W35, H35 and W38 are some of the standard dimensions that relate to headroom and head clearances. While developing the vehicle architecture in the early design phase, it is customary to specify targets for various ergonomic attributes and arrive at the above-mentioned
Rajakumaran, Sriram; S, Rahul; Vasireddy, Rakesh Mitra; Nair, Suhas
Temporal light modulation (TLM), colloquially known as “flicker,” is an issue in almost all lighting applications, due to widespread adoption of LED and OLED sources and their driving electronics. A subset of LED/OLED lighting systems delivers problematic TLM, often in specific types of residential, commercial, outdoor, and vehicular lighting. Dashboard displays, touchscreens, marker lights, taillights, daytime running lights (DRL), interior lighting, etc. frequently use pulse width modulation (PWM) circuits to achieve different luminances for different times of day and users’ visual adaptation levels. The resulting TLM waveforms and viewing conditions can result in distraction and disorientation, nausea, cognitive effects, and serious health consequences in some populations, occurring with or without the driver, passenger, or pedestrian consciously “seeing” the flicker. There are three visual responses to TLM: direct flicker, the stroboscopic effect, and phantom array effect (also
Miller, Naomi; Irvin, Lia
For safe driving, signs must be visible. Sign visibility is a function of sign luminance intensity. During the day, ambient light makes sign luminance a minor concern, but at night, in the absence of sunlight, sign board retro-reflectivity plays a crucial role in sign visibility. The vehicle headlamp color, beam pattern, lamp installation position, the relative seating position of the driver, and moonlight conditions are important factors. A virtual simulation approach is used for analyzing sign board visibility. Among the various factors, the headlamp installation height above ground, the distance between the two lamps, and the driver's eye position are considered for analyzing sign board visibility in this paper. Many automotive organizations have widely varying requirements and established testing guidelines to ensure visibility of signs in headlamp physical testing, but there are no guidelines during the headlamp design stage for sign visibility. In this
Yadav, Prashant Maruti
SLAM (Simultaneous Localization and Mapping) plays a key role in autonomous driving. Recently, 4D Radar has attracted widespread attention because it breaks through the limitations of 3D millimeter wave radar and can simultaneously detect the distance, velocity, horizontal azimuth and elevation azimuth of the target with high resolution. However, there are few studies on 4D Radar in SLAM. In this paper, RI-FGO, a 4D Radar-Inertial SLAM method based on Factor Graph Optimization, is proposed. The RANSAC (Random Sample Consensus) method is used to eliminate the dynamic obstacle points from a single scan, and the ego-motion velocity is estimated from the static point cloud. A 4D Radar velocity factor is constructed in GTSAM to receive the estimated velocity in a single scan as a measurement and directly integrated into the factor graph. The 4D Radar point clouds of consecutive frames are matched as the odometry factor. A modified scan context method, which is more suitable for 4D Radar’s
Zihang, He; Xiong, Lu; Zhuo, Guirong; Gao, Letian; Lu, Shouyi; Zhu, Jiaqi; Leng, Bo
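A compact sketch of the RANSAC step described above: static detections constrain the ego velocity through their Doppler measurements, while points on dynamic obstacles fall out as outliers. The threshold, iteration count, and sign convention are assumptions, and the GTSAM factor-graph integration is omitted:

```python
import numpy as np

def estimate_ego_velocity(directions, doppler, iters=200, tol=0.2, rng=None):
    """RANSAC fit of ego velocity from one radar scan.

    directions: (N, 3) unit vectors from the sensor to each detection.
    doppler:    (N,) measured radial velocities (m/s). For a static point,
                doppler ~= -directions @ v_ego, so moving objects appear as outliers.
    """
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(doppler), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(doppler), size=3, replace=False)
        v, *_ = np.linalg.lstsq(-directions[idx], doppler[idx], rcond=None)
        inliers = np.abs(-directions @ v - doppler) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    v, *_ = np.linalg.lstsq(-directions[best_inliers], doppler[best_inliers], rcond=None)
    return v, best_inliers

# Toy usage: 50 static detections with ego velocity (10, 0, 0) m/s plus a few movers.
rng = np.random.default_rng(1)
dirs = rng.normal(size=(50, 3)); dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
dop = -dirs @ np.array([10.0, 0.0, 0.0])
dop[:5] += 4.0                                   # five dynamic points act as outliers
print(estimate_ego_velocity(dirs, dop)[0])       # ~[10, 0, 0]
```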
The improvement of vehicle soiling behavior has attracted increasing interest over the past few years, not only to satisfy customer requirements and ensure good visibility of the surrounding traffic but also for autonomous vehicles, for which soiling investigation and improvement are even more important due to the cleanliness and functional demands of the corresponding sensors. The main task is the improvement of the soiling behavior, i.e., reduction or even prevention of soiling of specific surfaces, for example, windows, mirrors, and sensors. This is mostly done in late stages of vehicle development and performed by experiments, e.g., wind tunnel tests, which are supplemented by simulation at an early development stage. Among other sources, the foreign soiling on the side mirror and the side window depends on the droplets detaching from the side mirror housing. That is why a good understanding of the droplet formation process and the resulting droplet diameters behind the side
Kille, Lukas; Strohbücker, Veith; Niesner, Reinhold; Sommer, Oliver; Wozniak, Günter
This article presents a novel approach to optimize the placement of light detection and ranging (LiDAR) sensors in autonomous driving vehicles using machine learning. As autonomous driving technology advances, LiDAR sensors play a crucial role in providing accurate collision data for environmental perception. The proposed method employs the deep deterministic policy gradient (DDPG) algorithm, which takes the vehicle’s surface geometry as input and generates optimized 3D sensor positions with predicted high visibility. Through extensive experiments on various vehicle shapes and a rectangular cuboid, the effectiveness and adaptability of the proposed method are demonstrated. Importantly, the trained network can efficiently evaluate new vehicle shapes without the need for re-optimization, representing a significant improvement over classical methods such as genetic algorithms. By leveraging machine learning techniques, this research streamlines the sensor placement optimization process
Berens, Felix; Ambs, Jordan; Elser, Stefan; Reischl, Markus
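A full DDPG agent is beyond a short sketch, so the toy below substitutes a plain random search over candidate 2D mounting points, scored by the fraction of unobstructed rays. It illustrates the visibility objective being optimized, not the article's 3D DDPG method, and the obstacle geometry is invented:

```python
import numpy as np

def visibility_score(position, obstacles, n_rays=360, max_range=50.0):
    """Fraction of rays from a candidate sensor position that escape the vehicle's own
    geometry; obstacles are (center_x, center_y, radius) circles in this 2D toy."""
    angles = np.linspace(0, 2 * np.pi, n_rays, endpoint=False)
    rays = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    clear = 0
    for d in rays:
        blocked = False
        for cx, cy, r in obstacles:
            rel = np.array([cx, cy]) - position
            t = max(rel @ d, 0.0)                      # closest approach along the ray
            if t < max_range and np.linalg.norm(rel - t * d) < r:
                blocked = True
                break
        clear += not blocked
    return clear / n_rays

rng = np.random.default_rng(1)
obstacles = [(1.0, 0.0, 0.6), (-1.2, 0.4, 0.5)]        # toy stand-ins for body panels
candidates = rng.uniform(-2, 2, size=(200, 2))          # random candidate mounting points
best = max(candidates, key=lambda p: visibility_score(p, obstacles))
print(best, visibility_score(best, obstacles))
```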
The recent progress in camera-based technologies has prompted the development of prototype camera-based video systems intended to replace conventional passenger vehicle mirrors. Given that a significant number of collisions during lane changes stem from drivers being unaware of nearby vehicles, these camera-based systems offer the potential to enhance safety: by affording drivers a broader field of view, they facilitate the detection of potential conflicts. This project focused on analyzing naturalistic driving data in support of the Federal Motor Vehicle Safety Standard 111 regulatory endeavors. The goal was to assess the effectiveness and safety compatibility of prototype camera-based side-view systems as potential replacements for traditional side-view mirrors. The method involved extracting radar data from lane changes conducted by 12 drivers in two pickup trucks, comprising 10,018 signal-indicated lane changes performed at speeds consistent with highway
Guduri, Balachandar; Llaneras, Robert
The windscreen wiping system is a mandatory requirement for automotive vehicles per the Central Motor Vehicle Rules (CMVR). The main scope of the standard is to define the vision zones to be wiped by the wiping system so as to ensure the maximum field of vision for the driver. The evaluation of vision zones per IS 15802:2008 is generally carried out by OEMs through virtual simulation. The limitation of virtual simulation is that actual vehicle tolerances arising from seat fitment, ergonomic dimensions, the seat cushioning effect, and non-effective wiper operation are not well accounted for. The testing methodology described in this paper is an in-house developed test method based on SAE recommended practices. With the help of a 3D H-point machine and a laser-based theodolite providing horizontal and vertical angle projections from a single pivot point, the various vision zones are developed on an actual vehicle windscreen per the technical data. These zones are later compared with wiped
Joshi, Amol; Patil, Amol; Doshi, Anup; Nikam, Shashank; Belavadi Venkataramaiah, Shamsundara
An automated driving system is a multi-source sensor data fusion system. However, different sensor types have different operating frequencies, fields of view, detection capabilities, and sensor data transmission delays. To address these problems, this paper introduces a processing mechanism for out-of-sequence measurement data into a multi-target detection and tracking system based on millimeter-wave radar and camera. Ablation experiments show that the longitudinal and lateral tracking performance of the fusion system is improved across different distance ranges.
Li, Fu-Xiang; Zhu, Yuan
This SAE Recommended Practice defines key terms used in the description and analysis of video-based driver eye glance behavior, as well as guidance in the analysis of those data. The information provided in this practice is intended to provide consistency for terms, definitions, and analysis techniques. This practice is to be used in laboratory, driving simulator, and on-road evaluations of how people drive, with particular emphasis on evaluating Driver Vehicle Interfaces (DVIs; e.g., in-vehicle multimedia systems, controls and displays). In terms of how such data are reduced, this version only concerns manual video-based techniques. However, even in its current form, the practice should be useful for describing the performance of automated sensors (eye trackers) and automated reduction (computer vision).
Parking an articulated vehicle is a challenging task that requires skill, experience, and visibility from the driver. An automatic parking system for articulated vehicles can make this task easier and more efficient. This article proposes a novel method that finds an optimal path and controls the vehicle with an innovative method while considering its kinematics and environmental constraints, and attempts to mathematically explain the behavior of a driver who can perform a complex scenario, called the articulated vehicle park maneuver, without falling into the jackknifing phenomenon. In other words, the proposed method models how drivers park articulated vehicles in difficult situations, using different sub-scenarios and mathematical models. It also uses a soft computing method, ANFIS-FCM, because this method has proven to be a powerful tool for managing uncertain and incomplete data in learning and inference tasks, such as learning from simulations, handling uncertainty, and
Rezaei Nedamani, Hamidreza; Soleymanifard, Mostafa; Safaeifar, Ali; Khiabani, Parisa Masnadi
A new spatial calibration procedure has been introduced for infrared optical systems developed for cases where camera systems are required to be focused at distances beyond 100 meters.
Army Combat Capabilities Development Command Armaments Center, Picatinny Arsenal, NJ
All commercially available camera systems have lenses (and internal geometries) that cannot perfectly refract light waves and refocus them onto a two-dimensional (2D) image sensor. This means that all digital images contain elements of distortion and thus are not a true representation of the real world. Expensive high-fidelity lenses may have little measurable distortion, but if sufficient distortion is present, it will adversely affect photogrammetric measurements made from the images produced by these systems. This is true regardless of the type of camera system, whether it be a daylight camera, infrared (IR) camera, or camera sensitive to another part of the electromagnetic spectrum. The most common examples of large
Blind spots created by the driver-side B-pillar impair the ability of the driver to assess their surroundings accurately, significantly contributing to the frequency and severity of vehicular accidents. Vehicle manufacturers cannot readily eliminate the B-pillar due to regulatory guidelines intended to protect vehicular occupants in the event of side collisions and rollover incidents. Furthermore, assistance implements utilized to counteract the adverse effects of blind spots remain ineffective due to technological limitations and optical impediments. This paper introduces mechanisms to quantify the obstruction caused by the B-pillar when the head of the driver is facing forward and turning 90°, typical of an over-the-shoulder blind spot check. It uses the metrics developed to demonstrate the relationship between B-pillar width and the obstruction angle. The paper then creates a methodology to determine the movement required of the driver to eliminate blind spots. Ultimately, this
Baysal, Dilara N.
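The obstruction angle subtended by a pillar follows from its width and its distance from the driver's eye point; a sketch with assumed dimensions, not the paper's metrics:

```python
import math

def obstruction_angle_deg(pillar_width_m: float, eye_to_pillar_m: float) -> float:
    """Angle of the view blocked by a pillar of the given width at the given
    distance from the eye point (flat-pillar approximation)."""
    return math.degrees(2 * math.atan(pillar_width_m / (2 * eye_to_pillar_m)))

# Illustrative: a 0.12 m wide B-pillar about 0.7 m from the eye point.
print(f"{obstruction_angle_deg(0.12, 0.7):.1f} deg blocked")   # ~9.8 deg
```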
A critical first step for a robot navigating an obstacle field is to plan a collision-free path through the environment. Historically, solutions for path planning largely use grid-based search methods particularly when guarantees are required that do not permit randomization-based methods. In large operational domains, gridding the search environment necessitates significant memory overhead and corresponding performance loss. To avoid gridded maps, grid-free path planners can achieve significant benefits to performance and memory overhead. These methods utilize visibility graphs with edge costs rather than grids with cell weights to represent possible path choices. This work presents methods to extend known 2D grid-free static environment path planners into higher dimensions to use these same planners for dynamic obstacle path planning via timespace representations. Such extensions to include time trajectories into the visibility graph readily admit path planning through highly dynamic
Harnett, Stephen J.; Brennan, Sean; Pangborn, Herschel C.; Pentzer, Jesse; Reichard, Karl
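A compact 2D sketch of the visibility-graph idea: nodes are the start, the goal, and the obstacle vertices; an edge exists wherever the straight segment does not properly cross any obstacle edge; Dijkstra then returns the shortest path. The scene is invented, the crossing test is deliberately simple, and the paper's time-space extension for dynamic obstacles is omitted:

```python
import itertools
import networkx as nx

def ccw(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """True if segments p1-p2 and q1-q2 properly intersect (touching at endpoints
    or passing between vertices is allowed in this simplified toy)."""
    d1, d2 = ccw(q1, q2, p1), ccw(q1, q2, p2)
    d3, d4 = ccw(p1, p2, q1), ccw(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def visibility_graph(points, obstacle_edges):
    g = nx.Graph()
    for a, b in itertools.combinations(points, 2):
        if not any(segments_cross(a, b, e1, e2) for e1, e2 in obstacle_edges):
            g.add_edge(a, b, weight=((a[0] - b[0])**2 + (a[1] - b[1])**2) ** 0.5)
    return g

# Illustrative scene: one square obstacle between the start and the goal.
square = [(4, 4), (6, 4), (6, 6), (4, 6)]
edges = list(zip(square, square[1:] + square[:1]))
start, goal = (0, 5), (10, 5)
path = nx.dijkstra_path(visibility_graph([start, goal] + square, edges), start, goal)
print(path)
```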
Thermal control coatings, i.e. coatings with different visible versus infrared emission, have been used by NASA on the Orbiter and Hubble Telescope to reflect sunlight, while allowing heat rejection via infrared emission. However, these coatings absorb at least 6 percent of the Sun’s irradiant power, limiting the minimum temperature that can be reached to about 200 K. NASA needs better solar reflectors to keep cryogenic fuel and oxidizers cold enough to be maintained passively in deep space for future missions.
Digital shearography has many advantages, such as full-field measurement, non-contact operation, high sensitivity, and good robustness. It is widely used to measure the deformation and strain of materials, as well as for nondestructive testing (NDT). However, most digital shearography applications can only work on one field of view per measurement, and some small defects may not be detected as a result. Multiple measurements of different fields of view are needed to solve this issue, which increases the measurement time and cost. The difficulty of performing multiple measurements may also increase when the loading is not repeatable. Therefore, a system capable of measuring dual fields of view at the same time is necessary. The carrier frequency spatial phase shift method may be a good candidate to reach this goal because it can simultaneously record the phase information of multiple images, e.g., two speckle interferograms with different fields of view. It then obtains the phase
Zheng, Xiaowan; Guo, Bicheng; Fang, Siyuan; Sia, Bernard; Yang, Lianxiang
Advanced driver assistance systems rely on external sensors that encompass the vehicle. The reliability of such systems can be compromised by adverse weather, with performance hindered by both direct impingement on sensors and spray suspended between the vehicle and potential obstacles. The transportation of road spray is known to be an unsteady phenomenon, driven by the turbulent structures that characterise automotive flow fields. Further understanding of this unsteadiness is a key aspect in the development of robust sensor implementations. This paper outlines an experimental method used to analyse the spray ejected by an automotive body, presented through a study of a simplified vehicle model with interchangeable rear-end geometries. Particles are illuminated by laser light sheets as they pass through measurement planes downstream of the vehicle, facilitating imaging of the instantaneous structure of the spray. The tested configurations produce minor changes to the flow field, the
Crickmore, Conor James; Garmory, Andrew; Butcher, Daniel
With the proliferation of ADAS and autonomous systems, the quality and quantity of the data to be used by vehicles has become crucial. In-vehicle sensors are evolving, but their usability is limited to their field of view and detection distance. V2X communication systems solve these issues by creating a cooperative perception domain amongst road users and the infrastructure by communicating accurate, real-time information. In this paper, we propose a novel Consolidated Object Data Service (CODS) for multi-Radio Access Technology (RAT) V2X communication. This service collects information using BSM packets from the vehicular network and perception information from infrastructure-based sensors. The service then fuses the collected data, offering the communication participants with a consolidated, deduplicated, and accurate object database. Since fusing the objects is resource intensive, this service can save in-vehicle computation costs. The combination of diverse input sources improves
Wippelhauser, András; Chand, Arpita; Datta Gupta, Somak; Varadi, Andras
Modern vehicles use automated driving assistance systems (ADAS) products to automate certain aspects of driving, which improves operational safety. In the U.S. in 2020, 38,824 fatalities occurred due to automotive accidents, and typically about 25% of these are associated with inclement weather. ADAS features have been shown to reduce potential collisions by up to 21%, thus reducing overall accidents. But ADAS typically utilize camera sensors that rely on lane visibility and the absence of obstructions in order to function, rendering them ineffective in inclement weather. To address this research gap, we propose a new technique to estimate snow coverage so that existing and new ADAS features can be used during inclement weather. In this study, we use a single camera sensor and historical weather data to estimate snow coverage on the road. Camera data was collected over 6 miles of arterial roadways in Kalamazoo, MI. Additionally, infrastructure-based weather sensor visibility data from
Kadav, Parth; Goberville, Nicholas A; Prins, Kyle; Siems-Anderson, Amanda; Walker, Curtis; Motallebiaraghi, Farhang; Carow, Kyle; Fanas Rojas, Johan; Hong, Guan Yue; Asher, Zachary
The towbarless aircraft taxiing system (TLATS) consists of the towbarless towing vehicle (TLTV) and the aircraft. The tractor accomplishes towing by clamping the nose wheel. During towing, the tractor driver's blind spots may cause the aircraft to collide with an obstacle, leading to an accident. The special characteristics of aircraft do not allow the aircraft structure to be modified to achieve collision avoidance. In this paper, a three-degree-of-freedom (DOF) kinematic model of the tractor system is established for the two cases of pushing and pulling the aircraft, and the relationship between the coordinates of each danger point, the relative articulated angle of the TLATS, and the velocity of the midpoint of the rear axle is derived. Considering that there is an error between the velocity and relative articulated angle measured by the sensor and the actual values, the effect of velocity and relative articulated angle
Zhu, Hengjia; Xu, ZiShuo; Zhang, Baizhi; Zhang, Wei
As new headlight technologies begin to take hold in vehicular forward lighting systems and they become more commonplace on vehicles, new frameworks for evaluating the performance of these systems are being developed and promulgated. The objective of each of these systems is the same, namely, improving safety by ensuring that vehicle lighting provides sufficient visibility for drivers without negative impacts such as glare. Recent research has shown the direct link between improved driver visibility and reduced nighttime crashes. To the extent that headlight evaluation systems can be compared using visual performance modeling approaches, it should be possible to relate improved visibility from high-performing headlight systems to the potential for reduced nighttime crashes. In the present paper we demonstrate how visual performance modeling in conjunction with vehicle headlight evaluations can lead to predictions of improved safety and ultimately, beneficial economic impacts to society.
Bullough, John D.
Preclinical laboratories at academic facilities and contract research organizations (CROs) have traditionally relied on five main imaging modalities: optical, acoustic, x-ray, MRI, and nuclear. Now, photoacoustic imaging, which combines optical and acoustic modalities, is enabling some of the most promising medical research, including providing images of biological structures for increased visibility during surgery and facilitating the analysis of plaque composition to better diagnose and treat coronary artery disease (CAD).
Self-driving cars, like the human drivers that preceded them, need to see what’s around them to avoid obstacles and drive safely.