Browse Topic: Visibility

Items (840)
The larger size and expanded blind spots of heavy-duty trucks in comparison to passenger cars create unique challenges for truck drivers navigating narrow roads, such as in urban scenarios. For this reason, the detection of free space around the vehicle is of critical importance, as it has the potential to save lives and reduce operating costs through reduced maintenance and downtime. Despite the existence of numerous approaches to free space detection in the literature, few have been applied to the trucking sector, and those that have disregard aspects important for these kinds of vehicles, such as the altitude at which obstacles are located. This paper aims to present the initial results of our research, a “Not Free Space Warner”, a driving assistance function intended for implementation in series trucks. A methodology is followed to define the characteristics that the perception component of this function shall fulfill. To this end, an analysis of the most critical accidents and common driving
Martinez, Cristian; Peters, Steven
The video systems include a camera, display, and lights. Video is the recording, reproducing, or broadcasting of moving visual images as illustrated in Figure 1. A camera video imaging system is a system composed of a camera and a monitor, as well as other components, in which the monitor provides a real-time or near real-time visual image of the scene captured by the camera. Such systems are capable of providing remote views to the pilot and can therefore be used to provide improved visibility (for example, coverage of blind spots). In general, camera video systems may be used in the pilot’s work position for purposes of improving airplane and corresponding environmental visibility. Examples of aircraft video system applications include: ground maneuver or taxi camera systems; flight deck entry video surveillance systems; cargo loading and unloading; cargo compartment livestock monitoring; and monitoring systems that are used to track the external, internal, and security functions of an
A-20B Exterior Lighting Committee
Image dehazing techniques can play a vital role in object detection, surveillance, and accident prevention, especially in scenarios where visibility is compromised because of light scattering by atmospheric particles. To obtain a high-quality image, or as an initial step in processing, it is crucial to restore the scene’s information from a single image, given that this is an ill-posed inverse problem. The present approach utilized unsupervised learning to predict the transmission map from a hazy image and used YOLOv8n to detect the car from the clear recovered image. The dehazing model utilized a lightweight parallel channel architecture to extract features from the input image and estimate the transmission map. The clear image is recovered using an atmospheric scattering model and given to the YOLOv8n for car detection. By incorporating dark channel prior loss during training, the model eliminates the need for a paired dataset. The proposed dehazing model with fewer
Dave, Chintan; Patel, Hetal; Kumar, Ahlad
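As a companion to the abstract above, here is a minimal sketch of how a clear image can be recovered from a predicted transmission map by inverting the atmospheric scattering model I = J·t + A·(1 − t). The clipping floor t_min and the NumPy interface are assumptions for illustration, not details from the paper.

```python
import numpy as np

def recover_scene(hazy, transmission, atmospheric_light, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t) to estimate J.

    hazy: HxWx3 float image in [0, 1]
    transmission: HxW transmission map (e.g., predicted by a dehazing network)
    atmospheric_light: length-3 vector A
    """
    t = np.clip(transmission, t_min, 1.0)[..., np.newaxis]  # floor t to avoid amplifying noise
    clear = (hazy - atmospheric_light) / t + atmospheric_light
    return np.clip(clear, 0.0, 1.0)
```

The recovered image could then be passed to a detector such as YOLOv8n, as the abstract describes.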
The Science and Technology Directorate's (S&T) National Urban Security Technology Laboratory (NUSTL) recently brought together emergency responders from across the nation to test unmanned aircraft systems (UAS) from the Blue UAS Cleared List. By providing an aerial vantage point, and creating standoff distance between responders and potential threats, UAS can significantly mitigate safety risks to responders by allowing them to assess and monitor incidents remotely. U.S. Department of Homeland Security, Washington, D.C. In November 2024, the U.S. Department of Homeland Security's (DHS) National Urban Security Technology Laboratory (NUSTL) teamed up with Mississippi State University's (MSU) Raspet Flight Research Laboratory, and DAGER Technology LLC, to conduct an assessment on selected models of cybersecure “Blue UAS.” The drones, including models from Ascent AeroSystems, Freefly Systems, Parrot Drones, Skydio, and Teal Drones, are cybersecure and commercially available to assist
This paper introduces a method to solve the instantaneous speed and acceleration of a vehicle from one or more sources of video evidence by using optimization to determine the best fit speed profile that tracks the measured path of a vehicle through a scene. Mathematical optimization is the process of seeking the variables that drive an objective function to some optimal value, usually a minimum, subject to constraints on the variables. In the video analysis problem, the analyst is seeking a speed profile that tracks measured vehicle positions over time. Measured positions and observations in the video constrain the vehicle’s motion and can be used to determine the vehicle’s instantaneous speed and acceleration. The variables are the vehicle’s initial speed and an unknown number of periods of approximately constant acceleration. Optimization can be used to determine the speed profile that minimizes the total error between the vehicle’s calculated distance traveled at each measured
Snyder, Sean; Callahan, Michael; Wilhelm, Christopher; Johnk, Chris; Lowi, Alvin; Bretting, Gerald
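The abstract above describes fitting a speed profile to vehicle positions measured from video by optimization. The sketch below illustrates the general idea with SciPy: the decision variables are an initial speed and a fixed number of piecewise-constant accelerations, and the objective is the squared error between integrated and measured distances. The fixed breakpoints, the Nelder-Mead solver, and the function names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def fit_speed_profile(times, distances, n_segments=3):
    """Fit an initial speed and piecewise-constant accelerations so that the
    integrated travel distance best matches positions measured from video."""
    t0, t1 = times[0], times[-1]
    breaks = np.linspace(t0, t1, n_segments + 1)   # fixed acceleration segments

    def predicted_distance(params, t):
        v0, accels = params[0], params[1:]
        d, v, t_prev = 0.0, v0, t0
        for a, t_end in zip(accels, breaks[1:]):
            dt = min(t, t_end) - t_prev
            if dt <= 0:
                break
            d += v * dt + 0.5 * a * dt * dt        # integrate within the segment
            v += a * dt
            t_prev = t_end
        return d

    def cost(params):
        pred = np.array([predicted_distance(params, t) for t in times])
        return np.sum((pred - distances) ** 2)     # total squared position error

    x0 = np.zeros(1 + n_segments)
    x0[0] = (distances[-1] - distances[0]) / (t1 - t0)  # crude initial speed guess
    res = minimize(cost, x0, method="Nelder-Mead")
    return res.x  # [v0, a1, ..., aN]
```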
Headlight glare remains a persistent problem to the U.S. driving public. Over the past 30 years, vehicle forward lighting and signaling systems have evolved dramatically in terms of styling and lighting technologies used. Importantly, vehicles driven in the U.S. have increased in size during this time as the proportion of pickup trucks and sport-utility vehicles (SUVs) has increased relative to passenger sedans and other lower-height vehicles. Accordingly, estimates of typical driver eye height and the height of lighting and signaling equipment on vehicles from one or two decades ago are unlikely to represent the characteristics of current vehicles in the U.S. automotive market. In the present study we surveyed the most popular vehicles sold in the U.S. and carried out evaluations of the heights of lighting and signaling systems, as well as typical driver eye heights based on male and female drivers. These data may be of use to those interested in understanding how exposure to vehicle
Bullough, John D.
Videos from cameras onboard a moving vehicle are increasingly available to collision reconstructionists. The goal of this study was to evaluate the accuracy of speeds, decelerations, and brake onset times calculated from onboard dash cameras (“dashcams”) using a match-moving technique. We equipped a single test vehicle with 5 commercially available dashcams, a 5th wheel, and a brake pedal switch to synchronize the cameras and 5th wheel. The 5th wheel data served as the reference for the vehicle kinematics. We conducted 9 tests involving a constant-speed approach (mean ± standard deviation = 57.6 ± 2.0 km/h) followed by hard braking (0.989 g ± 0.021 g). For each camera and brake test, we extracted the video and calculated the camera’s position in each frame using SynthEyes, a 3D motion tracking and video analysis program. Scale and location for the analyses were based on a 3D laser scan of the test site. From each camera’s position data, we calculated its speed before braking and its
Flynn, Thomas; Ahrens, Matthew; Young, Cole; Siegmund, Gunter P.
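As a small illustration of the kind of post-processing the study above implies, once a match-moving solve (for example from SynthEyes) yields a camera position per frame, speed can be estimated by differentiating the track. The frame-rate argument, unit conversion, and use of NumPy central differences are assumptions for this sketch.

```python
import numpy as np

def speeds_from_camera_track(positions, frame_rate):
    """positions: Nx3 camera positions in metres, one row per video frame.
    Returns the speed history in km/h using central differences."""
    dt = 1.0 / frame_rate
    velocity = np.gradient(positions, dt, axis=0)   # per-axis velocity estimate
    speed_ms = np.linalg.norm(velocity, axis=1)     # scalar speed per frame
    return speed_ms * 3.6
```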
Headliners are one of the largest components inside an automobile, stretching from the front windshield to the rear windshield. Besides its aesthetic purpose, the headliner serves multiple other functions: it houses different components, helps with NVH, defines the interior roominess, and plays a crucial role in defining the deployment of the curtain airbag. The headliner also plays a role in meeting regulatory requirements like the upward visibility and headroom requirements of the occupants. During deployment of the curtain airbag, it is important that the headliner-pillar interface aids in the easy opening of the airbag, with the least hindrance. This is defined by multiple factors like the location of the headliner-pillar interface, its distance from the airbag ramp bracket, the position of the inflator, and the mountings of the headliner and pillar trims, to name a few. Also, during deployment of the airbag, it is important that parts such as the grab handle, speaker grilles, etc., which are fitted on the
Sabesan, Arvind Kochi; D., Anantha; Kakani, Phani Kumar
This study outlines a camera-based perspective transformation method for measuring driver direct visibility, which produces 360-degree view maps of the nearest visible ground points. This method is ideal for field data collection due to its portability and minimal space requirements. Compared with ground truth assessments using a physical grid, this method was found to have a high level of accuracy, with all points in front of the vehicle varying by less than 0.30 m and points at the A- and B-pillars varying by less than 0.6 m. Points out of the rear window varied by up to 2.4 m and were highly sensitive to differences in the chosen pixel due to their greater distance from the camera. Repeatability through trials of multiple measurements per vehicle and reproducibility through measures from multiple data collectors produced highly similar results, with the greatest variations ranging from 0.19 to 1.38 m. Additionally, three different camera lenses were evaluated, resulting in comparable results within
Mueller, Becky; Bragg, Haden; Bird, Teddy
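To make the perspective-transformation idea in the study above concrete, the sketch below maps an image pixel of a nearest visible ground point onto ground-plane coordinates using a homography fitted from reference markers. The marker layout, pixel coordinates, and OpenCV workflow are assumed for illustration and are not taken from the paper.

```python
import numpy as np
import cv2

# Four ground reference markers with known positions (metres) and their pixel locations (assumed values).
ground_pts = np.float32([[0, 0], [5, 0], [5, 5], [0, 5]])
pixel_pts  = np.float32([[412, 915], [1510, 905], [1280, 640], [660, 648]])

H = cv2.getPerspectiveTransform(pixel_pts, ground_pts)  # pixel -> ground homography

def pixel_to_ground(u, v):
    """Project an image pixel of the nearest visible ground point onto the ground plane."""
    p = cv2.perspectiveTransform(np.float32([[[u, v]]]), H)
    return p[0, 0]  # (x, y) in metres relative to the reference grid
```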
India has one of the highest accident rates in the world. Quite a few accidents have been attributed to poor driver visibility. Driver visibility is an important factor that can help mitigate the risk of accidents. The optimal visibility of in-vehicle controls is also essential for improving driver experience. Optimized driver visibility improves driving comfort and gives confidence to the driver, ensuring the safety of drivers and subsequently that of pedestrians. Driver visibility is an important consideration for vehicle occupant packaging and SAE has defined various standards and regulations for the same. These guidelines are defined considering American anthropometry, helping OEMs create global vehicles with uniform checkpoints. However, due to anthropometric differences, a need was felt to capture and analyze Indian-specific eyellipse and eye points. To measure the eye point of the user in a controlled environment, the interiors of a passenger vehicle were simulated using a
P H, Salman; Kalra, Prerita; Rawat, Ashish; Sharma, Deepak; Singh, Ashwinder
Camera matching photogrammetry is widely used in the field of accident reconstruction for mapping accident scenes, modeling vehicle damage from post collision photographs, analyzing sight lines, and video tracking. A critical aspect of camera matching photogrammetry is determining the focal length and Field of View (FOV) of the photograph being analyzed. The intent of this research is to analyze the accuracy of the metadata reported focal length and FOV. The FOV from photographs captured by over 20 different cameras of various makes, models, sensor sizes, and focal lengths will be measured using a controlled and repeatable testing methodology. The difference in measured FOV versus reported FOV will be presented and analyzed. This research will provide analysts with a dataset showing the possible error in metadata reported FOV. Analysts should consider the metadata reported FOV as a starting point for photogrammetric analysis and understand that the FOV calculated from the image
Smith, Connor A.; Erickson, Michael; Hashemian, Alireza
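For context on the FOV values discussed above, the field of view implied by a reported focal length follows from simple geometry: FOV = 2·arctan(sensor dimension / (2·focal length)). The example numbers below (a 24 mm lens on a 36 mm-wide sensor) are illustrative only.

```python
import math

def field_of_view_deg(focal_length_mm, sensor_width_mm):
    """Horizontal FOV implied by the metadata focal length and sensor width."""
    return 2.0 * math.degrees(math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# e.g. a 24 mm lens on a full-frame (36 mm wide) sensor -> about 73.7 degrees
print(round(field_of_view_deg(24, 36), 1))
```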
Off-road vehicles are required to traverse a variety of pavement environments, including asphalt roads, dirt roads, sandy terrains, snowy landscapes, rocky paths, brick roads, and gravel roads, over extended periods while maintaining stable motion. Consequently, the precise identification of pavement types, road unevenness, and other environmental information is crucial for intelligent decision-making and planning, as well as for assessing traversability risks in the autonomous driving functions of off-road vehicles. Compared to traditional perception solutions such as LiDAR and monocular cameras, stereo vision offers advantages like a simple structure, wide field of view, and robust spatial perception. However, its accuracy and computational cost in estimating complex off-road terrain environments still require further optimization. To address this challenge, this paper proposes a terrain environment estimating method for off-road vehicle anticipated driving area based on stereo
Zhao, Jian; Zhang, Xutong; Hou, Jie; Chen, Zhigang; Zheng, Wenbo; Gao, Shang; Zhu, Bing; Chen, Zhicheng
In the Baja race, off-road vehicles need to run under a variety of real and complex off-road conditions such as pebble roads, shell pits, rough stone roads, humps, water puddles, etc. In the course of this high-intensity, high-concentration race, an ergonomically unoptimized cab design can easily cause visual and handling fatigue, so that the driver's attention is not concentrated, leading to safety accidents. Moreover, lower back pain, sciatic nerve discomfort, lumbar spine diseases, and other occupational diseases are largely caused by an uncomfortable driving posture and unreasonable control matching, both of which have a lot to do with unreasonable ergonomic design. To solve these problems, a human body model of the driver is first established, and the BSC racing car model is then built using the 3D modeling software CATIA. The ergonomics simulation software Jack is then used to analyze visibility, accessibility, and comfort. Based on the simulation
Liu, Yuzhou; Liu, Silang
Background. In 2022, vulnerable road user (VRU) deaths in the United States increased to their highest level in more than 40 years. At the same time, increasing vehicle size and taller front ends may contribute to larger forward blind zones, but little is known about the role that visual occlusion may play in this trend. Goal. Researchers measured the blind zones of six top-selling light-duty vehicle models (one pickup truck, three SUVs, and two passenger cars) across multiple redesign cycles (1997–2023) to determine whether the blind zones were getting larger. Method. To quantify the blind zones, the markerless method developed by the Insurance Institute for Highway Safety was used to calculate the occluded and visible areas at ground level in the forward 180° arc around the driver at ranges of 10 m and 20 m. Results. In the 10-m forward radius nearest the vehicle, outward visibility declined in all six vehicle models measured across time. The SUV models showed up to a 58% reduction
Epstein, Alexander K.; Brodeur, Alyssa; Drake, Juwon; Englin, Eric; Fisher, Donald L.; Zoepf, Stephen; Mueller, Becky C.; Bragg, Haden
Roadside perception technology is an essential component of traffic perception technology, primarily relying on various high-performance sensors. Among these, LiDAR stands out as one of the most effective sensors due to its high precision and wide detection range, offering extensive application prospects. This study proposes a voxel density-nearest neighbor background filtering method for roadside LiDAR point cloud data. Firstly, based on the relatively fixed nature of roadside background point clouds, a point cloud filtering method combining voxel density and nearest neighbors is proposed. This method involves voxelizing the point cloud data and using voxel grid density to filter background point clouds; the results are then processed through a neighbor-point frame sequence to calculate the average distance of the specified points and compare it with a distance threshold to complete accurate background filtering. Secondly, a VGG16-Pointpillars model is proposed, incorporating a CNN
Liu, Zhiyuan; Rui, Yikang
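A rough sketch of the voxel-density idea described above is shown below: points falling in voxels that stay densely occupied across accumulated frames are flagged as static background, and a nearest-neighbor check against a background reference cloud refines the result. The voxel size, thresholds, and the NumPy/SciPy interfaces are assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.spatial import cKDTree

def voxel_density_mask(points, voxel=0.3, density_thresh=25):
    """Flag points in densely occupied voxels as presumed static background.
    points: Nx3 array of LiDAR points accumulated over many frames."""
    idx = np.floor(points / voxel).astype(np.int64)
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True, return_counts=True)
    return counts[inverse] >= density_thresh

def nearest_neighbor_refine(points, candidate_mask, background_ref, dist_thresh=0.2):
    """Keep a candidate as background only if it lies close to an accumulated
    background reference cloud (a stand-in for the paper's neighbor-frame check)."""
    tree = cKDTree(background_ref)
    dist, _ = tree.query(points[candidate_mask], k=1)
    refined = candidate_mask.copy()
    refined[np.where(candidate_mask)[0][dist > dist_thresh]] = False
    return refined
```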
Secondary crashes, including struck-by incidents, are a leading cause of line-of-duty deaths among emergency responders, such as firefighters, law enforcement officers, and emergency medical service providers. The introduction of light-emitting diode (LED) sources and advanced lighting control systems provides a wide range of options for emergency lighting configurations. This study investigated the impact of lighting color, intensity, modulation, and flash rate on driver behavior while traversing a traffic incident scene at night. The impact of retroreflective chevron markings in combination with lighting configurations, as well as the measurement of “moth-to-flame” effects of emergency lighting on drivers, was also investigated. This human factors study recruited volunteers to drive a closed-course traffic incident scene at night under various experimental conditions. The simulated traffic incident was designed to replicate a fire apparatus in the center-block position. The incident
Bullough, John D.; Parr, Scott; Hiebner, Emily; Sblendorio, Alec
Vehicle localization in enclosed environments, such as indoor parking lots, tunnels, and confined areas, presents significant challenges and has garnered considerable research interest. This paper proposes a localization technique based on an onboard binocular camera system, utilizing binocular ranging and spatial intersection algorithms to achieve active localization. The method involves pre-deploying reference points with known coordinates within the experimental space, using binocular ranging to measure the distance between the camera and the reference points, and applying the spatial intersection algorithm to calculate the camera’s center coordinates, thereby completing the localization process. Experimental results demonstrate that the proposed algorithm achieves sub-meter level localization accuracy. Localization accuracy is significantly influenced by the calibration precision of the binocular camera and the number of reference points. Higher calibration precision and a greater
Feifei, Li; Haoping, Qi; Yi, Wei
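Two ingredients of the binocular localization approach summarized above can be written compactly: range from stereo disparity (Z = f·B/d) and a least-squares spatial intersection of the camera centre from ranges to known reference points. The SciPy solver and the function interfaces below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo range to a reference point: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def camera_center(ref_points, ranges, x0=None):
    """Spatial intersection: solve for the camera centre whose distances to the
    known reference points best match the stereo-measured ranges."""
    ref_points = np.asarray(ref_points, float)
    ranges = np.asarray(ranges, float)
    if x0 is None:
        x0 = ref_points.mean(axis=0)   # start from the centroid of the references
    res = least_squares(lambda c: np.linalg.norm(ref_points - c, axis=1) - ranges, x0)
    return res.x
```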
To improve the accuracy and reliability of short-term prediction of highway visibility level in key scenarios characterized by short duration and rapid change, this paper proposes a short-term prediction method for highway visibility level based on an attention-mechanism LSTM. Firstly, XGBoost and SHAP methods are used to analyze the factors affecting highway visibility, determine the importance ranking of different influencing factors, and select the factors that have a greater impact on visibility as inputs for the visibility level prediction model. Secondly, with LSTM as the foundation network and an innovatively coupled attention mechanism, a visibility level prediction model based on an attention-mechanism LSTM is constructed, which can dynamically update the correlation between the meteorological feature information at each historical time point and the visibility level at the current prediction time, thereby weighting the importance of information and flexibly capturing important
Ding, Shanshan; Xiong, Zhuozhi; Huang, Xu; Li, Yurong
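To illustrate the model family described above, here is a minimal PyTorch sketch of an LSTM whose hidden states are pooled with a learned attention weight per historical time step before classifying the visibility level. The layer sizes, the additive attention form, and the class count are assumptions for illustration and do not reproduce the authors' architecture.

```python
import torch
import torch.nn as nn

class AttentionLSTM(nn.Module):
    """LSTM encoder with attention over historical time steps, followed by a
    classifier over discrete visibility levels."""
    def __init__(self, n_features, hidden=64, n_levels=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, n_levels)

    def forward(self, x):                       # x: (batch, time, features)
        h, _ = self.lstm(x)                     # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weight per time step
        context = (w * h).sum(dim=1)            # weighted summary of the history
        return self.head(context)               # logits over visibility levels

# Example: model = AttentionLSTM(n_features=8); logits = model(torch.randn(32, 24, 8))
```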
This research explores the use of salt gradient solar ponds (SGSPs) as an environmentally friendly and efficient method for thermal energy storage. The study focuses on the design, construction, and performance evaluation of SGSP systems integrated with reflectors, comparing their effectiveness against conventional SGSP setups without reflectors. Both experimental and numerical methods are employed to thoroughly assess the thermal behavior and energy efficiency of these systems. The findings reveal that the SGSP with reflectors (SGSP-R) achieves significantly higher temperatures across all three zones—Upper Convective Zone (UCZ), Non-Convective Zone (NCZ), and Lower Convective Zone (LCZ)—with recorded temperatures of 40.56°C, 54.2°C, and 63.1°C, respectively. These values represent an increase of 6.33%, 11.12%, and 14.26% over the temperatures observed in the conventional SGSP (SGSP-C). Furthermore, the energy efficiency improvements in the UCZ, NCZ, and LCZ for the SGSP-R are
J, Vinoth Kumar
This research aimed to explore the integration of virtual reality (VR) technology in ergonomically testing automotive interior designs. The objective was to ensure that such technology could be used to improve user comfort through controlled simulations. Existing ergonomic testing methods are often limited when it comes to recreating actual driving situations and quickly iterating design improvements. VR could serve as a solution because its ergonomically tested simulation can provide users with the real experience of driving. The users can be observed while they experience it and asked for their feedback. For this research, an interactive VR environment imitating a 10-minute-long trip through traffic and changing road conditions was created. It was populated by ten users, split equally between men and women, aged 20-35, representing the approximate demographics of workers in the automotive production industry. Participants of the research were asked to use
Natrayan, L.; Kaliappan, Seeniappan; Swamy Nadh, V.; Maranan, Ramya; Balaji, V.
Visual perception systems for autonomous vehicles are exposed to a wide variety of complex weather conditions, among which rainfall is one of the weather conditions with high exposure. Therefore, it is necessary to construct a model that can efficiently generate a large number of images with different rainfall intensities to help test the visual perception system under rainfall conditions. However, the existing datasets either do not contain multilevel rainfall or consist of synthetic images, making it difficult to support the construction of such a model. In this paper, natural rainfall images of different rainfall intensities were first collected to produce a natural multilevel rain dataset. The dataset includes no rain and three levels (light, medium, and heavy) of rainfall, with 629, 210, 248, and 193 images respectively, totaling 1280 images. The dataset is open source and available online via: https://github.com/raydison/natural-multilevel-rain-dataset-NMRD. Subsequently, a
Liu, Zhenyuan; Jia, Tong; Xing, Xingyu; Wu, Jianfeng; Chen, Junyi
Letter from the Guest Editors
van Schijndel, Margriet; Sciarretta, Antonio; Op den Camp, Olaf; Krosse, Bastiaan
This SAE Recommended Practice establishes three alternate methods for describing and evaluating the truck driver's viewing environment: the Target Evaluation, the Polar Plot and the Horizontal Planar Projection. The Target Evaluation describes the field of view volume around a vehicle, allowing for ray projections, or other geometrically accurate simulations, that demonstrate areas visible or non-visible to the driver. The Target Evaluation method may also be conducted manually, with appropriate physical layouts, in lieu of CAD methods. The Polar Plot presents the entire available field of view in an angular format, onto which items of interest may be plotted, whereas the Horizontal Planar Projection presents the field of view at a given elevation chosen for evaluation. These methods are based on the Three Dimensional Reference System described in SAE J182a. This document relates to the driver's exterior visibility environment and was developed for the heavy truck industry (Class B
Truck and Bus Human Factors Committee
Driving at night presents a myriad of challenges, with one of the most significant being visibility, especially on curved roads. Despite the fact that only a quarter of driving occurs at night, research indicates that over half of driving accidents happen during this period. This alarming statistic underscores the urgent need for improved illumination solutions, particularly on curved roads, to enhance driver visibility and consequently, safety. Conventional headlamp systems, while effective in many scenarios, often fall short in adequately illuminating curved roads, thereby exacerbating the risk of accidents during nighttime driving. In response to this critical issue, considerable efforts have been directed towards the development of alternative technologies, chief among them being Adaptive Front Lighting Systems (AFS). The primary objective of this endeavor is to design and construct a prototype AFS that can seamlessly integrate into existing fixed headlamp systems. Throughout the
T, Karthi; G, Manikandan; P C, Murugan; S, Sakthivel; N, Vinu; P, Dineshkumar
Sensata Technologies' booth at this year's IAA Transportation tradeshow included two of the company's PreView radar sensors. The PreView STA79 is a heavy-duty vehicle side-monitoring system launched in May 2024 and designed to comply with Europe-wide blind spot monitoring legislation introduced in June 2024. The PreView Sentry 79 is a front- and rear-monitoring system. Both systems operate on the 79-GHz band, as the nomenclature suggests. PreView STA79 can cover up to three vehicle zones: a configurable center zone, which can monitor the length of the vehicle, and two further zones that can be independently set to align with individual customer needs. The system offers a 180-degree field of view to eliminate blind spots along the vehicle sides and a built-in measurement unit that will increase the alert level when turning toward an object even when the turn indicator is not used. The system also features trailer mitigation to reduce false positive alerts on the trailer when turning. The
Kendall, John
The scope of this SAE Aerospace Information Report (AIR) is to discuss factors affecting visibility of aircraft navigation and anticollision lights, enabling those concerned with their use to have a better technical understanding of such factors, and to aid in exercising appropriate judgment in the many possible flight eventualities.
A-20B Exterior Lighting Committee
This study aims to elucidate the impact of A-pillar blind spots on drivers’ visibility of pedestrians during left and right turns at an intersection. An experiment was conducted using a sedan and a truck, with a professional test driver participating. The driver was instructed to maintain sole focus on a designated pedestrian model from the moment it was first sighted during each drive. The experimental results revealed how the blind spots caused by A-pillars occur and clarified the relationship between the pedestrian visible trajectory distance and specific vehicle windows. The results indicated that the shortest trajectory distance over which a pedestrian remained visible in the sedan was 17.6 m for a far-side pedestrian model during a right turn, where visibility was exclusively through the windshield. For the truck, this distance was 20.9 m for a near-side pedestrian model during a left turn, with visibility through the windshield of 9.5 m (45.5% of 20.9 m) and through the
Matsui, Yasuhiro; Oikawa, Shoko
As humans, most of us rely heavily on our visual abilities to function in the world; we are optically oriented. In the broadest sense, “optics” refers to the study of sight and light. At its foundation, Radiant’s business is all about optics: measuring light and the properties of light in relation to the human eye. Photometry is the science of light according to our visual perception. Colorimetry is the science of color: how our eyes interpret different wavelengths of light.
Deep learning algorithms are being widely used in autonomous driving (AD) and advanced driver assistance systems (ADAS) due to their impressive capabilities in visual perception of the environment of a car. However, the reliability of these algorithms is known to be challenging due to their data-driven and black-box nature. This holds especially true when it comes to accurate and reliable perception of objects in edge case scenarios. So far, the focus has been on normal driving situations and there is little research on evaluating these systems in a safety-critical context like pre-crash scenarios. This article describes a project that addresses this problem and provides a publicly available dataset along with key performance indicators (KPIs) for evaluating visual perception systems under pre-crash conditions.
Bakker, Jörg
A total of 93 tests were conducted in daytime conditions to evaluate the effect on the Time to Collision (TTC), emergency braking, and avoidance rates of the Forward Collision Warning (FCW) and Automatic Emergency Braking (AEB) provided by a 2022 Tesla Model 3 against a 4ActivePA adult static pedestrian target. Variables that were evaluated included the vehicle speed on approach, pedestrian offsets, pedestrian clothing, and user-selected FCW settings. As a part of Tesla’s Collision Avoidance Assist™, these user-selected FCW settings change the timing of the visual and/or audible warning provided. This testing evaluated the Tesla at speeds of 25 and 35 miles per hour (mph) versus a stationary pedestrian target in the early, medium, and late FCW settings. Testing was also conducted with 50% and 75% pedestrian offset conditions relative to the right side of the Tesla. The pedestrian target was clothed with and without a reflective safety vest to account for
Harrington, Shawn; Nagarajan, Sundar Raman; Lau, James
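For reference on the TTC metric used in the testing above, time to collision for a closing conflict is simply the remaining range divided by the closing speed. The small helper below and its example numbers are illustrative, not values from the tests.

```python
def time_to_collision(range_m, vehicle_speed_mps, target_speed_mps=0.0):
    """TTC for a closing conflict: range divided by closing speed. For a stationary
    target the closing speed equals the vehicle speed. Returns None if not closing."""
    closing = vehicle_speed_mps - target_speed_mps
    return range_m / closing if closing > 0 else None

# 35 mph is about 15.6 m/s; a warning issued 31 m from a stationary pedestrian gives TTC of roughly 2.0 s
```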
In the dense fabric of urban areas, electric scooters have rapidly become a preferred mode of transportation. As they cater to modern mobility demands, they present significant safety challenges, especially when interacting with pedestrians. In general, e-scooters are suggested to be ridden in bike lanes/sidewalks or share the road with cars at the maximum speed of about 15-20 mph, which is more flexible and much faster than pedestrians and bicyclists. Accurate prediction of pedestrian movement, coupled with assistant motion control of scooters, is essential in minimizing collision risks and seamlessly integrating scooters in areas dense with pedestrians. Addressing these safety concerns, our research introduces a novel e-Scooter collision avoidance system (eCAS) with a method for predicting pedestrian trajectories, employing an advanced Long short-term memory (LSTM) network integrated with a state refinement module. This method predicts future trajectories by considering not just past
Yan, Xuke; Shen, Dan
Temporal light modulation (TLM), colloquially known as “flicker,” is an issue in almost all lighting applications, due to widespread adoption of LED and OLED sources and their driving electronics. A subset of LED/OLED lighting systems delivers problematic TLM, often in specific types of residential, commercial, outdoor, and vehicular lighting. Dashboard displays, touchscreens, marker lights, taillights, daytime running lights (DRL), interior lighting, etc. frequently use pulse width modulation (PWM) circuits to achieve different luminances for different times of day and users’ visual adaptation levels. The resulting TLM waveforms and viewing conditions can result in distraction and disorientation, nausea, cognitive effects, and serious health consequences in some populations, occurring with or without the driver, passenger, or pedestrian consciously “seeing” the flicker. There are three visual responses to TLM: direct flicker, the stroboscopic effect, and phantom array effect (also
Miller, Naomi; Irvin, Lia
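As a concrete handle on the TLM waveforms discussed above, one commonly used descriptor is modulation depth (Michelson contrast), (Lmax − Lmin)/(Lmax + Lmin). The article may rely on additional metrics such as stroboscopic visibility measures, so this sketch is only a simple, assumed example.

```python
import numpy as np

def modulation_depth(waveform):
    """Michelson contrast of a measured light waveform: (Lmax - Lmin) / (Lmax + Lmin)."""
    lmax, lmin = np.max(waveform), np.min(waveform)
    return (lmax - lmin) / (lmax + lmin)

# A PWM-driven taillight that swings between full output and zero has a modulation
# depth of 1.0 regardless of its duty cycle, even if the average luminance looks dim.
```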
The prediction of agents' future trajectory is a crucial task in supporting advanced driver-assistance systems (ADAS) and plays a vital role in ensuring safe decisions for autonomous driving (AD). Currently, prevailing trajectory prediction methods heavily rely on high-definition maps (HD maps) as a source of prior knowledge. While HD maps enhance the accuracy of trajectory prediction by providing information about the surrounding environment, their widespread use is limited due to their high cost and legal restrictions. Furthermore, due to object occlusion, limited field of view, and other factors, the historical trajectory of the target agent is often incomplete. This limitation significantly reduces the accuracy of trajectory prediction. Therefore, this paper proposes ETSA-Pred, a mapless trajectory prediction model that incorporates enhanced temporal modeling and spatial self-attention. The novel enhanced temporal modeling is based on neural controlled differential equations (NCDEs
Wei, Zhao; Wu, Xiaodong
ISO 26262-1:2018 defines the fault tolerant time interval (FTTI) as the minimum time span from the occurrence of a fault within an electrical / electronic system to a possible occurrence of a hazardous event. FTTI provides a time limit within which compliant vehicle safety mechanisms must detect and react to faults capable of posing risk of harm to persons. This makes FTTI a vital safety characteristic for system design. Common automotive industry practice accommodates recording fault times of occurrence definitively. However, current practice for defining the time of hazardous event onset relies upon subjective judgements. This paper presents a novel method to define hazardous event onset more objectively. The method introduces the Streetscope Collision Hazard Measure (SHM™) and a refined approach to hazardous event classification. SHM inputs kinematic factors such as proximity, relative speed, and acceleration as well as environmental characteristics like traffic patterns
Jones, Darren; Gangadhar, Pavankumar; McGrail, Randall; Pati, Sudipta; Antonsson, Erik; Patel, Ravi
Ergonomics plays an important role in automobile design to achieve optimal compatibility between occupants and vehicle components. The overall goal is to ensure that the vehicle design accommodates the target customer group, who come in varied sizes, preferences, and tastes. Headroom is one such metric that not only influences the accommodation rate but also conveys a visual perception of how spacious the vehicle is. Adequate headroom is necessary for good seating comfort and a relaxed driving experience. Headroom is intensely discussed in magazine tests and is one of the key deciding factors in purchasing a car. SAE J1100 defines a set of measurements and standard procedures for motor vehicle dimensions. H61, W27, W35, H35 and W38 are some of the standard dimensions that relate to headroom and head clearances. While developing the vehicle architecture in the early design phase, it is customary to specify targets for various ergonomic attributes and arrive at the above-mentioned
Rajakumaran, Sriram; S, Rahul; Vasireddy, Rakesh Mitra; Nair, Suhas
SLAM (Simultaneous Localization and Mapping) plays a key role in autonomous driving. Recently, 4D Radar has attracted widespread attention because it breaks through the limitations of 3D millimeter wave radar and can simultaneously detect the distance, velocity, horizontal azimuth and elevation azimuth of the target with high resolution. However, there are few studies on 4D Radar in SLAM. In this paper, RI-FGO, a 4D Radar-Inertial SLAM method based on Factor Graph Optimization, is proposed. The RANSAC (Random Sample Consensus) method is used to eliminate the dynamic obstacle points from a single scan, and the ego-motion velocity is estimated from the static point cloud. A 4D Radar velocity factor is constructed in GTSAM to receive the estimated velocity in a single scan as a measurement and directly integrated into the factor graph. The 4D Radar point clouds of consecutive frames are matched as the odometry factor. A modified scan context method, which is more suitable for 4D Radar’s
Zihang, He; Xiong, Lu; Zhuo, Guirong; Gao, Letian; Lu, Shouyi; Zhu, Jiaqi; Leng, Bo
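One step mentioned in the abstract above, estimating the ego-motion velocity from a single radar scan of static points, reduces to a linear least-squares problem: each static return at unit direction d measures a Doppler velocity v_r = −d·v_ego. The NumPy sketch below shows only that inner fit; the RANSAC outlier rejection described in the paper is omitted, and the interface is an assumption.

```python
import numpy as np

def ego_velocity(points, radial_velocities):
    """Least-squares ego-motion velocity from one 4D-radar scan.

    points: Nx3 positions of (assumed static) returns in the radar frame
    radial_velocities: length-N Doppler measurements for those returns
    """
    d = points / np.linalg.norm(points, axis=1, keepdims=True)   # unit directions
    v_ego, *_ = np.linalg.lstsq(-d, radial_velocities, rcond=None)
    return v_ego
```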
For safe driving, signs must be visible. Sign visibility is a function of luminance intensity. During the day, ambient light conditions mean sign luminance is not a major concern, but at night, in the absence of sunlight, sign board retro-reflectivity plays a crucial role in sign visibility. The vehicle headlamp color, beam pattern, lamp installation position, the relative seating position of the driver, and moonlight conditions are important factors. A virtual simulation approach is used for analyzing sign board visibility. Among the various factors, the headlamp installation height from the ground, the distance between the two lamps, and the eye position of the driver are considered in this paper for analyzing sign board visibility. Many automotive organizations have widely varying requirements and established testing guidelines to ensure visibility of signs in headlamp physical testing, but there are no guidelines for sign visibility during the headlamp design stage. In this
Yadav, Prashant Maruti
The improvement of vehicle soiling behavior has attracted increasing interest over the past few years, not only to satisfy customer requirements and ensure good visibility of the surrounding traffic but also for autonomous vehicles, for which soiling investigation and improvement are even more important due to the cleanliness demands and resulting functionality of the corresponding sensors. The main task is the improvement of the soiling behavior, i.e., reduction or even prevention of soiling of specific surfaces, for example, windows, mirrors, and sensors. This is mostly done in late stages of vehicle development and performed by experiments, e.g., wind tunnel tests, which are supplemented by simulation at an early development stage. Among other sources, the foreign soiling on the side mirror and the side window depends on the droplets detaching from the side mirror housing. That is why a good understanding of the droplet formation process and the resulting droplet diameters behind the side
Kille, Lukas; Strohbücker, Veith; Niesner, Reinhold; Sommer, Oliver; Wozniak, Günter
This article presents a novel approach to optimize the placement of light detection and ranging (LiDAR) sensors in autonomous driving vehicles using machine learning. As autonomous driving technology advances, LiDAR sensors play a crucial role in providing accurate collision data for environmental perception. The proposed method employs the deep deterministic policy gradient (DDPG) algorithm, which takes the vehicle’s surface geometry as input and generates optimized 3D sensor positions with predicted high visibility. Through extensive experiments on various vehicle shapes and a rectangular cuboid, the effectiveness and adaptability of the proposed method are demonstrated. Importantly, the trained network can efficiently evaluate new vehicle shapes without the need for re-optimization, representing a significant improvement over classical methods such as genetic algorithms. By leveraging machine learning techniques, this research streamlines the sensor placement optimization process
Berens, Felix; Ambs, Jordan; Elser, Stefan; Reischl, Markus
The windscreen wiping system is a mandatory requirement for automotive vehicles as per the Central Motor Vehicle Rules (CMVR). The main scope of the standard is to ensure the vision zones to be wiped by the wiping system, ensuring the maximum field of vision for the driver. The vision zones per IS 15802:2008 are generally evaluated by OEMs through virtual simulation. The limitation of virtual simulation is that actual vehicle tolerances, arising from seat fitment, ergonomic dimensions, seat cushioning effects, and ineffective wiper operation, are not well taken into consideration. The testing methodology described in this paper is an in-house developed test method based on SAE recommended practices. With the help of a 3D H-point machine and a laser-based ‘Theodolite’ equipped with horizontal and vertical angle projections from a single pivot point, various vision zones are developed on an actual vehicle windscreen as per technical data. These zones are later compared with wiped
Joshi, Amol; Patil, Amol; Doshi, Anup; Nikam, Shashank; Belavadi Venkataramaiah, Shamsundara
The recent progress in camera-based technologies has prompted the development of prototype camera-based video systems, intended to replace conventional passenger vehicle mirrors. Given that a significant number of collisions during lane changes stem from drivers being unaware of nearby vehicles, these camera-based systems offer the potential to enhance safety. By affording drivers a broader field of view, they facilitate the detection of potential conflicts. This project was focused on analyzing naturalistic driving data in support of the Federal Motor Vehicle Safety Standard 111 regulatory endeavors. The goal was to assess the effectiveness and safety compatibility of prototype camera-based side-view systems as potential replacements for traditional side-view mirrors. The method employed involved extracting radar data from instances of lane changes conducted by 12 drivers in two pick-up trucks, comprising 10,018 signal-indicated lane changes performed at speeds consistent with highway
Guduri, Balachandar; Llaneras, Robert
An automated driving system is a multi-source sensor data fusion system. However, different sensor types have different operating frequencies, fields of view, detection capabilities, and sensor data transmission delays. To address these problems, this paper introduces a processing mechanism for out-of-sequence measurement data into a multi-target detection and tracking system based on millimeter-wave radar and camera. Comparison through ablation experiments shows that the longitudinal and lateral tracking performance of the fusion system is improved across different distance ranges.
Li, Fu-Xiang; Zhu, Yuan
Parking an articulated vehicle is a challenging task that requires skill, experience, and visibility from the driver. An automatic parking system for articulated vehicles can make this task easier and more efficient. This article proposes a novel method that finds an optimal path and controls the vehicle with an innovative method while considering its kinematics and environmental constraints, and attempts to mathematically explain the behavior of a driver who can perform a complex scenario, called the articulated vehicle park maneuver, without falling into the jackknifing phenomenon. In other words, the proposed method models how drivers park articulated vehicles in difficult situations, using different sub-scenarios and mathematical models. It also uses soft computing methods, namely ANFIS-FCM, because this method has proven to be a powerful tool for managing uncertain and incomplete data in learning and inference tasks, such as learning from simulations, handling uncertainty, and
Rezaei Nedamani, Hamidreza; Soleymanifard, Mostafa; Safaeifar, Ali; Khiabani, Parisa Masnadi
A new spatial calibration procedure has been introduced for infrared optical systems developed for cases where camera systems are required to be focused at distances beyond 100 meters. Army Combat Capabilities Development Command Armaments Center, Picatinny Arsenal, NJ All commercially available camera systems have lenses (and internal geometries) that cannot perfectly refract light waves and refocus them onto a two-dimensional (2D) image sensor. This means that all digital images contain elements of distortion and thus are not a true representation of the real world. Expensive high-fidelity lenses may have little measurable distortion, but if sufficient distortion is present, it will adversely affect photogrammetric measurements made from the images produced by these systems. This is true regardless of the type of camera system, whether it be a daylight camera, infrared (IR) camera, or camera sensitive to another part of the electromagnetic spectrum. The most common examples of large
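As background for the distortion discussion above, a standard (Brown-Conrady) radial model describes how an ideal normalized image point is displaced on the sensor. The procedure in the report itself may differ, and the two-coefficient form below is an assumption for illustration.

```python
def apply_radial_distortion(x, y, k1, k2):
    """Brown-Conrady radial terms: where an ideal (undistorted, normalised)
    image point actually lands on the sensor for radial coefficients k1, k2."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale
```

Photogrammetric measurements made without correcting such terms inherit the distortion, which is why calibration matters for long-focus infrared systems like the one described above.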