Browse Topic: Driver assistance systems

Items (843)
With the surge in adoption of artificial intelligence (AI) in automotive systems, especially Advanced Driver Assistance Systems (ADAS) and autonomous vehicles (AV), comes an increase in AI-related incidents, several of which have ended in injuries and fatalities. These incidents all share a common deficiency: insufficient coverage of safety, ethical, and/or legal requirements. Responsible AI (RAI) is an approach to developing AI-enabled systems that systematically takes such requirements into account. Existing published international standards like ISO 21448:2022 (Safety of the Intended Functionality) and ISO 26262:2018 (Road Vehicles – Functional Safety) do offer some guidance in this regard but are far from sufficient. Therefore, several technical standards are emerging concurrently to address various RAI-related challenges, including but not limited to ISO 8800 for the integration of AI in automotive systems, ISO/IEC TR 5469:2024 for the integration of AI in functional
Nelson, Jody; Lin, Christopher
Lateral driving features used in Advanced Driver Assistance Systems (ADAS) rely heavily on inputs from the vehicle's surroundings and state information. A critical component of this state information is the path curvature of the ego vehicle, which significantly influences performance. Curvature is often utilized in lateral trajectory generation and serves as a key element of the lateral motion controller. However, obtaining accurate curvature data is challenging due to the scarcity of sensors that directly measure this parameter. Instead, curvature is typically derived from various vehicle signals and additional sensor data, often employing sophisticated estimation techniques. This paper discusses several methods for estimating vehicle curvature using diverse information sources, evaluates their effectiveness, and investigates their impact on lateral feature performance, while analyzing the associated challenges and advantages.
Awathe, Arpit; Varunjikar, Tejas; Jain, Arihant
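As the abstract notes, curvature is usually derived from other vehicle signals rather than measured directly. A minimal sketch of two common estimators under a planar kinematic model follows; the function names, the low-speed guard, and the bicycle-model form are illustrative assumptions, not the paper's specific methods:

```python
import math

def curvature_from_yaw_rate(yaw_rate_rps: float, speed_mps: float) -> float:
    """Kinematic estimate: curvature = yaw rate / speed, in 1/m.
    Undefined at standstill, so guard against very low speeds."""
    if abs(speed_mps) < 0.5:
        return 0.0
    return yaw_rate_rps / speed_mps

def curvature_from_steering(steer_angle_rad: float, wheelbase_m: float) -> float:
    """Bicycle-model estimate: curvature = tan(delta) / L, in 1/m."""
    return math.tan(steer_angle_rad) / wheelbase_m
```

In practice such raw estimates are noisy, which is why papers like this one blend several sources (for example with a Kalman filter) rather than trusting any single signal.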
Adaptive Cruise Control (ACC) is an advanced driver assistance system designed to manage a vehicle's longitudinal motion. Its effectiveness is critically dependent on the precision of the sensors used. While ACC algorithms are optimized for performance, the overall efficacy of the system is significantly influenced by sensor accuracy and variability. Quantifying the impact of these factors on ACC performance poses a challenge. This paper explores the effects of sensor accuracy on ACC performance through a simulation study that replicates the sensor accuracy and variability observed in real-world vehicles. Additionally, the paper examines potential strategies to mitigate performance fluctuations caused by sensor variability.
Awathe, Arpit; Varunjikar, Tejas; Raut, Abhinandan Vijay; Patel, Darsh
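The effect of range-sensor noise on ACC behavior can be illustrated with a toy simulation. The constant-time-gap control law, the gains, and the actuator limits below are illustrative assumptions, not the controller studied in the paper; the point is only that noisier range measurements produce a jitterier acceleration command:

```python
import random

def acc_accel_cmd(gap_m, rel_speed_mps, ego_speed_mps,
                  time_gap_s=1.5, k_gap=0.23, k_rel=0.74):
    """Constant-time-gap ACC law: close the gap error and
    match the lead vehicle's speed."""
    desired_gap = time_gap_s * ego_speed_mps
    return k_gap * (gap_m - desired_gap) + k_rel * rel_speed_mps

def simulate(noise_std_m, steps=500, dt=0.1, seed=0):
    """Run a car-following episode with Gaussian range noise;
    return the variance of the commanded acceleration."""
    rng = random.Random(seed)
    ego_v, lead_v, gap = 25.0, 22.0, 40.0
    accels = []
    for _ in range(steps):
        measured_gap = gap + rng.gauss(0.0, noise_std_m)  # noisy sensor
        a = acc_accel_cmd(measured_gap, lead_v - ego_v, ego_v)
        a = max(-3.0, min(2.0, a))                         # actuator limits
        ego_v = max(0.0, ego_v + a * dt)
        gap += (lead_v - ego_v) * dt
        accels.append(a)
    mean = sum(accels) / len(accels)
    return sum((x - mean) ** 2 for x in accels) / len(accels)
```

Sweeping `noise_std_m` over the accuracy range observed in real vehicles, as the paper does with its own models, quantifies how much comfort and stability degrade with sensor variability.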
Precisely understanding the driving environment and determining the vehicle’s accurate position is crucial for a safe automated maneuver. For vehicle-following systems that offer higher energy efficiency by precisely following a lead vehicle, the relative position of the ego vehicle to the lane center is a key measure for safe automated speed and steering control. This article presents a novel Enhanced Lane Detection technique with centimeter-level accuracy in estimating the vehicle offset from the lane center using the front-facing camera. Leveraging state-of-the-art computer vision models, the Enhanced Lane Detection technique utilizes YOLOv8 image segmentation, trained on a dataset of diverse real-world driving scenarios, to detect the driving lane. To measure the vehicle's lateral offset, our model introduces a novel calibration method using nine reference markers aligned with the vehicle perspective and converts the lane offset from image coordinates to world measurements. This design minimizes
Karuppiah Loganathan, Nirmal Raja; Poovalappil, Aman; Naber, Jeffrey; Robinette, Darrell; Bahramgiri, Mojtaba
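The abstract does not spell out the nine-marker calibration in detail. As a simplified stand-in, a least-squares fit from marker pixel columns to known lateral positions at a fixed image row can convert a detected lane-center pixel into a metric offset; the linearity assumption and the marker layout here are illustrative, not the paper's actual procedure:

```python
def fit_pixel_to_world(marker_px, marker_m):
    """Least-squares linear map from pixel column to lateral metres,
    calibrated from reference markers placed in the camera's view.
    Assumes an approximately linear mapping at one fixed image row."""
    n = len(marker_px)
    mx = sum(marker_px) / n
    my = sum(marker_m) / n
    num = sum((x - mx) * (y - my) for x, y in zip(marker_px, marker_m))
    den = sum((x - mx) ** 2 for x in marker_px)
    slope = num / den
    intercept = my - slope * mx
    return lambda px: slope * px + intercept
```

With a calibrated map in hand, the lateral offset is simply the mapped world position of the detected lane center minus the camera's (known) lateral position on the vehicle. A full implementation would instead fit a homography to handle perspective, which is presumably why the paper uses nine markers rather than two.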
Vehicle ADAS systems comprise two major functions: driving and parking. The most common form of damage to a vehicle that goes unnoticed, with an unidentified cause, is parking damage. A vehicle parked at a certain location may be damaged without the user's knowledge. In this work, we developed a solution that not only pre-warns the driver but also prepares the vehicle beforehand if it suspects damage may occur. This eliminates the latency between damage and information capture, detects small damages such as scratches, classifies the type of damage, and informs the user beforehand. This solution differs from competitors' existing offerings, which inform the user about scratches/damages but are expensive, have high response times, and capture damage information only after the damage has occurred. The solution consists of the following blocks: the Precondition, Sensor Control, and Action Modules. The Precondition Module observes the vehicle
Debnath, Sarnab; Patil, Prasad; Belur Subramanya, Sheshagiri; Govinda, Shiva Prasad
Deliberate modifications to infrastructure can significantly enhance machine vision recognition of road sections designed for Vulnerable Road Users, such as green bike lanes. This study evaluates how green bike lanes, compared to unpainted lanes, enhance machine vision recognition and vulnerable road user safety by keeping vehicles at a safe distance and preventing encroachment into designated bike lanes. Conducted at the American Center for Mobility, this study utilizes a vehicle equipped with a front-facing camera to assess green bike lane recognition capabilities across various environmental conditions, including dry daytime, dry nighttime, rain, fog, and snow. Data collection involved gathering a comprehensive dataset under diverse conditions and generating masks for lane markings to perform comparative analysis for training Advanced Driver Assistance Systems. Quality measurement and statistical analysis are used to evaluate the effectiveness of machine vision recognition using
Ponnuru, Venkata Naga Rithika; Das, Sushanta; Grant, Joseph; Naber, Jeffrey; Bahramgiri, Mojtaba
Personalization is a growing topic in the automotive space, where Artificial Intelligence can be used to deliver a customized experience in features like seat positioning and climate control. Considering that the leading cause of accidents is driving at an inappropriate speed, personalizing the speed limit for a driver can greatly improve vehicle safety. Current speed limits apply to all drivers, irrespective of skill, including special speed limits when there are adverse weather conditions. As these speed limits do not consider an individual’s skill and capabilities, the limit could still be inappropriate for a given driver in that specific driving context. Therefore, we propose a system that can profile the driver’s style to recommend a personalized speed limit, based on both the environmental context and their skill in that environment. The system uses a neural network to classify the driver’s behavior in specific environments by monitoring the vehicle data and the environmental
Perumal, Rathapriya; Chouhan, Madhvendra; Rangarajan, Rishi
Autonomous vehicles utilise sensors, control systems and machine learning to independently navigate and operate through their surroundings, offering improved road safety, traffic management and enhanced mobility. This paper details the development, software architecture and simulation of control algorithms for key functionalities in a model that approaches Level 2 autonomy, utilising MATLAB Simulink and IPG CarMaker. The focus is on four critical areas: Autonomous Emergency Braking (AEB), Adaptive Cruise Control (ACC), Lane Detection (LD) and Traffic Object Detection. Also, the integration of low-level PID controllers for precise steering, braking and throttle actuation, ensures smooth and responsive vehicle behaviour. The hardware architecture is built around the Nvidia Jetson Nano and multiple Arduino Nano microcontrollers, each responsible for controlling specific actuators within the drive-by-wire system, which includes the steering, brake and throttle actuators. Communication
Ann Josy, Tessa; Sadique, Anwar; Thomas, Merlin; Manaf T M, Ashik; Vr, Sreeraj
This SAE Recommended Practice establishes a uniform, powered vehicle test procedure and minimum performance requirement for lane departure warning systems used in highway trucks and buses greater than 4536 kg (10,000 pounds) gross vehicle weight (GVW). Systems similar in function but different in scope and complexity, including lane keeping/lane assist and merge assist, are not included in this document. This document does not apply to trailers, dollies, etc. This document does not intend to exclude any particular system or sensor technology. This document will test the functionality of the lane departure warning system (LDWS) (e.g., ability to detect lane presence and ability to detect an unintended lane departure), its ability to indicate LDWS engagement, its ability to indicate LDWS disengagement, and its ability to determine the point at which the LDWS notifies the human machine interface (HMI) or vehicle control system that a lane departure event is detected. Moreover, this
Truck and Bus Automation Safety Committee
Light detection and ranging (LiDAR) sensors are increasingly applied to automated driving vehicles. Microelectromechanical systems are an established technology for making LiDAR sensors cost-effective and mechanically robust for automotive applications. These sensors scan their environment using a pulsed laser to record a point cloud. The scanning process distorts, in the point cloud, objects that have a relative velocity to the sensor. The consecutive generation and processing of points offers the opportunity to enrich the measured object data from the LiDAR sensors with velocity information extracted with the help of machine learning, without the need for object tracking, turning the sensor into a so-called 4D LiDAR. This allows object detection, object tracking, and sensor data fusion based on LiDAR sensor data to be optimized. Moreover, this affects all overlying levels of autonomous driving functions or advanced driver assistance systems. However, since such
Haas, Lukas; Haider, Arsalan; Kastner, Ludwig; Kuba, Matthias; Zeh, Thomas; Jakobi, Martin; Koch, Alexander Walter
Advanced Driver Assistance Systems (ADAS) are technologies that automate, facilitate, and improve the vehicle’s systems. Indeed, these systems directly intervene in braking, acceleration, and other driving operations. Thus, the use of ADAS directly reflects the psychology behind driving a vehicle, which can have an automation level that varies from fully manual (Level 0) to fully autonomous (Level 5). Even though ADAS technologies provide safer driving, it is still a challenge to understand the complexity of human factors that influence and interact with these new technologies. Also, there has been limited exploration of the correlation between physical and cognitive driver reactions and the characteristics of Brazilian roads and traffic. Therefore, the present work sought to establish a preliminary investigation into a method for evaluating the driving response profile under the influence of ADAS technologies, such as Lane Centering and Forward Collision Warning, on
Castro, Gabriel M.; Silva, Rita C.; Miosso, Cristiano J.; Oliveira, Alessandro B. S.
Traditional pedestrian detection methods have poor robustness. Deep learning-based methods have shown high performance in recent years but rely on substantial computational resources. Developing a lightweight, deep learning-based pedestrian detection algorithm is essential for applying such algorithms in resource-limited scenarios, such as driverless and advanced driver assistance systems. In this article, an improved model based on YOLOv3, called “YOLOPD” (You Only Look Once—Pedestrian Detection), is proposed. It is obtained by constructing a self-attention module, introducing a CIoU (Complete Intersection over Union) loss function, and using depthwise separable convolutional layers. Experimental results show that on the INRIA (National Institute for Research in Computer Science and Automation), Caltech, and CityPersons pedestrian datasets, the MR (miss rate) of the YOLOPD model is better than that of the original YOLOv3 model, and the number of parameters is reduced by about 1/3
Li, Shanglin; Wang, Qi Feng; Li, Ren Fa; Xiao, Juan
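The CIoU loss mentioned above has a standard published form (IoU penalized by center distance and aspect-ratio mismatch). A plain-Python sketch for axis-aligned boxes in `(x1, y1, x2, y2)` format, written from that general formulation rather than the paper's code:

```python
import math

def ciou_loss(box_a, box_b):
    """Complete-IoU loss between two boxes (x1, y1, x2, y2):
    1 - IoU + (centre distance)^2 / (enclosing diagonal)^2 + alpha * v,
    where v measures aspect-ratio inconsistency."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection and union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union if union > 0 else 0.0
    # squared centre distance over squared enclosing-box diagonal
    rho2 = ((ax1 + ax2) - (bx1 + bx2)) ** 2 / 4 + ((ay1 + ay2) - (by1 + by2)) ** 2 / 4
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw * cw + ch * ch
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((bx2 - bx1) / (by2 - by1))
                              - math.atan((ax2 - ax1) / (ay2 - ay1))) ** 2
    alpha = v / ((1 - iou) + v) if (1 - iou) + v > 0 else 0.0
    return 1 - iou + rho2 / c2 + alpha * v
```

Unlike plain IoU loss, CIoU still produces a useful gradient when predicted and ground-truth boxes do not overlap, which is one reason detectors like the one described here adopt it.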
The rapid evolution of new technologies in the automotive sector is driving the demand for advanced simulation solutions, enabling faster software development cycles. Developers often encounter challenges in managing the vast amounts of data generated during testing. For example, a single Advanced Driver Assistance System (ADAS) test vehicle can produce several terabytes of data daily. Efficiently handling and distributing this data across multiple locations can introduce delays in the development process. Moreover, the large volume of test cases required for simulation and validation further exacerbates these delays. On-premises simulation setups, especially those dependent on High-Performance Computing (HPC) systems, pose several challenges, including limited computational resources, scalability issues, high capital and maintenance costs, resource management inefficiencies, and compatibility problems between GPU drivers and servers, all of which can impact both performance and costs
Ramapuram, Vinay Goud; Dhar, Jayshri; Munaiahgari, Mallikarjuna Reddy
In an era where automotive technology is rapidly advancing towards autonomy and connectivity, the significance of Ethernet in ensuring automotive cybersecurity cannot be overstated. As vehicles increasingly rely on high-speed communication networks like Ethernet, the seamless exchange of information between various vehicle components becomes paramount. This paper introduces a pioneering approach to fortifying automotive security through the development of an Ethernet-Based Intrusion Detection System (IDS) tailored for zonal architecture. Ethernet serves as the backbone for critical automotive applications such as advanced driver-assistance systems (ADAS), infotainment systems, and vehicle-to-everything (V2X) communication, necessitating high-bandwidth communication channels to support real-time data transmission. Additionally, the transition from traditional domain-based architectures to zonal architectures underscores Ethernet's role in facilitating efficient communication between
Appajosyula, Kalyan Sai Vital Vamsi
The off-highway industry is witnessing vast growth in integrating new technologies such as advanced driver assistance systems (ADAS/ADS) and connectivity into vehicles. This is primarily due to the need to provide a safe operational domain for operators and other people. Maintaining full perception of the vehicle’s surroundings can be challenging due to the unstructured nature of the field of operation. This research proposes a novel collective perception system that utilizes a C-V2X Roadside Unit (RSU)-based object detection system as well as an onboard perception system. The vehicle uses the input from both systems to maneuver the operational field safely. This article also explored implementing a software-defined vehicle (SDV) architecture on an off-highway vehicle aiming to consolidate the ADAS system hardware and enable over-the-air (OTA) software update capability. Test results showed that FEV’s collective perception system was able to provide the necessary nearby and non-line
Feiguel, Matthieu; Obando, David; Alzubi, Hamzeh; AlRousan, Qusay; Tasky, Thomas
Exactly when sensor fusion occurs in ADAS operations, late or early, impacts the entire system. Governments have been studying Advanced Driver Assistance Systems (ADAS) since at least the late 1980s. Europe's Generic Intelligent Driver Support initiative ran from 1989 to 1992 and aimed “to determine the requirements and design standards for a class of intelligent driver support systems which will conform with the information requirements and performance capabilities of the individual drivers.” Automakers have spent the past 30 years rolling out such systems to the buying public. Toyota and Mitsubishi started offering radar-based cruise control to Japanese drivers in the mid-1990s. Mercedes-Benz took the technology global with its Distronic adaptive cruise control in the 1998 S-Class. Cadillac followed that two years later with FLIR-based night vision on the 2000 Deville DTS. And in 2003, Toyota launched an automated parallel parking technology called Intelligent Parking Assist on the
Ramsey, Jonathon
Sensata Technologies' booth at this year's IAA Transportation tradeshow included two of the company's Precor radar sensors. The PreView STA79 is a heavy-duty vehicle side-monitoring system launched in May 2024 and designed to comply with Europe-wide blind spot monitoring legislation introduced in June 2024. The PreView Sentry 79 is a front- and rear-monitoring system. Both systems operate on the 79-GHz band as the nomenclature suggests. PreView STA79 can cover up to three vehicle zones: a configurable center zone, which can monitor the length of the vehicle, and two further zones that can be independently set to align with individual customer needs. The system offers a 180-degree field of view to eliminate blind spots along the vehicle sides and a built-in measurement unit that will increase the alert level when turning toward an object even when the turn indicator is not used. The system also features trailer mitigation to reduce false positive alerts on the trailer when turning. The
Kendall, John
Advances in vehicle sensing and communication technologies are enabling new opportunities for intelligent driver assistance systems that enhance road safety and performance. This paper provides a comprehensive review of recent research on two complementary areas: haptic/tactile interfaces for conveying road terrain and hazard information to drivers, and shared control frameworks that employ assistive automation to supplement driver inputs. Various haptic feedback techniques for generating realistic road feel through steering wheel torque overlays, pedal interventions, and alternative interface modalities are examined. Control assistance approaches integrating environmental perception to provide steering, braking, and collision avoidance support through blended human–machine control are also analyzed. The paper scrutinizes methods for road sensing using cameras, LiDAR, and radar to classify terrain for adapting system response. Evaluation practices across this domain are critically
Shata, Abdelrahman Ali Adel; Naghdy, Fazel; Du, Haiping
While weaponizing automated vehicles (AVs) seems unlikely, cybersecurity breaches may disrupt automated driving systems’ navigation, operation, and safety—especially with the proliferation of vehicle-to-everything (V2X) technologies. The design, maintenance, and management of digital infrastructure, including cloud computing, V2X, and communications, can make the difference in whether AVs can operate and gain consumer and regulator confidence more broadly. Effective cybersecurity standards, physical and digital security practices, and well-thought-out design can provide a layered approach to avoiding and mitigating cyber breaches for advanced driver assistance systems and AVs alike. Addressing cybersecurity may be key to unlocking benefits in safety, reduced emissions, operations, and navigation that rely on external communication with the vehicle. Automated Vehicles and Infrastructure Enablers: Cybersecurity focuses on considerations regarding cybersecurity and AVs from the
Coyner, Kelley; Bittner, Jason
You've got regulations, cost and personal preferences all getting in the way of the next generation of automated vehicles. Oh, and those pesky legal issues about who's at fault should something happen. Under all these big issues lie the many small sensors that today's AVs and ADAS packages require. This big/small world is one topic we're investigating in this issue. I won't pretend I know exactly which combination of cameras and radar and lidar sensors works best for a given AV, or whether thermal cameras and new point cloud technologies should be part of the mix. But the world is clearly ready to spend a lot of money figuring these problems out.
Blanco, Sebastian
To round out this issue's cover story, we spoke with Clement Nouvel, Valeo's chief technical officer for lidar, about Valeo's background in ADAS and what's coming next. Nouvel leads over 300 lidar engineers and the company's third-generation Scala 3 lidar is used on production vehicles from European and Asian automakers. The Scala 3 sensor system scans the area around a vehicle 25 times per second, can detect objects more than 200 meters (656 ft) away with a wide field of vision and operates at speeds of up to 130 km/h (81 mph) on the highway. In 2023, Valeo secured two contracts for Scala 3, one with an Asian manufacturer and the other with a “leading American robotaxi company,” Valeo said in its most recent annual report. Valeo has now received over 1 billion euros (just under $1.1 billion) in Scala 3 orders. Also in 2023, Valeo and Qualcomm agreed to jointly supply connected displays, clusters, driving assistance technologies and, importantly, sensor technology for two- and three
Dinkel, John
iMotions employs neuroscience and AI-powered analysis tools to enhance the tracking, assessment and design of human-machine interfaces inside vehicles. The advancement of vehicles with enhanced safety and infotainment features has made evaluating human-machine interfaces (HMI) in modern commercial and industrial vehicles crucial. Drivers face a steep learning curve due to the complexities of these new technologies. Additionally, the interaction with advanced driver-assistance systems (ADAS) increases concerns about cognitive impact and driver distraction in both passenger and commercial vehicles. As vehicles incorporate more automation, many clients are turning to biosensor technology to monitor drivers' attention and the effects of various systems and interfaces. Utilizing neuroscientific principles and AI, data from eye-tracking, facial expressions and heart rate are informing more effective system and interface design strategies. This approach ensures that automation advancements
Nguyen, Nam
North America's first electric, fully integrated custom cab and chassis refuse collection vehicle - slated for initial customer deliveries in mid-2024 - is equipped with a standard advanced driver-assistance system (ADAS). “A typical garbage truck uses commercial off-the-shelf active safety technologies, but the electrified McNeilus Volterra ZSL was purpose-built with active safety technologies to serve our refuse collection customer,” said Brendan Chan, chief engineer for autonomy and active safety at Oshkosh Corporation, McNeilus' parent company. “We wanted to make the safest and best refuse collection truck out there. And by using cloud-based simulation, we could accelerate the development of ADAS and other technologies,” Chan said in an interview with Truck & Off-Highway Engineering during the 2024 dSPACE User Conference in Plymouth, Michigan.
Buchholz, Kami
ADAS (Advanced Driver Assistance Systems) is a growing technology in the automotive industry, intended to provide safety and comfort to passengers with the help of a variety of sensors such as radar, camera, and LIDAR. Although ADAS has improved passenger safety compared to conventional non-ADAS vehicles, there are still grey areas for safety enhancement and easier driver assistance. BSW (Blind Spot Warning) and LCA (Lane Change Assist) are ADAS functions that assist the driver in lane changing. BSW alerts the driver about vehicles in the blind zone of adjacent lanes, and LCA alerts the driver about vehicles approaching at high velocity in adjacent lanes. In current ADAS systems, BSW and LCA alerts are given as optical and acoustic warnings placed in the vehicle's side mirrors. During a lane change, the driver must look at the side mirrors to make a decision. This introduces a reaction time, since the driver must divert attention from the windshield to the side
R, Manjunath; Saddaladinne, Jagadeesh Babu; D, Gopinath
Traditional autonomous vehicle perception subsystems that use onboard sensors have the drawbacks of high computational load and data duplication. Infrastructure-based sensors, which can provide high quality information without the computational burden and data duplication, are an alternative to traditional autonomous vehicle perception subsystems. However, these technologies are still in the early stages of development and have not been extensively evaluated for lane detection system performance. Therefore, there is a lack of quantitative data on their performance relative to traditional perception methods, especially during hazardous scenarios, such as lane line occlusion, sensor failure, and environmental obstructions. We address this need by evaluating the influence of hazards on the resilience of three different lane detection methods in simulation: (1) traditional camera detection using a U-Net algorithm, (2) radar detections using infrastructure-based radar retro-reflectors (RRs
Patil, Pritesh; Fanas Rojas, Johan; Kadav, Parth; Sharma, Sachin; Masterson, Alexandra; Wang, Ross; Ekti, Ali; DaHan, Liao; Brown, Nicolas; Asher, Zachary
While various Advanced Driver Assistance System (ADAS) features have become more prevalent in passenger vehicles, their ability to potentially avoid or mitigate vehicle crashes has limitations. Due to current technological limitations, forward collision mitigation technologies such as Forward Collision Warning (FCW) and Automated Emergency Braking (AEB) lack the ability to consistently perform in many unique and challenging scenarios. These limitations are often outlined in driver manuals for ADAS equipped vehicles. One such scenario is the case of a stationary lead vehicle at the side of the road. This is generally considered to be a challenging scenario for FCW and AEB to address because it can often be difficult for the system to discern this threat accurately and consistently from non-threatening roadway infrastructure without unnecessary or nuisance system activations. This is made more difficult when the stationary lead vehicle is only partially in the driving lane and not
Scally, Sean; Paradiso, Marc; Koszegi, Giacomo; Easter, Casey; Kuykendal, Michelle; Alexander, Ross
The current approach to developing new Advanced Driver Assistance System (ADAS) and Connected and Automated Driving (CAD) functions involves a significant amount of public road testing, which is inefficient due to the number of miles that must be driven for rare and extreme events to take place, and therefore very costly, as well as unsafe, since the rest of the road users become involuntary test subjects. A new method for safe, efficient, and repeatable development, demonstration, and evaluation of ADAS and CAD functions, called Vehicle-in-Virtual-Environment (VVE), was recently introduced as a solution to this problem. The vehicle is operated in a large, empty, and flat area during VVE while its localization and perception sensor data are fed from the virtual environment, with other traffic and rare and extreme events generated as needed. The virtual environment can be easily configured and modified to construct different testing scenarios on
Cao, Xincheng; Chen, Haochong; Gelbal, Sukru Yaren; Aksun Guvenc, Bilin; Guvenc, Levent
This paper has been withdrawn by the publisher because of non-attendance and not presenting at WCX 2024.
Amin, Mohammad Has
The rise of Software-Defined Vehicles (SDV) has rapidly advanced the development of Advanced Driver Assistance Systems (ADAS), Autonomous Vehicle (AV), and Battery Electric Vehicle (BEV) technology. While AVs need power to compute data from perception to controls, BEVs need the efficiency to optimize their electric driving range and stand out compared to traditional Internal Combustion Engine (ICE) vehicles. AVs possess certain shortcomings in the current world, but SAE Level 2+ (L2+) Automated Vehicles are the focus of all major Original Equipment Manufacturers (OEMs). The most common form of an SDV today is the amalgamation of AV and BEV technology on the same platform which is prominently available in most OEM’s lineups. As the compute and sensing architectures for L2+ automated vehicles lean towards a computationally expensive centralized design, it may hamper the most important purchasing factor of a BEV, the electric driving range. This research asserts that the development of
Kothari, Aadi; Talty, Timothy; Huxtable, Scott; Zeng, Haibo
Object detection using a camera sensor is essential for developing Advanced Driver Assistance Systems (ADAS) and Autonomous Driving (AD) vehicles. Due to the recent advancement in deep Convolution Neural Networks (CNNs), object detection based on CNNs has achieved state-of-the-art performance during daytime. However, using an RGB camera alone in object detection under poor lighting conditions, such as sun flare, snow, and foggy nights, causes the system's performance to drop and increases the likelihood of a crash. In addition, the object detection system based on an RGB camera performs poorly during nighttime because the camera sensors are susceptible to lighting conditions. This paper explores different pedestrian detection systems at low-lighting conditions and proposes a sensor-fused pedestrian detection system under low-lighting conditions, including nighttime. The proposed system fuses RGB and infrared (IR) thermal camera information. IR thermal cameras are used as they are
Thota, Bharath Kumar; Somashekar, Karthik; Park, Jungme
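Decision-level (late) fusion is one simple way to combine RGB and IR detection streams. The IoU-based matching rule and noisy-OR score combination below are illustrative assumptions, not necessarily the paper's fusion scheme:

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def fuse_detections(rgb_dets, ir_dets, iou_thr=0.5):
    """Decision-level fusion of (box, score) detections.
    Matched pairs keep the higher-confidence box with a combined score;
    unmatched IR detections survive, which helps at night."""
    fused, used_ir = [], set()
    for box_r, s_r in rgb_dets:
        best_j, best_iou = None, iou_thr
        for j, (box_i, _) in enumerate(ir_dets):
            if j not in used_ir and iou(box_r, box_i) >= best_iou:
                best_j, best_iou = j, iou(box_r, box_i)
        if best_j is not None:
            used_ir.add(best_j)
            box_i, s_i = ir_dets[best_j]
            box = box_r if s_r >= s_i else box_i
            fused.append((box, 1 - (1 - s_r) * (1 - s_i)))  # noisy-OR combine
        else:
            fused.append((box_r, s_r))
    fused += [(b, s) for j, (b, s) in enumerate(ir_dets) if j not in used_ir]
    return fused
```

Keeping unmatched IR detections is the key design choice here: at night the RGB stream may miss a pedestrian entirely, and the thermal channel alone can still raise the alarm.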
With the development of vehicles equipped with automated driving systems, the need for systematic evaluation of AV performance has grown increasingly imperative. According to ISO 34502, one of the safety test objectives is to learn the minimum performance levels required for diverse scenarios. To address this need, this paper combines two essential methodologies - scenario-based testing procedures and scoring systems - to systematically evaluate the behavioral competence of AVs. In this study, we conduct comprehensive testing across diverse scenarios within a simulator environment following Mcity AV Driver Licensing Test procedure. These scenarios span several common real-world driving situations, including BV Cut-in, BV Lane Departure into VUT Path from Opposite Direction, BV Left Turn Across VUT Path, and BV Right Turn into VUT Path scenarios. Furthermore, the test cases are divided into different risk levels, allowing the AV to be tested in a variety of risk-level situations, with a
Wang, Tinghan; Rahimi, Shujauddin; Swaminathan, Sunder; Zaidi, Mohsin; Wishart, Jeffrey; Liu, Henry
Robustness testing of Advanced Driver Assistance Systems (ADAS) features is a crucial step in ensuring the safety and reliability of these systems. ADAS features include technologies like adaptive cruise control, lateral and longitudinal controls, automatic emergency braking, and more. These systems rely on various sensors, cameras, radar, lidar, and software algorithms to function effectively. Robustness testing aims to identify potential vulnerabilities and weaknesses in these systems under different conditions, ensuring they can handle unexpected scenarios and maintain their performance. Mileage accumulation is one of the validation methods for achieving robustness. It involves subjecting the systems to a wide variety of real-world driving conditions and driving scenarios to ensure the reliability, safety, and effectiveness of the ADAS features. Following ISO 21448 (Safety of the intended functionality-SOTIF), known hazardous scenarios can be tested and validated through robustness
Almasri, Hossam; Fan, Hsing-Hua; Mudunuri, Venkateswara Raju
The advent of Vehicle-to-Everything (V2X) communication has revolutionized the automotive industry, particularly with the rise of Advanced Driver Assistance Systems (ADAS). V2X enables vehicles to communicate not only with each other (V2V) but also with infrastructure (V2I) and pedestrians (V2P), enhancing road safety and efficiency. ADAS, which includes features like adaptive cruise control and automatic intersection navigation, relies on V2X data exchange to make real-time decisions and improve driver assistance capabilities. Over the years, the progress of V2X technology has been marked by standardization efforts, increased deployment, and a growing ecosystem of connected vehicles, paving the way for safer and more efficient automated navigation. The EcoCAR Mobility Challenge was a 4-year student competition among 12 universities across the United States and Canada sponsored by the U.S. Department of Energy, MathWorks, and General Motors, where each team received a 2019 Chevrolet
Chowduri, Suhrit; Midlam-Mohler, Shawn; Singh, Karun Prateek
When investigating traffic accidents, it is important to determine the causes. To do so, it is necessary to reconstruct the accident situation accurately and in detail using objective and diverse information. We propose a method for reconstructing the accident situation (“reconstruction method”) which consists of rebuilding the situation immediately before the collision (“pre-crash situation”) using data collected during that time by an event data recorder (EDR) and a dashboard camera (DBC) onboard one or both of the vehicles involved. First, the vehicle’s traveling trajectory was calculated by integrating the vehicle speed and yaw rate recorded by the EDR, with each point along the trajectory linked to the EDR data. After being combined with the DBC’s video data, the trajectory was projected onto the road surface around the accident site, which allowed us not only to display the vehicle’s traveling trajectory on a single road map, but also to provide, on each point along the
Matsumura, Hideki; Sugiyama, Motoki; IWATA, Takekazu
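The trajectory integration step described in this abstract can be sketched as simple planar dead reckoning over EDR speed and yaw-rate samples. The function and variable names below are hypothetical, and a fixed sample interval `dt` is assumed; the paper's actual processing is more elaborate:

```python
import math

def integrate_trajectory(speeds_mps, yaw_rates_radps, dt):
    """Dead-reckon a planar trajectory from EDR speed and yaw-rate samples."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y, heading)]
    for v, r in zip(speeds_mps, yaw_rates_radps):
        heading += r * dt                 # integrate yaw rate -> heading
        x += v * math.cos(heading) * dt   # integrate speed along heading
        y += v * math.sin(heading) * dt
        points.append((x, y, heading))
    return points

# Straight travel at 10 m/s for 1 s (10 samples at 0.1 s) ends ~10 m along x
path = integrate_trajectory([10.0] * 10, [0.0] * 10, 0.1)
```

Each returned point carries the pose at one EDR sample, which is what allows the trajectory to be annotated with the recorder data point by point.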
Lane detection plays a critical role in autonomous vehicles for safe and reliable navigation. Lane detection is traditionally accomplished using a camera sensor and computer vision processing. The downside of this traditional technique is that it can be computationally intensive when high-quality images at a fast frame rate are used, and it suffers reliability issues from occlusion caused by glare, shadows, active road construction, and more. This study addresses these issues by exploring alternative methods for lane detection in specific scenarios arising from construction-induced lane shifts and sun glare. Specifically, a camera-based lane detection method using a U-Net, a convolutional network for image segmentation, is compared with a radar-based approach using a new type of sensor previously unused in the autonomous vehicle space: radar retro-reflectors. This evaluation is performed using ground truth data, obtained by measuring the lane positions and transforming them into pixel
Brown, Nicolas Eric; Patil, Pritesh; Sharma, Sachin; Kadav, Parth; Fanas Rojas, Johan; Hong, Guan Yue; DaHan, Liao; Ekti, Ali; Wang, Ross; Meyer, Rick; Asher, Zachary
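Evaluation against pixel-level ground truth of the kind described above is commonly reported as intersection-over-union (IoU) between predicted and reference lane masks. A minimal sketch, assuming binary masks; the metric choice is illustrative, not necessarily the one used in the paper:

```python
import numpy as np

def lane_iou(pred_mask, gt_mask):
    """Intersection-over-union between predicted and ground-truth lane masks.

    Masks are binary arrays of identical shape; 1/True marks lane pixels.
    """
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Two empty masks agree perfectly by convention
    return inter / union if union else 1.0
```

The same metric applies to both the U-Net output and a rasterized radar-derived lane estimate, which is what makes a camera-vs-radar comparison against common ground truth possible.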
Driver steering feature clustering aims to understand driver behavior and the decision-making process through the analysis of driver steering data. It seeks to comprehend various steering characteristics exhibited by drivers, providing valuable insights into road safety, driver assistance systems, and traffic management. The primary objective of this study is to thoroughly explore the practical applications of various clustering algorithms in processing driver steering data and to compare their performance and applicability. In this paper, principal component analysis was employed to reduce the dimension of the selected steering feature parameters. Subsequently, K-means, fuzzy C-means, the density-based spatial clustering algorithm, and other algorithms were used for clustering analysis, and finally, the Calinski-Harabasz index was employed to evaluate the clustering results. Furthermore, the driver steering features were categorized into lateral and longitudinal categories. Different
Chen, Chen; Zong, Changfu
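The pipeline outlined above (PCA dimension reduction, clustering, Calinski-Harabasz scoring) can be sketched with scikit-learn on synthetic stand-in data. The feature values and cluster count here are illustrative assumptions, not the paper's steering dataset:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

rng = np.random.default_rng(0)
# Synthetic stand-in for steering feature parameters (real features would be
# quantities such as steering-angle statistics or reversal rates).
features = np.vstack([rng.normal(0, 1, (50, 6)),
                      rng.normal(5, 1, (50, 6))])

# Reduce the feature dimension before clustering
reduced = PCA(n_components=2).fit_transform(features)

# Cluster in the reduced space; K-means shown, the paper also compares
# fuzzy C-means and density-based methods
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reduced)

# Higher Calinski-Harabasz score means better-separated, more compact clusters
score = calinski_harabasz_score(reduced, labels)
```

Running the same scoring across several algorithms and cluster counts is what allows the clustering results to be compared on a common internal-validity scale.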
Plug-in hybrid electric vehicles (PHEVs) have recently become important for complying with future CO2 and pollutant emission limits. However, the performance of these vehicles is closely tied to the energy management strategy (EMS) used to minimize fuel consumption and maximize electric driving range. While conventional EMS concepts are developed to operate across a wide range of scenarios, this approach can compromise the fuel-consumption benefit because route and traffic information is omitted. With advancements in the availability of real-time traffic, navigation, and driving-route information, the EMS can be further optimized to extract the full potential of a PHEV. In this context, this paper presents the application of predictive energy management (PEM) functionalities, combined with information such as live traffic data, to reduce fuel consumption for a P1/P3-configuration PHEV. The proposed PEM uses on-board navigation and E-horizon data based on
Liu, Xuewu; Srivastava, Vivek; Pan, Wang; Schaub, Joschka; Sun, Jianqiang; Tian, Xi; Deng, Yunfei; Xiong, Jie; Wu, Xiaojun; Muthyala, Paul; Xu, Xiangyang
Adaptive cruise control is one of the key technologies in advanced driver assistance systems. However, improving the performance of autonomous driving systems requires addressing various challenges, such as maintaining the dynamic stability of the vehicle during cruising, accurately controlling the distance between the ego vehicle and the preceding vehicle, and resisting the effects of nonlinear changes in longitudinal speed on system performance. To overcome these challenges, an adaptive cruise control strategy based on the Takagi-Sugeno fuzzy model, with a focus on ensuring vehicle lateral stability, is proposed. First, a collaborative control model of adaptive cruise and lateral stability is established with desired acceleration and additional yaw moment as control inputs. Then, considering the effect of nonlinear changes in longitudinal speed on the performance of the vehicle system, the input penalty factor of the adaptive cruise control system is designed as a
Yan, Yang; Xin, Yafei; Zheng, Hongyu
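The paper's Takagi-Sugeno fuzzy controller is not reproduced here, but the spacing objective that adaptive cruise control works toward can be illustrated with a constant-time-headway policy and a simple proportional law. The gains, headway, and function name are illustrative assumptions, not values from the paper:

```python
def acc_desired_accel(gap_m, ego_v, lead_v, time_headway=1.5,
                      standstill=5.0, k_gap=0.23, k_rel=0.74):
    """Proportional ACC law around a constant-time-headway spacing policy.

    Desired gap grows with ego speed; the command corrects both the gap
    error and the relative speed to the preceding vehicle.
    """
    desired_gap = standstill + time_headway * ego_v
    gap_error = gap_m - desired_gap   # > 0: too far, speed up
    rel_speed = lead_v - ego_v        # > 0: lead pulling away
    return k_gap * gap_error + k_rel * rel_speed

# At exactly the desired gap with matched speeds, no acceleration is requested
a_steady = acc_desired_accel(gap_m=5.0 + 1.5 * 20.0, ego_v=20.0, lead_v=20.0)
```

The nonlinear speed dependence the paper addresses enters through `desired_gap`; a T-S fuzzy design effectively blends several such local linear laws across the operating range.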
Kognic's advanced interpretation of sensor data helps artificial intelligence and machine learning recognize the human thing to do. In December 2023, Kognic, the Gothenburg, Sweden-based developer of a software platform to analyze and optimize the massively complex datasets behind ADAS and automated-driving systems, was in Dearborn, Michigan to accept the Tech.AD USA award for Sensor Perception solution of the year. The company doesn't make sensors, but one might say it makes sense of the data that comes from sensors. Kognic, established in 2018, is well-known in the ADAS/AV software sector for its work to help developers extract better performance from and enhance the robustness of safety-critical “ground-truth” information gleaned from petabytes-upon-petabytes of sensor-fusion datasets. Kognic CEO and co-founder Daniel Langkilde espoused a path for improving artificial intelligence-reliant systems based on “programming with data instead of programming with code.”
Visnic, Bill
India is one of the largest markets for the automobile sector, and considering the trends in road fatalities and injuries related to road accidents, it is pertinent to continuously review the safety regulations and introduce standards that promise enhanced safety. With this objective, various Advanced Driver Assistance System (ADAS) regulations are proposed to be introduced in the Indian market. ADAS features such as Anti-lock Braking Systems, Advanced Emergency Braking Systems, Lane Departure Warning Systems, Auto Lane Correction Systems, and Driver Drowsiness Monitoring Systems assist the driver during driving. They tend to reduce road accidents and related fatalities through their advanced, artificial-intelligence-driven programs. This paper shares insight into the past, recent trends, and upcoming developments in the regulatory domain with respect to safety.
Nayak, Pratik; Rawal, Vishal; Patil, Kamalesh; Tandon, Vikram; Badusha, Akbar
This paper discusses the quantification of alertness for a vision-based Driver Drowsiness and Alertness Warning System (DDAWS). The quantification of alertness, as per the Karolinska Sleepiness Scale (KSS), takes as its basic input the recognition of the driver’s facial features and behaviour in a standard manner. Although quantification of alertness is inconclusive with respect to the true value, the paper emphasizes a systematic validation process for the system, covering various scenarios in order to evaluate the system’s functionality as close to reality as possible. The methodology depends on the definition of threshold values for blinks and head pose. The facial features are defined by the number of blinks, classified into heavy and light blinks, and by head pose in the (x, y, z) directions. The Human Machine Interface (HMI) warnings are provided in the form of visual and acoustic signals. The frequency, amplitude, and illumination of HMI alerts are specified. The protocols and trigger functions are defined, and KSS
Balasubrahmanyan, Chappagadda; Akbar Badusha, A; Viswanatham, Satish
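The heavy/light blink classification described above can be sketched as run-length thresholding of per-frame eye-closed flags. The duration threshold, frame rate, and function name below are illustrative assumptions, not the DDAWS calibration:

```python
def classify_blinks(eye_closed_flags, dt=1 / 30, heavy_threshold_s=0.4):
    """Count light vs. heavy blinks from per-frame eye-closed flags.

    A blink is a maximal run of consecutive closed frames; runs lasting at
    least the threshold duration count as heavy, shorter ones as light.
    """
    light = heavy = 0
    run = 0
    for closed in list(eye_closed_flags) + [False]:  # sentinel ends last run
        if closed:
            run += 1
        elif run:
            if run * dt >= heavy_threshold_s:
                heavy += 1
            else:
                light += 1
            run = 0
    return light, heavy

# One 3-frame (0.1 s) light blink and one 15-frame (0.5 s) heavy blink
flags = [False] * 2 + [True] * 3 + [False] * 5 + [True] * 15 + [False] * 2
counts = classify_blinks(flags)
```

Blink counts per time window, combined with head-pose thresholds, are the kind of features a KSS-style alertness estimate would be mapped from.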
As the automotive industry develops various ADAS solutions, RADAR is playing an important role, and many parameters concerning RADAR detections must be taken into account. Unsupervised clustering methods are used for RADAR applications, with DBSCAN being the most widely used. Standard DBSCAN, however, is sensitive to its hyperparameters, epsilon (the radius within which each data point checks the density) and minimum points (the minimum number of data points required within that radius for a core point), which therefore require calibration. In this paper, different methods for choosing the hyperparameters of DBSCAN are compared and verified against different clustering evaluation criteria, and a novel method for selecting the hyperparameters of the DBSCAN algorithm is presented. To test the given algorithm, ground truth data were collected, and the results were verified with MATLAB-Simulink.
Payghan, Vaibhav Santosh; Prajapati, Miit; Chauhan, Abhisha
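One widely used baseline for calibrating DBSCAN's epsilon, given a choice of minimum points, is the k-distance heuristic: look at each point's distance to its (min_pts-1)-th neighbour and pick epsilon near the knee of that curve. The scikit-learn sketch below, on synthetic 2D detections, is a simplified stand-in (using a fixed percentile instead of knee detection), not the paper's novel selection method:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import DBSCAN

def kdistance_epsilon(points, min_pts):
    """k-distance heuristic for DBSCAN's epsilon.

    kneighbors() counts each point as its own nearest neighbour, so column
    min_pts-1 holds the distance to the (min_pts-1)-th other point; a high
    percentile of it approximates the knee of the sorted k-distance curve.
    """
    nn = NearestNeighbors(n_neighbors=min_pts).fit(points)
    dists, _ = nn.kneighbors(points)
    return float(np.percentile(dists[:, -1], 90))

rng = np.random.default_rng(1)
detections = np.vstack([rng.normal(0.0, 0.2, (40, 2)),   # one target's returns
                        rng.normal(4.0, 0.2, (40, 2))])  # a second target
min_pts = 4
eps = kdistance_epsilon(detections, min_pts)
labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(detections)
```

With well-separated targets this recovers one cluster per target; the point of the paper's comparison is precisely that such defaults degrade on harder RADAR scenes and need a principled replacement.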
Technology in the automotive industry has been evolving rapidly in recent times. With the development of new technologies, the challenges are also ever-increasing from an Electromagnetic Interference and Compatibility (EMI/EMC) perspective. Many of the latest technologies in Advanced Driver Assistance Systems (ADAS), including Rear Drive Assist, Blind Spot Detection (BSD), and Lane Change Assist (LCA), as well as other features like the Anti-lock Braking System (ABS) and Emergency Brake Assist (EBA), rely heavily on different types of sensors and their detection circuitry. In addition, many other internal functions in the Engine Control Unit (ECU) also depend on such sensors’ functionality. It therefore becomes imperative to study the potential impact of higher field emissions on the immunity behaviour of these sensors. In this paper, we study the immunity behaviour of an automotive capacitive touch-sensing integrated circuit (IC) and its impact on the application of the
Boya, Vinay Kumar; Adhyapak, Anoop; Komma, Vineetha; Sahoo, Manoranjan
With revolutionary advancements in modern transportation offering advanced connectivity, automation, and data-driven decision-making, intelligent transportation systems (ITS) are at high risk of exposure to cyber threats. The development of modern transportation infrastructure and connected vehicle technology, and their dependency on the cloud, aimed at enhancing the safety, efficiency, reliability, and sustainability of ITS, brings many more opportunities for black hats that the system must be protected against. This paper explores the landscape of cyber threats targeting ITS, focusing on their potential impacts, vulnerabilities, and mitigation strategies. Cyber-attacks on ITS are not limited to unauthorized access, malware and ransomware attacks, data breaches, and denial of service, but extend to physical infrastructure attacks. These attacks may disrupt critical transportation infrastructure, compromise user safety, and cause economic losses affecting the
Dewangan, Kheelesh Kumar; Panda, Vibek; Ojha, Sunil; Shahapure, Anjali; Jahagirdar, Shweta Rajesh
Autonomous Emergency Braking (AEB) systems play a critical role in ensuring vehicle safety by detecting potential rear-end collisions and automatically applying the brakes to mitigate or prevent accidents. This paper focuses on establishing a framework for the Verification & Validation (V&V) of Advanced Driver Assistance Systems (ADAS) by testing and verifying the functionality of a RADAR-based AEB ECU. A comprehensive V&V approach was adopted, incorporating both virtual and physical testing. For virtual testing, a closed-loop Hardware-in-the-Loop (HIL) simulation technique was employed. The AEB ECU was interfaced with the real-time hardware via CAN. Data for the relevant target, such as target position and velocity, were calculated using an ideal RADAR sensor model running on the real-time hardware. The methodology involved conducting a series of test scenarios, including various driving speeds, obstacle types, and braking distances. Automation was leveraged to perform automated testing and
Bhagat, Ajinkya; Kale, Jyoti Ganesh; Pachhapurkar, Ninad; Karle, Manish; R, Manish; Karle, Ujjwala
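A minimal sketch of the kind of trigger logic exercised in such AEB test scenarios, assuming a simple time-to-collision (TTC) criterion; the threshold and function name are illustrative, not the tested ECU's calibration:

```python
def aeb_brake_request(range_m, range_rate_mps, ttc_threshold_s=1.5):
    """Request braking when time-to-collision drops below a threshold.

    range_rate_mps < 0 means the gap to the target is closing; a
    non-closing target is never on a collision course.
    """
    if range_rate_mps >= 0:
        return False
    ttc = range_m / -range_rate_mps
    return ttc < ttc_threshold_s

# Closing at 10 m/s with a 12 m gap -> TTC = 1.2 s, below threshold
```

In the HIL setup described, the ideal sensor model supplies `range_m` and `range_rate_mps` over CAN for each scripted scenario, and automated test cases assert whether the ECU's brake request matches the expectation at each speed and distance.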