Browse Topic: Driver assistance systems
With the surge in adoption of artificial intelligence (AI) in automotive systems, especially Advanced Driver Assistance Systems (ADAS) and autonomous vehicles (AV), comes an increase in AI-related incidents, several of which have ended in injuries and fatalities. These incidents all share a common deficiency: insufficient coverage of safety, ethical, and/or legal requirements. Responsible AI (RAI) is an approach to developing AI-enabled systems that systematically takes such requirements into account. Existing published international standards like ISO 21448:2022 (Safety of the Intended Functionality) and ISO 26262:2018 (Road Vehicles – Functional Safety) offer some guidance in this regard but are far from sufficient. Therefore, several technical standards are emerging concurrently to address various RAI-related challenges, including but not limited to ISO 8800 for the integration of AI in automotive systems and ISO/IEC TR 5469:2024 for the integration of AI in functional safety applications.
In the domain of advanced driver assistance systems and autonomous vehicles, precise perception and interpretation of the vehicle's environment are not merely requirements; they are the very foundation upon which every aspect of functionality and safety is constructed. One prevalent method of representing the environment is the occupancy grid map. This map segments the environment into distinct grid cells, each of which is evaluated to determine whether it is occupied or free, under the assumption that each grid cell is independent of the others. The underlying mathematical structure of this approach is the binary Bayes filter (BBF). The BBF integrates sensor data from various sources and can incorporate measurements taken at different times. Because the occupancy grid map does not rely on the identification of individual objects, it can depict obstacles of any shape; this flexibility is a key advantage of the approach. Traditional occupancy grid
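As context for the BBF mentioned above, here is a minimal sketch of the standard log-odds update for independent grid cells; the inverse sensor model probabilities (0.7 and 0.3) are illustrative assumptions, not values from the paper.

```python
import math

# Log-odds binary Bayes filter for a single occupancy grid cell.
# Cells are updated independently, matching the independence
# assumption described above.

L0 = 0.0  # prior log-odds (p = 0.5, i.e., unknown)

def log_odds(p):
    return math.log(p / (1.0 - p))

# Hypothetical inverse sensor model (illustrative values):
L_OCC = log_odds(0.7)   # measurement says "occupied"
L_FREE = log_odds(0.3)  # measurement says "free"

def update_cell(l_prev, hit):
    """Fuse one measurement into a cell's log-odds estimate."""
    return l_prev + (L_OCC if hit else L_FREE) - L0

def probability(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# Fusing three measurements, possibly from different sensors or times:
l = L0
for hit in (True, True, False):
    l = update_cell(l, hit)
print(f"occupancy probability: {probability(l):.3f}")  # -> 0.700
```

Because the update is a simple sum in log-odds space, measurements from different sensors and times fuse cheaply per cell, which is what makes the grid representation practical at scale.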
The automotive industry's focus on driver safety is driving continual improvements in Advanced Driver Assistance Systems (ADAS) that avoid collisions and provide safety and comfort to drivers. This paper proposes a novel approach to driver health and fatigue monitoring that uses cabin cameras and biometric sensors communicating continuously with vehicle telematics systems to enable real-time monitoring and predictive intervention. Data from the camera and biometric sensors is fed to a machine learning algorithm (LSBoost), which processes it; if the algorithm detects a problem with the driver's condition or behavior, it immediately communicates with the vehicle telematics system and sends information to emergency services. This approach enhances driver safety and reduces accidents caused by health-related driver impairment. The system comprises several sensors, and fusion algorithms are applied between different sensors such as the cabin camera and biometric sensors, all
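A hedged sketch of the fusion-and-alert flow described above, assuming hypothetical feature names, synthetic training data, and an illustrative alert threshold. LSBoost is a least-squares boosting ensemble (a MATLAB method); scikit-learn's gradient boosting with squared-error loss stands in for it here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical fused feature vector per time step:
# [eye_closure_ratio, head_pose_deviation, heart_rate_norm, hrv_norm, spo2_norm]
rng = np.random.default_rng(0)
X_train = rng.random((500, 5))
# Synthetic impairment score standing in for labeled training data:
y_train = X_train @ np.array([0.4, 0.2, 0.1, 0.2, 0.1])

model = GradientBoostingRegressor(loss="squared_error", n_estimators=100)
model.fit(X_train, y_train)

ALERT_THRESHOLD = 0.7  # illustrative; would be calibrated on real data

def notify_telematics(score):
    """Hypothetical hook into the vehicle telematics system."""
    print(f"ALERT: impairment score {score:.2f}, notifying emergency services")

def monitor_step(features):
    """Score one fused camera + biometric sample and alert if needed."""
    score = model.predict(features.reshape(1, -1))[0]
    if score > ALERT_THRESHOLD:
        notify_telematics(score)
    return score

print(f"score: {monitor_step(rng.random(5)):.2f}")
```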
Autonomous vehicles utilise sensors, control systems and machine learning to navigate and operate independently in their surroundings, offering improved road safety, traffic management and enhanced mobility. This paper details the development, software architecture and simulation of control algorithms for key functionalities in a model that approaches Level 2 autonomy, utilising MATLAB Simulink and IPG CarMaker. The focus is on four critical areas: Autonomous Emergency Braking (AEB), Adaptive Cruise Control (ACC), Lane Detection (LD) and Traffic Object Detection. In addition, the integration of low-level PID controllers for precise steering, braking and throttle actuation ensures smooth and responsive vehicle behaviour. The hardware architecture is built around the Nvidia Jetson Nano and multiple Arduino Nano microcontrollers, each responsible for controlling specific actuators within the drive-by-wire system, which includes the steering, brake and throttle actuators. Communication
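For illustration, a minimal discrete-time PID controller of the kind such low-level actuation loops typically use; the gains, setpoint, timestep, and toy plant model are assumptions, not values from the paper.

```python
class PID:
    """Discrete-time PID controller for a single actuation channel."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: regulating speed toward 20 m/s against a toy plant model.
pid = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.02)
speed = 15.0
for _ in range(250):  # 5 seconds at 50 Hz
    throttle = pid.step(setpoint=20.0, measurement=speed)
    speed += 0.02 * throttle  # simplistic first-order response
print(f"speed after 5 s: {speed:.2f} m/s")
```

In a deployment like the one described, one such loop would typically run per actuator (steering, brake, throttle) on the respective microcontroller.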
This SAE Recommended Practice establishes a uniform, powered vehicle test procedure and minimum performance requirement for lane departure warning systems used in highway trucks and buses greater than 4546 kg (10,000 pounds) gross vehicle weight (GVW). Systems similar in function but different in scope and complexity, including lane keeping/lane assist and merge assist, are not included in this document. This document does not apply to trailers, dollies, etc., and does not intend to exclude any particular system or sensor technology. This document tests the functionality of the lane departure warning system (LDWS) (e.g., its ability to detect lane presence and to detect an unintended lane departure), its ability to indicate LDWS engagement and disengagement, and its ability to determine the point at which the LDWS notifies the human machine interface (HMI) or vehicle control system that a lane departure event is detected. Moreover, this
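The point at which a warning should be issued is often characterized by a time-to-line-crossing (TLC) style computation; the following simplified sketch assumes straight-road geometry and an illustrative threshold, and is not taken from the recommended practice itself.

```python
import math

def time_to_line_crossing(lateral_offset_m, lateral_rate_mps):
    """Seconds until the wheel edge reaches the lane line.

    lateral_offset_m: current distance from wheel edge to lane line (> 0)
    lateral_rate_mps: drift rate toward the line (> 0 means approaching)
    """
    if lateral_rate_mps <= 0:
        return math.inf  # not drifting toward the line
    return lateral_offset_m / lateral_rate_mps

TLC_WARN_S = 1.0  # illustrative warning threshold, not from the document

tlc = time_to_line_crossing(lateral_offset_m=0.4, lateral_rate_mps=0.5)
if tlc < TLC_WARN_S:
    print(f"LDWS warning: projected line crossing in {tlc:.2f} s")
```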
The rapid evolution of new technologies in the automotive sector is driving the demand for advanced simulation solutions, enabling faster software development cycles. Developers often encounter challenges in managing the vast amounts of data generated during testing. For example, a single Advanced Driver Assistance System (ADAS) test vehicle can produce several terabytes of data daily. Efficiently handling and distributing this data across multiple locations can introduce delays in the development process. Moreover, the large volume of test cases required for simulation and validation further exacerbates these delays. On-premises simulation setups, especially those dependent on High-Performance Computing (HPC) systems, pose several challenges, including limited computational resources, scalability issues, high capital and maintenance costs, resource management inefficiencies, and compatibility problems between GPU drivers and servers, all of which can impact both performance and costs
The off-highway industry is witnessing vast growth in the integration of new technologies such as advanced driver assistance systems (ADAS/ADS) and connectivity into vehicles. This is primarily driven by the need to provide a safe operational domain for operators and other people. Achieving full perception of the vehicle’s surroundings can be challenging due to the unstructured nature of the field of operation. This research proposes a novel collective perception system that utilizes a C-V2X Roadside Unit (RSU)-based object detection system as well as an onboard perception system. The vehicle uses the input from both systems to maneuver the operational field safely. This article also explores implementing a software-defined vehicle (SDV) architecture on an off-highway vehicle, aiming to consolidate the ADAS system hardware and enable over-the-air (OTA) software update capability. Test results showed that FEV’s collective perception system was able to provide the necessary nearby and non-line-of-sight
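A minimal sketch of the collective perception idea: merging RSU detections with onboard detections and keeping RSU-only objects (e.g., occluded, non-line-of-sight ones). The data layout and the 2 m association gate are illustrative assumptions, not details from the article.

```python
from dataclasses import dataclass
import math

@dataclass
class Detection:
    x: float       # meters, in a shared map frame
    y: float
    source: str    # "rsu" or "onboard"

def fuse(onboard, rsu, gate_m=2.0):
    """Keep all onboard detections and add RSU detections that do not
    match an existing onboard detection within the association gate."""
    fused = list(onboard)
    for r in rsu:
        if all(math.hypot(r.x - o.x, r.y - o.y) > gate_m for o in onboard):
            fused.append(r)  # object only the RSU sees (e.g., occluded)
    return fused

onboard = [Detection(10.0, 2.0, "onboard")]
rsu = [Detection(10.5, 2.2, "rsu"), Detection(40.0, -5.0, "rsu")]
for d in fuse(onboard, rsu):
    print(d)
```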
Exactly when sensor fusion occurs in ADAS operations, early or late, impacts the entire system. Governments have been studying Advanced Driver Assistance Systems (ADAS) since at least the late 1980s. Europe's Generic Intelligent Driver Support initiative ran from 1989 to 1992 and aimed “to determine the requirements and design standards for a class of intelligent driver support systems which will conform with the information requirements and performance capabilities of the individual drivers.” Automakers have spent the past 30 years rolling out such systems to the buying public. Toyota and Mitsubishi started offering radar-based cruise control to Japanese drivers in the mid-1990s. Mercedes-Benz took the technology global with its Distronic adaptive cruise control in the 1998 S-Class. Cadillac followed two years later with FLIR-based night vision on the 2000 DeVille DTS. And in 2003, Toyota launched an automated parallel parking technology called Intelligent Parking Assist on the
Sensata Technologies' booth at this year's IAA Transportation tradeshow included two of the company's PreView radar sensors. The PreView STA79 is a heavy-duty vehicle side-monitoring system launched in May 2024 and designed to comply with Europe-wide blind spot monitoring legislation introduced in June 2024. The PreView Sentry 79 is a front- and rear-monitoring system. Both systems operate on the 79-GHz band, as the nomenclature suggests. PreView STA79 can cover up to three vehicle zones: a configurable center zone, which can monitor the length of the vehicle, and two further zones that can be independently set to align with individual customer needs. The system offers a 180-degree field of view to eliminate blind spots along the vehicle sides and a built-in measurement unit that increases the alert level when turning toward an object even when the turn indicator is not used. The system also features trailer mitigation to reduce false-positive alerts on the trailer when turning. The
With increasing emphasis on sustainable mobility and efficient energy use, advanced driver assistance systems (ADAS) may potentially be utilized to improve vehicles’ energy efficiency by influencing driver behavior. Despite the growing adoption of such systems in passenger vehicles for active safety and driver comfort, systematic studies examining the effects of ADAS on human driving, in the context of vehicle energy use, remain scarce. This study investigates the impacts of a driver speed advisory system on energy use in a plug-in hybrid electric vehicle (PHEV) through a controlled experiment using a driving simulator. A mixed urban-highway driving environment was reconstructed by digitizing a real-world route to observe the human driver’s behavior with and without driving assistance. The advisory system provided drivers with an optimized speed profile, pre-calculated for the simulated route to achieve maximum energy efficiency. Participants were instructed to navigate the
To round out this issue's cover story, we spoke with Clement Nouvel, Valeo's chief technical officer for lidar, about Valeo's background in ADAS and what's coming next. Nouvel leads over 300 lidar engineers, and the company's third-generation Scala 3 lidar is used on production vehicles from European and Asian automakers. The Scala 3 sensor system scans the area around a vehicle 25 times per second, can detect objects more than 200 meters (656 ft) away with a wide field of vision and operates at speeds of up to 130 km/h (81 mph) on the highway. In 2023, Valeo secured two contracts for Scala 3, one with an Asian manufacturer and the other with a “leading American robotaxi company,” Valeo said in its most recent annual report. Valeo has now received over 1 billion euros (just under $1.1 billion) in Scala 3 orders. Also in 2023, Valeo and Qualcomm agreed to jointly supply connected displays, clusters, driving assistance technologies and, importantly, sensor technology for two- and three-wheelers
You've got regulations, cost and personal preferences all getting in the way of the next generation of automated vehicles. Oh, and those pesky legal issues about who's at fault should something happen. Under all these big issues lie the many small sensors that today's AVs and ADAS packages require. This big/small world is one topic we're investigating in this issue. I won't pretend I know exactly which combination of cameras and radar and lidar sensors works best for a given AV, or whether thermal cameras and new point cloud technologies should be part of the mix. But the world is clearly ready to spend a lot of money figuring these problems out.
iMotions employs neuroscience and AI-powered analysis tools to enhance the tracking, assessment and design of human-machine interfaces inside vehicles. The advancement of vehicles with enhanced safety and infotainment features has made evaluating human-machine interfaces (HMI) in modern commercial and industrial vehicles crucial. Drivers face a steep learning curve due to the complexities of these new technologies. Additionally, the interaction with advanced driver-assistance systems (ADAS) increases concerns about cognitive impact and driver distraction in both passenger and commercial vehicles. As vehicles incorporate more automation, many clients are turning to biosensor technology to monitor drivers' attention and the effects of various systems and interfaces. Utilizing neuroscientific principles and AI, data from eye-tracking, facial expressions and heart rate are informing more effective system and interface design strategies. This approach ensures that automation advancements
North America's first electric, fully integrated custom cab and chassis refuse collection vehicle, slated for initial customer deliveries in mid-2024, is equipped with a standard advanced driver-assistance system (ADAS). “A typical garbage truck uses commercial off-the-shelf active safety technologies, but the electrified McNeilus Volterra ZSL was purpose-built with active safety technologies to serve our refuse collection customer,” said Brendan Chan, chief engineer for autonomy and active safety at Oshkosh Corporation, McNeilus' parent company. “We wanted to make the safest and best refuse collection truck out there. And by using cloud-based simulation, we could accelerate the development of ADAS and other technologies,” Chan said in an interview with Truck & Off-Highway Engineering during the 2024 dSPACE User Conference in Plymouth, Michigan.
This paper has been withdrawn by the publisher because the authors did not attend and present at WCX 2024.
Traditional autonomous vehicle perception subsystems that use onboard sensors have the drawbacks of high computational load and data duplication. Infrastructure-based sensors, which can provide high-quality information without the computational burden and data duplication, are an alternative to traditional autonomous vehicle perception subsystems. However, these technologies are still in the early stages of development and have not been extensively evaluated for lane detection system performance. Therefore, there is a lack of quantitative data on their performance relative to traditional perception methods, especially during hazardous scenarios such as lane line occlusion, sensor failure, and environmental obstructions. We address this need by evaluating the influence of hazards on the resilience of three different lane detection methods in simulation: (1) traditional camera detection using a U-Net algorithm, (2) radar detections using infrastructure-based radar retro-reflectors (RRs)
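One simple way to quantify the resilience being evaluated above is to compare segmentation quality between nominal and hazard conditions; the sketch below uses intersection-over-union (IoU) on synthetic lane masks with a simulated occlusion. The masks, the occlusion model, and the resilience ratio are illustrative stand-ins, not the paper's actual metrics.

```python
import numpy as np

def iou(pred, truth):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

truth = np.zeros((64, 64), dtype=bool)
truth[:, 30:34] = True               # synthetic straight lane line

pred_nominal = truth.copy()          # perfect detection, no hazard
pred_hazard = truth.copy()
pred_hazard[20:40, :] = False        # simulated occlusion wipes detections

resilience = iou(pred_hazard, truth) / iou(pred_nominal, truth)
print(f"nominal IoU: {iou(pred_nominal, truth):.2f}, "
      f"hazard IoU: {iou(pred_hazard, truth):.2f}, "
      f"resilience ratio: {resilience:.2f}")
```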
The current approach to developing new Advanced Driver Assistance System (ADAS) and Connected and Automated Driving (CAD) functions involves a significant amount of public road testing, which is inefficient due to the number of miles that must be driven before rare and extreme events take place, making it very costly, and unsafe because the rest of the road users become involuntary test subjects. A new method for safe, efficient, and repeatable development, demonstration, and evaluation of ADAS and CAD functions, called Vehicle-in-Virtual-Environment (VVE), was recently introduced as a solution to this problem. During VVE, the vehicle is operated in a large, empty, and flat area while its localization and perception sensor data are fed from the virtual environment, with other traffic and rare and extreme events generated as needed. The virtual environment can be easily configured and modified to construct different testing scenarios on
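A minimal sketch of the VVE data flow described above: the vehicle's real pose drives a virtual environment that synthesizes perception inputs for the function under test, while actuation happens in the empty test area. All class and method names here are illustrative assumptions, not from the paper.

```python
class VirtualEnvironment:
    """Holds virtual actors and synthesizes perception around a real pose."""

    def __init__(self):
        self.virtual_actors = [{"id": "ped_1", "x": 50.0, "y": 1.5}]

    def render_sensors(self, pose):
        return {"objects": [
            {"id": a["id"], "rel_x": a["x"] - pose["x"], "rel_y": a["y"] - pose["y"]}
            for a in self.virtual_actors
        ]}

class StubVehicle:
    """Stands in for the real vehicle driving in the empty test area."""

    def __init__(self):
        self.pose = {"x": 0.0, "y": 0.0}

    def read_localization(self):
        return self.pose  # real GNSS/IMU pose in practice

    def adas_function(self, perception):
        # Toy function under test: brake if a virtual object is within 20 m ahead.
        near = any(0 < o["rel_x"] < 20 for o in perception["objects"])
        return {"brake": 1.0 if near else 0.0, "throttle": 0.0 if near else 0.3}

    def apply_controls(self, cmd):
        self.pose["x"] += 0.5 * cmd["throttle"]  # toy kinematics

def vve_step(vehicle, env):
    pose = vehicle.read_localization()       # real localization
    perception = env.render_sensors(pose)    # virtual camera/radar/lidar feed
    cmd = vehicle.adas_function(perception)  # ADAS/CAD function under test
    vehicle.apply_controls(cmd)              # real actuation, empty flat area

env, veh = VirtualEnvironment(), StubVehicle()
for _ in range(100):
    vve_step(veh, env)
print(veh.pose)
```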