This document describes machine-to-machine (M2M) communication to enable cooperation between two or more traffic participants or CDA devices hosted or controlled by said traffic participants. The cooperation supports or enables performance of the dynamic driving task (DDT) for a subject vehicle equipped with an engaged driving automation system feature and a CDA device. Other participants may include other vehicles with driving automation feature(s) engaged, shared road users (e.g., drivers of conventional vehicles, or pedestrians or cyclists carrying compatible personal devices), or compatible road operator devices (e.g., those used by personnel who maintain or operate traffic signals or work zones). Cooperative driving automation (CDA) aims to improve the safety and flow of traffic and/or facilitate road operations by supporting the safer and more efficient movement of multiple vehicles in proximity to one another. This is accomplished, for example, by sharing information that can be
The rapid development of autonomous vehicles necessitates rigorous testing under diverse environmental conditions to ensure their reliability and safety. One of the most challenging scenarios for both human and machine vision is navigating through rain. This study introduces the Digitrans Rain Testbed, an innovative outdoor rain facility specifically designed to test and evaluate automotive sensors under realistic and controlled rain conditions. The rain plant features a wetted area of 600 square meters and a sprinkled rain volume of 600 cubic meters, providing a comprehensive environment to rigorously assess the performance of autonomous vehicle sensors. Rain poses a significant challenge due to the complex interaction of light with raindrops, leading to phenomena such as scattering, absorption, and reflection, which can severely impair sensor performance. Our facility replicates various rain intensities and conditions, enabling comprehensive testing of Radar, Lidar, and Camera
The rapid development of open-source Automated Driving System (ADS) stacks has created a pressing need for clear guidance on their evaluation and selection for specific use cases. This paper introduces a scenario-based evaluation framework combined with a modular simulation framework, offering a scalable methodology for assessing and benchmarking ADS solutions, including but not limited to off-the-shelf designs. The study highlights the lack of clear Operational Design Domain (ODD) descriptions in such systems. Without a common understanding, users must rely on subjective assumptions, which hinders the process of accurate system selection. To address this gap, the study proposes adopting a standardised ISO 34503 ODD description format within the ADS stacks. The application of the proposed framework is showcased through a case study evaluating two open-source systems, Autoware and Apollo. By first defining the assumed system’s ODD, then selecting a relevant scenario, and establishing
Reproducing driving scenarios involving near-collisions and collisions in a simulator can be useful in the development and testing of autonomous vehicles, as it provides a safe environment to explore detailed vehicular behavior during these critical events. CARLA, an open-source driving simulator, has been widely used for reproducing driving scenarios. CARLA allows for both manual control and traffic manager control (the module that controls vehicles in an autopilot manner in the simulation). However, current versions of CARLA are limited to setting the start and destination points for vehicles controlled by the traffic manager, and are unable to replay precise waypoint paths collected from real-world collision and near-collision scenarios, because collision-free pathfinding modules are built into the system. This paper presents an extension to CARLA’s source code, enabling the replay of exact vehicle trajectories, irrespective of safety implications
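The core of such a replay mechanism can be sketched as forcing a recorded pose onto the vehicle at every synchronous simulation step, rather than handing control to the traffic manager. The sketch below keeps the simulator handle duck-typed so the logic is visible without a running CARLA server; in CARLA itself, `set_pose` would wrap `carla.Actor.set_transform` and `tick` would call `world.tick()` in synchronous mode. The function name and waypoint format are illustrative, not taken from the paper.

```python
def replay_trajectory(set_pose, tick, waypoints):
    """Replay recorded poses exactly, bypassing any pathfinding.

    set_pose:  callable taking (x, y, z, yaw_deg), e.g. a wrapper around
               carla.Actor.set_transform on a vehicle with physics disabled
    tick:      callable advancing one synchronous simulation step
    waypoints: list of (x, y, z, yaw_deg) poses recorded from a real
               collision or near-collision event
    """
    for pose in waypoints:
        set_pose(*pose)  # force the exact recorded pose; no safety checks
        tick()           # advance the simulation by one fixed step
```

Because the pose is imposed directly each tick, collisions present in the recorded data are reproduced rather than avoided, which is precisely the behavior stock CARLA prevents.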
Safety Management Systems (SMSs) have been used in many safety-critical industries and are now being developed and deployed in the automated driving system (ADS)-equipped vehicle (AV) sector. Industries with decades of SMS deployment have established frameworks tailored to their specific context. Several frameworks for an AV industry SMS have been proposed or are currently under development. These frameworks borrow heavily from the aviation industry, although the AV and aviation industries differ in many significant ways. In this context, there is a need to review the approach to developing an SMS that is tailored to the AV industry, building on generalized lessons learned from other safety-sensitive industries. A harmonized AV-industry SMS framework would establish a single set of SMS practices to address management of broad safety risks in an integrated manner and advance the establishment of a more mature regulatory framework. This paper outlines a proposed SMS framework for the AV
As ADAS features move from SAE Level 0 toward SAE Level 5 autonomy of operation, reliance on AI/ML-based algorithms in critical ADAS functions like perception, fusion, and path planning is increasing. AI/ML-based algorithms offer exceptional performance of ADAS features, but these advanced algorithms also bring safety challenges. This paper explores the functional safety aspects of AI/ML-based systems in ADAS functions like perception, object fusion, and path planning, by discussing safety requirements development for AI/ML systems, the dataset safety lifecycle, verification and validation of AI systems, and safety analysis used for AI systems. Among all the safety aspects listed above, emphasis is placed on the dataset safety lifecycle, as it is not only the most important element for training ML-based algorithms for ADAS usage, but also the most cumbersome and expensive. The safety characteristics associated with dataset
While some developers of autonomous technology for commercial trucks have stalled out, there's renewed energy to deliver augmented ADAS and automated driving systems to mass production. After a tumultuous 2023 that saw several autonomous trucking startups pivot out of or exit the arena entirely, there has been a recent resurgence of investment and efforts to bring the vision of driverless freight fleets to reality. In the wake of firms like Embark, TuSimple and Waymo scaling back or rolling up operations, Aurora, Continental and Knorr-Bremse have all announced continued development of SAE Level 4 systems with the intention to deploy trucks using these systems at scale. OEMs such as Volvo Trucks have also announced updates to existing technologies that will augment current advanced driver-assistance systems (ADAS) to help human drivers become safer behind the wheel.
This SAE Edge Research Report explores advancements in next-generation mobility, focusing on digitalized and smart cockpits and cabins. It offers a literature review examining current customer experiences with traditional vehicles and future mobility expectations. Key topics include integrating smart cockpit and cabin technologies, addressing challenges in customer and user experience (UX) in digital environments, and discussing strategies for transitioning from traditional vehicles to electric ones while educating customers. User Experience for Digitalized and Smart Cockpits and Cabins of Next-gen Mobility covers both on- and off-vehicle experiences, analyzing complexities in developing and deploying digital products and services with effective user interfaces. Emphasis is placed on meeting UX requirements, gaining user acceptance, and avoiding trust issues due to poor UX. Additionally, the report concludes with suggestions for improving UX in digital products and services for future
Autonomous vehicles utilise sensors, control systems and machine learning to independently navigate and operate through their surroundings, offering improved road safety, traffic management and enhanced mobility. This paper details the development, software architecture and simulation of control algorithms for key functionalities in a model that approaches Level 2 autonomy, utilising MATLAB Simulink and IPG CarMaker. The focus is on four critical areas: Autonomous Emergency Braking (AEB), Adaptive Cruise Control (ACC), Lane Detection (LD) and Traffic Object Detection. The integration of low-level PID controllers for precise steering, braking and throttle actuation ensures smooth and responsive vehicle behaviour. The hardware architecture is built around the Nvidia Jetson Nano and multiple Arduino Nano microcontrollers, each responsible for controlling specific actuators within the drive-by-wire system, which includes the steering, brake and throttle actuators. Communication
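A low-level actuation controller of the kind described above can be sketched as a discrete PID loop with output clamping to the actuator's range. This is a minimal illustration, not the paper's implementation; the gains, the clamp limits, and the class interface are all assumptions for the example.

```python
class PID:
    """Discrete PID controller with output clamping (anti-saturation).

    Gains (kp, ki, kd) and output limits are illustrative; a real
    steering/brake/throttle loop would tune them per actuator.
    """

    def __init__(self, kp, ki, kd, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None  # no derivative term on the first step

    def step(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / dt)
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        # clamp to the actuator's normalized command range
        return max(self.out_min, min(self.out_max, u))
```

In a drive-by-wire setup such as the one described, one instance per actuator (steering, brake, throttle) would run at a fixed rate on the microcontroller, with the higher-level feature (e.g., ACC) supplying the setpoint.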