Browse Topic: Level 2 (Partial driving automation)

Items (23)
The advent of Vehicle-to-Everything (V2X) communication has revolutionized the automotive industry, particularly with the rise of Advanced Driver Assistance Systems (ADAS). V2X enables vehicles to communicate not only with each other (V2V) but also with infrastructure (V2I) and pedestrians (V2P), enhancing road safety and efficiency. ADAS, which includes features like adaptive cruise control and automatic intersection navigation, relies on V2X data exchange to make real-time decisions and improve driver assistance capabilities. Over the years, the progress of V2X technology has been marked by standardization efforts, increased deployment, and a growing ecosystem of connected vehicles, paving the way for safer and more efficient automated navigation. The EcoCAR Mobility Challenge was a 4-year student competition among 12 universities across the United States and Canada sponsored by the U.S. Department of Energy, MathWorks, and General Motors, where each team received a 2019 Chevrolet
Chowduri, Suhrit; Midlam-Mohler, Shawn; Singh, Karun Prateek
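The V2V exchange described above rests on periodic status broadcasts between vehicles. As a rough illustration, the sketch below models a simplified status message loosely inspired by the SAE J2735 Basic Safety Message, plus a toy follow-gap check; the field names, the flat-earth distance approximation, and the 2 s threshold are all illustrative assumptions, not the EcoCAR team's implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class BasicSafetyMessage:
    """Simplified V2V status message, loosely modeled on the SAE J2735
    Basic Safety Message. Fields are illustrative; a real BSM is
    ASN.1-encoded and carries many more elements."""
    vehicle_id: int      # rotating temporary ID in real deployments
    latitude: float      # degrees
    longitude: float     # degrees
    speed_mps: float     # meters per second
    heading_deg: float   # degrees clockwise from north

def close_following(ego: BasicSafetyMessage, remote: BasicSafetyMessage,
                    gap_threshold_s: float = 2.0) -> bool:
    """Toy V2V safety check: true when the time gap from the ego
    vehicle to the remote vehicle's reported position is short."""
    # Flat-earth approximation; adequate at the ~100 m ranges of V2V.
    dlat_m = (remote.latitude - ego.latitude) * 111_320.0
    dlon_m = ((remote.longitude - ego.longitude) * 111_320.0
              * math.cos(math.radians(ego.latitude)))
    distance_m = math.hypot(dlat_m, dlon_m)
    return ego.speed_mps > 0.5 and distance_m / ego.speed_mps < gap_threshold_s
```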
One of the primary reasons for road accidents is driving while distracted or drowsy. Often, long and monotonous road journeys lead to distracted or drowsy driving. Therefore, there is a need for a system which alerts a distracted or drowsy driver. Moreover, as the levels of autonomy move beyond SAE Level 2, the system assumes a larger share of the dynamic driving task. Under challenging circumstances, the system might ask the driver to take back vehicle control. To guarantee safety, it’s crucial to monitor the driver’s condition in order to assess their readiness to regain control of the vehicle. An advanced safety feature known as a driver monitoring system (DMS), sometimes referred to as a driver state sensing (DSS) system, is designed to monitor a driver’s attentiveness and alertness, providing warnings or alerts to refocus their attention on driving when drowsiness or distraction is detected. This paper presents a novel camera-based driver drowsiness system developed using a
Bhagat, Ajinkya; Kale, Jyoti Ganesh; Pachhapurkar, Ninad; Karle, Manish; Karle, Ujjwala
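The abstract above does not detail the camera pipeline, but a common building block in camera-based drowsiness detection is the eye aspect ratio (EAR) computed from facial landmarks. The sketch below shows that widely used technique under stated assumptions (a 6-point eye landmark layout as produced by common 68-point detectors, and illustrative thresholds); it is not the authors' method.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR from six eye landmarks (Soukupova & Cech, 2016).

    `eye` is a (6, 2) array ordered: outer corner, top-left, top-right,
    inner corner, bottom-right, bottom-left, as produced by common
    68-point facial landmark detectors (e.g., dlib)."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def is_drowsy(ear_history: list[float],
              threshold: float = 0.21,
              min_closed_frames: int = 15) -> bool:
    """Flag drowsiness when the EAR stays below threshold for a
    sustained run of frames (eyes closed, not just a blink).
    Threshold and frame count are illustrative, not tuned values."""
    recent = ear_history[-min_closed_frames:]
    return (len(recent) == min_closed_frames
            and all(e < threshold for e in recent))
```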
The cooperative platoon of multiple trucks with definite proximity has the potential to enhance traffic safety, improve roadway capacity, and reduce fuel consumption of the platoon. To investigate the truck platooning performance in a real-world environment, two Peterbilt class-8 trucks equipped with cooperative truck platooning systems (CTPS) were deployed to conduct the first-of-its-kind on-road commercial trial in Canada. A total of 41 CTPS trips were carried out on Alberta Highway 2 between Calgary and Edmonton during the winter season in 2022, 25 of which were platooning trips with 3 to 5 sec time gaps. The platooning trips were performed at ambient temperatures from −24 to 8°C, and the total truck weights ranged from 16 to 39 tons. The experimental results show that the average time gap error was 0.8 sec for all the platooning trips, and the trips with the commanded time gap of 5 sec generally had the highest variations. The average number of disengagements increased when the
Jiang, Luo; Kheyrollahi, Javad; Koch, Charles Robert; Shahbakhti, Mahdi
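For readers reproducing the time-gap metric reported above, the sketch below shows one plausible way to compute a mean absolute time-gap error from logged gap distance and follower speed; the trial's exact metric definition is assumed here, not confirmed by the abstract.

```python
import numpy as np

def time_gap_error(gap_m: np.ndarray, follower_speed_mps: np.ndarray,
                   commanded_gap_s: float) -> float:
    """Mean absolute time-gap error over a trip.

    Time gap = bumper-to-bumper distance / follower speed; this is an
    assumed definition, the trial's exact metric may differ."""
    valid = follower_speed_mps > 1.0  # ignore near-standstill samples
    actual_gap_s = gap_m[valid] / follower_speed_mps[valid]
    return float(np.mean(np.abs(actual_gap_s - commanded_gap_s)))

# Example: a 5 s commanded gap at ~25 m/s highway speed.
gap = np.array([130.0, 128.0, 124.0, 121.0])
speed = np.array([25.0, 25.2, 24.8, 25.1])
print(time_gap_error(gap, speed, commanded_gap_s=5.0))  # ~0.11 s
```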
This SAE Recommended Practice presents a method and example results for determining the Automotive Safety Integrity Level (ASIL) for automotive motion control electrical and/or electronic (E/E) systems. The ASIL determination activity is required by ISO 26262-3, and it is intended that the process and results herein are consistent with ISO 26262. The technical focus of this document is on vehicle motion control systems. The scope of this SAE Recommended Practice is limited to collision-related hazards associated with motion control systems. This SAE Recommended Practice focuses on motion control systems since the hazards they can create generally have higher ASIL ratings, as compared to the hazards non-motion control systems can create. Because of this, the Functional Safety Committee decided to give motion control systems a higher priority and focus exclusively on them in this SAE Recommended Practice. ISO 26262 has a wider scope than SAE J2980, covering other functions and accidents
Functional Safety Committee
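The ASIL determination that J2980 walks through combines severity (S), exposure (E), and controllability (C) classes per ISO 26262-3. The standard's lookup table happens to reduce to a simple sum, as in the minimal sketch below; the example classification in the final comment is illustrative, not taken from the document.

```python
def asil(severity: int, exposure: int, controllability: int) -> str:
    """ISO 26262-3 ASIL determination from severity (S1-S3),
    exposure (E1-E4), and controllability (C1-C3) classes.

    The standard's table reduces to a sum: S+E+C of 7 maps to ASIL A,
    8 to B, 9 to C, 10 to D, and anything lower to QM.
    (S0/E0/C0 classifications map to QM and are omitted here.)"""
    if not (1 <= severity <= 3 and 1 <= exposure <= 4
            and 1 <= controllability <= 3):
        raise ValueError("S/E/C class out of range")
    total = severity + exposure + controllability
    return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(total, "QM")

# Illustrative example: unintended full braking at highway speed might
# be rated S3 (severe), E4 (high exposure), C3 (hard to control).
print(asil(3, 4, 3))  # ASIL D
```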
ADAS and HMI development are new applications for simulation solutions. The concept of designing, engineering and manufacturing a new vehicle without physical prototypes is typically viewed as either impractical or mythical. Even as virtual development processes have become increasingly capable, experts maintain that hard prototypes are still needed to validate the fidelity of virtual models. But “zero prototypes” is more than a slogan at one of the top providers of real-time simulation and driving simulator solutions. For VI-grade, zero prototypes are a crusade
Brooke, Lindsay
Mercedes-Benz developed an in-house computer operating system to accompany an all-new vehicle platform architecture, enhancing automated driving, OTA updates and other features. Mercedes-Benz revealed in late February that it is developing its own computer operating system, dubbed MB.OS, which it said will be standardized across the company's entire model portfolio when deployment begins “mid-decade” in concert with the introduction of the equally new Mercedes Modular Architecture (MMA) vehicle platform. The MB.OS will have full access to all vehicle domains, including infotainment, automated driving, body and comfort, vehicle dynamics and battery charging. Based on a chip-to-cloud architecture, the company asserted MB.OS “is designed to connect the major aspects of the company's value chain, including development, production, omni-channel commerce and services - effectively making it an operating system for the entire Mercedes-Benz business.” The MB.OS architecture is completely updateable
Visnic, Bill
The automaker's recall of its Full Self Driving Beta leaves a significant dent in automated driving's credibility. On February 16, 2023, the National Highway Traffic Safety Administration announced that Tesla had voluntarily agreed to recall 362,758 Model S, Model X, Model 3 and Model Y vehicles - the entire parc of Tesla models fitted with the beta version of the company's Full Self-Driving (FSD) Beta software. The NHTSA cited FSD's failure to safely operate Tesla vehicles in a variety of common driving situations - while many industry sources contended the recall was proof Tesla no longer could stay one step ahead of the sheriff regarding its insinuations about FSD's capabilities. The tension about Tesla's automated-driving features had been building. In the summer of 2021, the NHTSA started investigating several crashes in which Teslas operating with the Autopilot ADAS system (standard on all models) struck parked emergency vehicles. A subsequent NHTSA report said there were 273
Visnic, Bill
One chip, multiple benefits. That's the claim made by U.S. semiconductor company Qualcomm Technologies Inc. about its new, scalable system-on-a-chip (SoC) product family, called Snapdragon Ride Flex. Unveiled at CES2023 and due to enter the market in early 2024, Snapdragon Flex is the auto industry's first scalable family of SoCs that can run a digital cockpit and ADAS features simultaneously, according to the company. Snapdragon Ride Flex is the latest member of the Snapdragon SoC family. Qualcomm's first-generation Ride Platforms are currently available in commercialized vehicles. Newer generations, which include the Ride Vision stack that can handle ADAS applications, are being tested by Tier 1s. They are expected to arrive on MY2025 vehicles from various OEMs, according to Qualcomm
Blanco, Sebastian
“The future happened yesterday” is an appropriate description of the rapid pace of development in automated-driving technology. The expression may be most accurate in sensor tech where, for most OEMs (except Tesla thus far), radar and lidar increasingly are considered an essential duo for enhanced automated driving beyond SAE Level 2, and of course for full Level 4-5 automation. Current lidar is transitioning from electro-mechanical systems to solid-state devices. Considered by industry experts to be the technology's inevitable future, Lidar 2.0 is next-generation 3D sensing that is software-defined, solid-state and scalable. Engineers believe those capabilities will make lidar ubiquitous by reducing costs, speeding innovation and improving user experience
Dinkel, John
Every new industry sector goes through a consolidation process where the strongest survive, and so it is with automated and autonomous driving technologies. The recent shuttering of Argo AI, one of the autonomous-vehicle industry's leading tech companies, by Ford and Volkswagen might come as a surprise to commuters in San Francisco and in Phoenix, Arizona. Those who regularly use the robotaxi services of GM-backed Cruise Automation and Alphabet's Waymo see these and other AVs under development during their daily travels. On public roads. Every day. Indeed, Argo AI's demise (which insiders said was mainly due to friction between Ford and VW) and difficulties at other startups, including AV pioneer Aurora, have highlighted the engineering challenges of safely achieving SAE Level 4 driving automation, while reinforcing the arguments of AV critics. But as Guidehouse Insights' leading e-Mobility analyst Sam Abuelsamid notes in his Navigator column on page 3, the AV sector's leaders appear to be moving out
Brooke, Lindsay
As the level of automation increases, systems incorporate more sensing, processing of complex algorithms, and actuation. Safety of the intended functionality (SOTIF), which addresses the functional insufficiencies and performance limitations of autonomous functions, becomes increasingly relevant. These insufficiencies and limitations can lead to undesired vehicle behavior: for example, the system may intervene when no critical situation exists (a false-positive scenario that can cause undesired braking), or it may fail to react in a critical situation (a false-negative scenario that can result in no braking when braking is required). To address these situations in the operational system, we develop a SOTIF-compliant system by identifying SOTIF risks and developing suitable measures to mitigate the identified risks. It is also necessary to validate the system in the right vehicle environment to confirm all the mitigation measures are
Krishnan, Shyma; Venkatesh, Praveen Kumar
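One common mitigation pattern for the false-positive intervention case described above is to require a threat to persist across several perception cycles before actuating. The sketch below shows that generic debounce idea; it is an illustrative pattern, not a measure proposed in the paper.

```python
from collections import deque

class BrakeTriggerConfirmation:
    """Debounce an AEB-style trigger: actuate only if the threat is
    confirmed in at least k of the last n perception cycles. Trades a
    small added latency for fewer false-positive interventions (a
    typical SOTIF mitigation pattern; not the paper's measure)."""

    def __init__(self, k: int = 3, n: int = 4):
        self.k = k
        self.window = deque(maxlen=n)

    def update(self, threat_detected: bool) -> bool:
        """Feed one perception cycle; return True to allow actuation."""
        self.window.append(threat_detected)
        return sum(self.window) >= self.k
```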
The latest edition of the US Department of Energy's (DOE) Advanced Vehicle Technology Competition (AVTC) series is the EcoCAR Mobility Challenge (EMC). In the third year of the EMC, the Mississippi State University (MSU) team developed and tested a perception system and a longitudinal controller to achieve SAE level 2 autonomy. Our team leveraged the model-based design approach to iterate between developing software components and executing tests in multiple environments in the loop (XIL) to verify that design requirements are met. This workflow allowed us to detect and resolve issues early in the development process. The perception system is composed of a sensor fusion and tracking algorithm. It relies on detections from a front facing camera and radar to generate tracks for a leading vehicle. The tracks from the perception system are used by a model predictive controller (MPC) to maintain a safe distance to the leading vehicle. A comparison study between test results from different
Taoudi, Amine; Gandy, Jonah; Hudson, Vance; Luo, Chaomin; Follett, Randolph
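The paper's longitudinal controller is an MPC; as a much simpler stand-in with the same objective (regulating a speed-dependent gap to the tracked lead vehicle), the sketch below uses a constant-time-gap feedback law. The gains and comfort limits are illustrative, not the team's tuned values.

```python
def ctg_acceleration(gap_m: float, ego_speed: float, lead_speed: float,
                     time_gap_s: float = 1.8, standstill_m: float = 5.0,
                     k_gap: float = 0.23, k_rel: float = 0.07) -> float:
    """Constant-time-gap car-following law (a simple stand-in for the
    MPC in the paper; gains here are illustrative, not tuned).

    The desired gap grows with speed; command acceleration in
    proportion to the gap error and the relative speed, then clamp
    to comfort/capability limits."""
    desired_gap = standstill_m + time_gap_s * ego_speed
    accel = k_gap * (gap_m - desired_gap) + k_rel * (lead_speed - ego_speed)
    return max(-3.5, min(2.0, accel))  # m/s^2 clamp

# Example: 30 m behind a lead car, both near 25 m/s.
print(ctg_acceleration(gap_m=30.0, ego_speed=25.0, lead_speed=25.0))
```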
Teammate Advanced Drive is a driving support system with state-of-the-art automated driving technology that has been developed for customers’ safe and secure driving on highways based on Toyota’s Mobility Teammate Concept. This SAE Level 2 (L2) system assists overtaking, lane changes, and branching to the destination, in addition to providing hands-free lane centering and car following. The automated driving technology includes self-localization onto a High Definition Map, multi-modal sensing to cover 360 degrees of the surrounding environment using fusion of LiDARs, cameras, and radars, and a redundant architecture to realize fail-safe operation when a malfunction or system limitation occurs. High-performance computing is provided to implement deep learning for predicting and responding to various situations that may be encountered while driving. The system also includes digital data uploading and downloading capabilities wirelessly over-the-air (OTA) in order to provide customers
Kawasaki, Tomoya; Itabashi, Kaiji; Caveney, Derek; Kitago, Masaki; Nara, Yuichiro; Oda, Takashi
GM's latest SAE Level 2 system will enable hands-free operation in “95% of driving scenarios.” GM recently announced the next generation of its hands-free Super Cruise advanced driver-assist system (ADAS), upping the label to “Ultra Cruise.” GM claims the Ultra Cruise system, expected to appear first on Cadillac models in 2023, will ultimately enable hands-free driving on all paved public roads in the U.S. and Canada in “95% of driving scenarios.” At launch next year, the system is expected to cover more than 2 million miles of roads, with the capacity to grow to more than 3.4 million miles. Describing the system as a “door-to-door hands-free driving experience,” GM claims owners of Ultra Cruise-equipped vehicles will be able to travel hands-free across nearly every road, including highways, city and subdivision streets, along with paved rural routes. GM noted that its system has been developed completely in-house (via collaborating teams based in Israel, the U.S., Canada, and Ireland
Seredynski, Paul
In this study, we collect and analyze data on how hands-free automated lane centering systems affect a human operator's controllability of a hazardous event during an operational situation. Through these data and their analysis, we seek to answer the following questions: Is Level 2 and Level 3 automated driving inherently uncontrollable as a result of a steering failure? Or is there some level of operator control of hazardous situations occurring during Level 2 and Level 3 automated driving that can reasonably be expected, given that these systems still rely on a driver as the primary fallback? The controllability focus group experiments were carried out using an instrumented MY15 Jeep® Cherokee with a prototype Level 2 automated driving system that was modified to simulate a hands-free steering system on a closed track at speeds up to 110 km/h. The vehicle was also fitted with supplemental safety measures to ensure experimenter control. The lateral controllability study was
Garbacik, Neil; Mastory, Constantine; Nguyen, Hung; Yadav, Shashikant; Llaneras, Robert; McCall, Robert
As vehicles with SAE level 2 of autonomy become more widely deployed, they still rely on the human driver to monitor the driving task and take control during emergencies. It is therefore necessary to examine the Human Factors affecting a driver’s ability to recognize and execute a steering or pedal action in response to a dangerous situation when the autonomous system abruptly requests human intervention. This research used a driving simulator to introduce the concept of level 2 autonomy to a cohort of 60 drivers (male: 48%, female: 52%) of different age groups (teens 16 to 19: 32%, adults 35 to 54: 37%, seniors 65+: 32%). Participants were surveyed for their perspectives on self-driving vehicles. They were then assessed on a driving simulator that mimicked SAE level 2 of autonomy. Participants’ interaction with the HMI was studied. A real-life scenario was programmed so that a request to intervene was issued when automation reached its boundaries while navigating a two-way curved road
Loeb, Helen S.; Vo-Phamhi, Elizabeth; Seacrist, Thomas; Maheshwari, Jalaj; Yang, Christopher
Existing intelligent cruising assist systems lack comprehensive and objective test and evaluation scenarios. First, this paper analyzes the existing standards related to intelligent cruising assist systems. Then, based on current naturalistic driving data, it analyzes driver behavior and extracts speed, distance, and road-condition information for the cruising scenario. Finally, it designs test and evaluation scenarios for intelligent cruising assist systems: longitudinal control ability in one lane, lateral control ability in one lane, combined longitudinal and lateral control ability in one lane, and automatic lane change ability. The scenarios designed in this paper are used for the test and evaluation of intelligent cruising assist systems at the L2 level
Junfu, Huang; Qiang, Zhang; Chaobin, Li; Chunhong, Xin; Kan, Yi; Liangyi, Yang
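The speed, distance, and road-condition parameters extracted above lend themselves to a declarative scenario definition. The sketch below shows one hypothetical way to encode the paper's four ability categories as test cases; the schema and field names are assumptions, not the paper's format.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Ability(Enum):
    LONGITUDINAL = "longitudinal control in one lane"
    LATERAL = "lateral control in one lane"
    COMBINED = "longitudinal and lateral control in one lane"
    LANE_CHANGE = "automatic lane change"

@dataclass
class CruiseAssistScenario:
    """One test case for an L2 cruising-assist evaluation.

    Field names are hypothetical; the paper does not publish a schema."""
    ability: Ability
    ego_speed_kph: float
    lead_gap_m: Optional[float]   # None when no lead vehicle is staged
    road_condition: str           # e.g. "dry asphalt", "wet", "curve"

scenarios = [
    CruiseAssistScenario(Ability.LONGITUDINAL, 80.0, 40.0, "dry asphalt"),
    CruiseAssistScenario(Ability.LANE_CHANGE, 100.0, None, "dry asphalt"),
]
```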
There is significant potential for connected and autonomous vehicles to impact vehicle efficiency, fuel economy, and emissions, especially for hybrid-electric vehicles. These improvements could have a large-scale impact on oil consumption and air quality if deployed in large Mobility-as-a-Service or ride-sharing fleets. As part of the US Department of Energy's current Advanced Vehicle Technology Competition (AVTC), EcoCAR: The Mobility Challenge, Mississippi State University’s EcoCAR Team is performing the redesign and development work necessary to convert a conventional gasoline spark-ignited 2019 Chevy Blazer into a hybrid-electric vehicle with SAE Level 2 autonomy. The target consumer segments for this effort are Mobility-as-a-Service fleet owners, operators and riders. To accomplish this conversion, the MSU team is implementing a P4 mild hybridization strategy that is expected to result in a 30% increase in fuel economy over the stock Blazer. MATLAB models of the vehicle system
Taoudi, Amine; Haque, Moinul Shahidul; Strzelec, Andrea; Follett, Randolph
With the rapid development of artificial intelligence, autonomous driving technology will ultimately reshape the automotive industry. Although fully autonomous cars are not yet commercially available to ordinary consumers, partially autonomous vehicles, defined as level 2 and level 3 autonomous vehicles by the SAE J3016 standard, are widely tested by automakers and researchers. A typical vehicle human-machine interface (HMI) is designed to support a human-dominant role. Although modern driving assistance systems allow vehicles to take over control in certain scenarios, the typical human-machine interface has not changed dramatically in a long time. With deep learning neural network technologies penetrating automotive applications, multi-modal communication between a driver and a vehicle can be enabled by a cost-effective solution. The multi-modal human-machine interface will allow a driver to easily interact with autonomous vehicles, supporting smooth
Ge, Xinyu; Li, Xinyu; Wang, Ying
The advancement toward autonomy follows either the bottom-up approach of gradually improving and expanding existing Advanced Driver Assist Systems (ADAS) technology, where the driver is present in the control loop, or the top-down approach of directly developing Autonomous Vehicle (AV) hardware and software using alternative approaches, without the driver present in the control loop. Most ADAS systems today fall under the classification of SAE Level 1, also referred to as the driver assistance level. The progression from SAE Level 1 to SAE Level 2, or partial automation, involves the critical task of merging autonomous lateral control and autonomous longitudinal control such that the tasks of steering and acceleration/deceleration are not required to be handled by the driver under certain conditions [1]. However, the driver is still required to monitor the driving environment and handle scenarios where control is handed over to the driver due to subsystem faults of
Joshi, Adit
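To make the Level 1-to-Level 2 merge concrete, the sketch below runs a toy lane-centering law and a toy car-following law in one control step, with a driver-monitoring gate for the hand-back case mentioned above; the gains and structure are illustrative, not the paper's design.

```python
from typing import Optional, Tuple

def l2_control_step(lane_offset_m: float, heading_err_rad: float,
                    gap_m: float, ego_speed: float, lead_speed: float,
                    driver_attentive: bool) -> Optional[Tuple[float, float]]:
    """One cycle of a toy SAE Level 2 controller: lane centering
    (lateral) plus car following (longitudinal), gated by driver
    monitoring. Returns (steer, accel) or None on hand-back.
    All gains are illustrative."""
    if not driver_attentive:
        return None  # hand control back to the driver
    # Lateral: proportional lane centering on offset and heading error.
    steer = -0.1 * lane_offset_m - 0.5 * heading_err_rad
    # Longitudinal: constant-time-gap car following with comfort clamp.
    desired_gap = 5.0 + 1.8 * ego_speed
    accel = 0.2 * (gap_m - desired_gap) + 0.1 * (lead_speed - ego_speed)
    return steer, max(-3.5, min(2.0, accel))
```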
In this work, we outline a process for traffic light detection in the context of autonomous vehicles and driver assistance technology features. For our approach, we leverage the automatic annotations from virtually generated data of road scenes. Using the automatically generated bounding boxes around the illuminated traffic lights themselves, we trained an 8-layer deep neural network, without pre-training, for classification of traffic light signals (green, amber, red). After training on virtual data, we tested the network on real world data collected from a forward facing camera on a vehicle. Our new region proposal technique uses color space conversion and contour extraction to identify candidate regions to feed to the deep neural network classifier. Depending on time of day, we convert our RGB images in order to more accurately extract the appropriate regions of interest and filter them based on color, shape and size. These candidate regions are fed to a deep neural network. In this
Moosaei, Maryam; Zhang, Yi; Micks, Ashley; Smith, Simon; Goh, Madeline J.; Nariyambut Murali, Vidya
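The region-proposal step described above (color space conversion plus contour extraction) can be sketched with OpenCV as below; the HSV thresholds and the size and aspect-ratio filters are illustrative guesses, since the paper's exact values are not given here.

```python
import cv2
import numpy as np

def candidate_light_regions(bgr: np.ndarray) -> list:
    """Propose traffic-light candidate boxes by HSV color thresholding
    plus contour extraction, in the spirit of the paper's pipeline
    (its exact thresholds and filters are not reproduced here)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Keep bright, saturated pixels: illuminated red/amber/green lamps.
    mask = cv2.inRange(hsv, (0, 120, 180), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Filter by size and near-square aspect ratio before handing
        # the cropped region to the deep neural network classifier.
        if 4 <= w <= 80 and 4 <= h <= 80 and 0.5 <= w / h <= 2.0:
            boxes.append((x, y, w, h))
    return boxes
```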