Browse Topic: Advanced driver assistance systems (ADAS)
The rapid advancement of advanced driver assistance systems (ADAS), automated driving, and electrification has significantly increased the software content and complexity of modern vehicles. Consequently, ensuring both high process quality and compliance with functional safety standards becomes critically important. Automotive Software Process Improvement and Capability Determination (ASPICE 4.0) focuses on process quality and capability maturity, while ISO 26262:2018 emphasizes engineering guidelines for functional safety and risk mitigation. Efficiently integrating the process model and the standard remains a key challenge due to differences in their objectives, terminologies, and assessment criteria. Misalignment between ASPICE 4.0 and ISO 26262:2018 often results in duplicated effort, rework of work products, and delays in product release schedules. This paper proposes a unified framework to bridge ASPICE 4.0 process areas with ISO 26262:2018 safety
Despite remarkable advances in vehicle technology that enhance comfort, safety, and automation, the productivity of road transportation continues to decline. Stop-and-go driving remains one of the most persistent inefficiencies in modern mobility systems, leading to greater travel delays, energy waste, emissions, and accident risk. As vehicle volumes rise, these effects compound into systemic challenges, including driver frustration, unstable flow dynamics, and elevated greenhouse gas (GHG) emissions. To address these issues, an extensive data-driven evaluation was performed to characterize the underlying causes of traffic instability and uncover hidden behavioral parameters influencing traffic flow. This research led to the identification of a previously unrecognized metric, the Driver Comfort Index (DCI), which quantifies an inter-vehicle spacing behavior that reflects intrinsic human driving tendencies. Building on this discovery, mixed traffic is explored to identify its
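The abstract does not give the DCI formula, so the sketch below is only a hypothetical proxy for a spacing-based comfort metric: it summarizes a driver's time headway (gap divided by speed) and penalizes erratic spacing. The function name, inputs, and normalization are all assumptions for illustration, not the paper's definition.

```python
import numpy as np

def comfort_index(gaps_m, speeds_mps, eps=0.1):
    """Hypothetical proxy for a spacing-based comfort metric.

    Summarizes one driver's time headway (gap / speed) as a single
    number: the median headway normalized by its variability, so
    steady spacing scores higher than erratic spacing.
    """
    gaps = np.asarray(gaps_m, dtype=float)
    speeds = np.clip(np.asarray(speeds_mps, dtype=float), eps, None)
    headway = gaps / speeds          # seconds of separation to the leader
    spread = headway.std() + eps     # penalize erratic spacing behavior
    return float(np.median(headway) / spread)

# Example: steady ~2 s headways score higher than erratic ones.
print(comfort_index([30, 31, 29, 30], [15, 15.2, 14.8, 15]))
print(comfort_index([10, 45, 12, 50], [15, 15, 15, 15]))
```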
Achieving full vehicle autonomy is not just about adding sensors or compute; it requires a fundamental shift in how vehicles are architected. Autonomous systems rely on higher-resolution sensors, massive processing power, and the ability to fuse data from multiple sources in real time. Centralized in-vehicle architectures, which consolidate computing and enable sensor fusion, place unprecedented demands on connectivity. Precise time synchronization across systems becomes critical, as does advanced control to ensure safe and reliable operation. Any delay or data loss can impact decision-making, making robust, resilient communication links essential. High-performance connectivity is the backbone of this evolution. It must deliver the highest bandwidth to handle massive streams of sensor data, support long-reach connections across the vehicle, and maintain error-free performance even in the most challenging electromagnetic environments. This combination of speed, reach, and reliability
Edge detection is fundamental for intelligent vehicle applications, directly supporting ADAS functions such as lane detection, obstacle recognition, and scene understanding. The conventional Canny edge detection method exhibits notable shortcomings, especially in color-image processing, adaptive threshold selection, and preserving edge integrity under noisy conditions. In this study, we present an enhanced Canny edge detection framework tailored for ADAS-oriented intelligent vehicle systems, incorporating a quaternion-based weighted averaging scheme for color preservation, adaptive thresholds derived from gradient-amplitude histograms, multiscale edge localization via scale multiplication, and a novel gravitational-field-intensity operator for improved gradient robustness. Moreover, we extend the method to vanishing-point estimation, an essential ADAS capability, by performing precise intersection calculations combined with clustering and robust-estimation techniques such as DBSCAN and RANSAC. Experimental
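As a concrete illustration of one piece of this pipeline, the sketch below derives Canny thresholds from the gradient-magnitude histogram rather than fixed constants. The percentile, low/high ratio, blur kernel, and the file name road.png are assumptions, not values from the paper, and the quaternion color weighting, scale multiplication, and gravitational-field operator are omitted.

```python
import cv2
import numpy as np

def adaptive_canny(bgr, high_pct=90, low_ratio=0.4):
    """Sketch: pick Canny thresholds from the gradient-magnitude
    histogram of the frame instead of hard-coding them."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 1.4)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)
    high = float(np.percentile(mag, high_pct))  # strong-edge threshold
    low = low_ratio * high                      # weak-edge threshold
    return cv2.Canny(gray, low, high)

img = cv2.imread("road.png")  # placeholder input frame
if img is not None:
    edges = adaptive_canny(img)
```

Tying the high threshold to a percentile of the gradient histogram keeps a roughly constant fraction of pixels as edge candidates across frames with different contrast, which is one plausible reading of "adaptive thresholds derived from gradient-amplitude histograms."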
Autonomous platforms such as self-driving vehicles, advanced driver-assistance systems (ADAS), and intelligent aerial drones demand real-time video perception systems capable of delivering actionable visual information at ultra-low latency. High-resolution vision pipelines are often hindered by delays introduced at multiple stages (sensor acquisition, video encoding, data transmission, decoding, and display), undermining the responsiveness required for safety-critical decision making. This study introduces a holistic system-level optimization framework that systematically reduces end-to-end video latency while maintaining image fidelity and perception accuracy. The proposed approach integrates hardware-accelerated encoding, zero-copy direct memory access (DMA), lightweight UDP-based RTP transport, and GPU-accelerated decoding into a unified pipeline. By minimizing redundant memory copies and software bottlenecks, the system achieves seamless data flow across hardware and software
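The abstract's exact pipeline is not given, but the pattern it describes (low-latency encoding, RTP over UDP, decoding without sync stalls) can be sketched with standard GStreamer elements. The device, host, port, and the software x264enc encoder below are placeholders; a real deployment would substitute the platform's hardware encoder/decoder elements and DMA-backed buffers.

```python
import subprocess

# Sender: camera -> low-latency H.264 -> RTP packets over UDP.
SEND = (
    "gst-launch-1.0 v4l2src device=/dev/video0 ! "
    "video/x-raw,width=1280,height=720,framerate=30/1 ! videoconvert ! "
    "x264enc tune=zerolatency speed-preset=ultrafast key-int-max=30 ! "
    "rtph264pay config-interval=1 pt=96 ! "
    "udpsink host=192.168.1.50 port=5600 sync=false"
)

# Receiver: depacketize, decode, and display with a small jitter buffer
# and no clock synchronization, so frames render as soon as they arrive.
RECV = (
    "gst-launch-1.0 udpsrc port=5600 "
    "caps=\"application/x-rtp,media=video,encoding-name=H264,payload=96\" ! "
    "rtpjitterbuffer latency=10 ! rtph264depay ! avdec_h264 ! "
    "videoconvert ! autovideosink sync=false"
)

subprocess.Popen(SEND, shell=True)  # run RECV on the receiving host
```

The key latency levers here are tune=zerolatency (no B-frame lookahead), a short jitter buffer, and sync=false on the sink, which together trade buffering safety margin for responsiveness.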
To achieve fully autonomous driving, point-to-point autonomous navigation is an essential task. Most existing end-to-end models output a short-horizon path, which makes the decision process hard to interpret and unreliable at intersections and in complex driving scenarios. In this research, we build a navigation-integrated end-to-end path planner on top of the open-source openpilot model. We created a navigation branch that encodes route polyline geometry, distance-to-next-maneuver, and high-level instructions, and fuses them with the path-planning branch using residual blocks and feed-forward layers. With minimal added parameters, the new model keeps the original openpilot tasks unchanged while conditioning the path output on the navigation information. The model is trained on intersections from diverse urban scenes, and it shows improved route-following performance in vehicle testing. The proposed model is validated on a Comma 3X device installed in a 2025 Nissan Leaf test vehicle. The road test results
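openpilot's internal model layout is not reproduced here; the PyTorch sketch below only illustrates the fusion pattern the abstract describes: a small branch encodes route polyline points, distance-to-next-maneuver, and a high-level instruction, then adds a learned residual correction to the existing path-plan features. All layer sizes, input shapes, and names are assumptions.

```python
import torch
import torch.nn as nn

class NavBranch(nn.Module):
    """Illustrative fusion of navigation inputs into a path-plan head."""
    def __init__(self, nav_dim=64, feat_dim=512):
        super().__init__()
        # Encode 16 polyline points (x, y) + scalar distance + 8-way
        # maneuver instruction into a compact navigation feature.
        self.nav_enc = nn.Sequential(
            nn.Linear(2 * 16 + 1 + 8, 128), nn.ReLU(),
            nn.Linear(128, nav_dim),
        )
        # Residual fusion: base features pass through unchanged and the
        # navigation signal contributes only a learned correction, so the
        # original tasks are minimally disturbed.
        self.fuse = nn.Sequential(
            nn.Linear(feat_dim + nav_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, vision_feat, polyline, dist, instruction):
        nav = self.nav_enc(
            torch.cat([polyline.flatten(1), dist, instruction], dim=1))
        return vision_feat + self.fuse(
            torch.cat([vision_feat, nav], dim=1))

# Example shapes: batch of 1, 512-d vision features, 16-point route.
out = NavBranch()(torch.randn(1, 512), torch.randn(1, 16, 2),
                  torch.randn(1, 1), torch.zeros(1, 8))
```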
Dooring accidents occur when a vehicle door is opened into the path of an approaching cyclist, motorcyclist, or other road user, often causing serious collisions and injuries. These incidents are a major road-safety concern, particularly in densely populated urban areas where heavy traffic, narrow roads, and inattentive behavior increase the likelihood of such events. To address this challenge, this project presents an intelligent computer-vision-based warning system designed to detect approaching vehicles and alert occupants before they open a door. The system can operate using either a vehicle's existing rear parking camera or a USB webcam in vehicles without one. The captured live video stream is processed by a Raspberry Pi 4 single-board computer, chosen for its compact size, low power consumption, and ability to run machine learning frameworks. The video feed is analyzed in real time using MobileNetSSD, a lightweight deep learning object detection model optimized
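A minimal detection loop of this kind, assuming the widely distributed Caffe release of MobileNet-SSD and OpenCV's DNN module, might look like the sketch below. The model file names, camera index, confidence threshold, and VOC class indices for vehicles are assumptions for illustration, not details from the project.

```python
import cv2

# Placeholder paths for the standard Caffe MobileNet-SSD release.
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
VEHICLES = {6, 7, 14}  # bus, car, motorbike in the 20-class VOC label set

cap = cv2.VideoCapture(0)  # rear parking camera or USB webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()  # shape: (1, 1, N, 7)
    for i in range(detections.shape[2]):
        conf = detections[0, 0, i, 2]
        cls = int(detections[0, 0, i, 1])
        if conf > 0.5 and cls in VEHICLES:
            print("approaching vehicle, confidence %.2f" % conf)
            # here: trigger an audible/visual door-opening alert
```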
Treat foundational AV safety like seatbelts: make it non-proprietary and universal. An open safety stack, with shared scenarios, benchmarks, and core validation tools, can speed certification, reduce duplicated V&V, and build public trust while preserving vendor differentiation. The bottleneck isn't compute; it's verification. Autonomous features are shipping in more vehicles and markets, but the gating factor is no longer raw compute. It's whether developers and regulators can verify systems against requirements and validate them against real-world operational design domains (ODDs) with confidence and repeatability. Today, many safety-critical components, from scenario libraries to pass/fail criteria, live in proprietary silos. That fragmentation slows regression testing, complicates regulator audits across regions, and duplicates effort across the industry. The result is an expensive, bespoke path to certification for every program and geography.