The majority of road accidents are caused by human error. Advanced Driver Assistance Systems (ADAS) have the potential to reduce human error and improve driving safety, and customers have shown growing acceptance of ADAS technology. With the rising demand for safety and a comfortable driving experience, the global ADAS market is expected to grow to $67 billion by 2025.
A reliable ADAS requires an accurate and robust object-detection system, and tuning such a system involves a trade-off: missed detections can cause accidents, while false detections can result in ghost braking and degrade the driving experience. An ADAS has access to information from several sources, yet a unified confidence model that combines these different indicators has received little attention in the literature. In this paper, we propose a data-driven method that uses features from the radar, the camera, and the tracking system to produce a high-level confidence model. In addition, different regions around the ego vehicle typically place different emphasis on detection errors, depending on the system design requirements. We can therefore tune the model towards these requirements by adjusting the classifier threshold for each region of interest.
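To make the idea concrete, the sketch below shows one way such a region-aware confidence model could be structured: a binary classifier over fused radar, camera, and tracking features, with a per-region decision threshold. This is a minimal illustration, not the paper's implementation; the feature names, the choice of a gradient-boosting classifier, and the threshold values are all assumptions made for the example.

```python
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical fused feature vector per tracked object: radar, camera,
# and tracker indicators. Names are illustrative, not from the paper.
FEATURES = [
    "radar_rcs", "radar_range", "radar_range_rate",   # radar features
    "camera_score", "camera_bbox_height",             # camera features
    "track_age", "track_innovation",                  # tracking features
]

def train_confidence_model(X, y):
    """Fit a binary classifier: 1 = real object, 0 = false detection."""
    model = GradientBoostingClassifier()
    model.fit(X, y)
    return model

# Region-dependent thresholds (illustrative values): lower thresholds favor
# recall where a missed detection is dangerous; higher thresholds favor
# precision where a false detection would trigger ghost braking.
REGION_THRESHOLDS = {
    "in_path": 0.30,        # ego path: a miss here risks a collision
    "adjacent_lane": 0.60,
    "far_field": 0.80,      # far range: avoid ghost braking
}

def confirm_object(model, x, region):
    """Confirm a detection if its confidence exceeds the region's threshold."""
    confidence = model.predict_proba(x.reshape(1, -1))[0, 1]
    return confidence >= REGION_THRESHOLDS[region]
```

A single trained model can then serve several functions at once: the same confidence score is compared against a stricter threshold for comfort-oriented features and a looser one where safety requires high recall.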
The proposed method was validated on real-world driving data and showed improved performance with respect to the design requirements of the Adaptive Cruise Control (ACC) and Autonomous Emergency Braking (AEB) functions.