Camera-Radar Data Fusion for Target Detection via Kalman Filter and Bayesian Estimation
ISSN: 0148-7191, e-ISSN: 2688-3627
Published August 07, 2018 by SAE International in United States
Target detection is essential to advanced driver assistance systems (ADAS) and automated driving, and fusing data from millimeter-wave radar and a camera can provide more accurate and complete target information, enhancing environmental perception. In this paper, a vehicle and pedestrian detection method based on the fusion of millimeter-wave radar and camera data is proposed to improve the accuracy of target distance estimation. The first step is target data acquisition: a deep learning model, the Single Shot MultiBox Detector (SSD), detects targets in consecutive video frames captured by the camera and is further optimized for real-time performance and accuracy. Second, the camera and radar coordinate systems are unified through a coordinate transformation matrix. Parallel Kalman filters then track the targets detected by the radar and the camera respectively; because the two sensors provide different target data, a separate Kalman filter is designed for each. Finally, the target data are fused based on Bayesian estimation. Several simulation experiments were first designed to test and optimize the proposed method, which was then validated on real data. The experiments show that the Kalman filters considerably reduce measurement noise and that the fusion algorithm improves distance estimation accuracy.
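As a rough illustration of the last two stages described in the abstract (per-sensor Kalman tracking followed by Bayesian fusion), the sketch below runs two independent Kalman filters on simulated radar and camera range measurements and fuses their position estimates by inverse-variance weighting, the closed form of Bayesian fusion for independent Gaussian estimates. The 1-D constant-velocity state, time step, and all noise variances are illustrative assumptions, not the paper's actual filter designs, and the detection and coordinate-unification stages are omitted.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict-update cycle of a linear Kalman filter."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def fuse_gaussian(m1, v1, m2, v2):
    """Bayesian fusion of two independent Gaussian estimates:
    the posterior mean is the inverse-variance weighted average."""
    v = 1.0 / (1.0 / v1 + 1.0 / v2)
    m = v * (m1 / v1 + m2 / v2)
    return m, v

# Constant-velocity model shared by both filters (dt and noise are illustrative).
dt = 0.05
F = np.array([[1.0, dt], [0.0, 1.0]])  # state: [distance, range rate]
H = np.array([[1.0, 0.0]])             # both sensors observe distance only
Q = 0.01 * np.eye(2)                   # process noise (assumed)
R_radar = np.array([[0.05]])           # radar: low range variance (assumed)
R_cam = np.array([[0.50]])             # camera: higher range variance (assumed)

x_r = x_c = np.array([20.0, -1.0])     # initial state: 20 m, closing at 1 m/s
P_r = P_c = np.eye(2)

rng = np.random.default_rng(0)
for k in range(100):
    true_d = 20.0 - 1.0 * dt * k
    z_r = np.array([true_d + rng.normal(0.0, 0.05**0.5)])
    z_c = np.array([true_d + rng.normal(0.0, 0.50**0.5)])
    x_r, P_r = kalman_step(x_r, P_r, z_r, F, H, Q, R_radar)
    x_c, P_c = kalman_step(x_c, P_c, z_c, F, H, Q, R_cam)
    d_fused, v_fused = fuse_gaussian(x_r[0], P_r[0, 0], x_c[0], P_c[0, 0])

print(f"fused distance: {d_fused:.2f} +/- {v_fused**0.5:.3f} m (truth {true_d:.2f} m)")
```

Because the fused variance is always smaller than either sensor's individual variance, the fused distance estimate is never worse than the better of the two filters, which is consistent with the accuracy improvement the paper reports.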
Citation: Yu, Z., Bai, J., Chen, S., Huang, L. et al., "Camera-Radar Data Fusion for Target Detection via Kalman Filter and Bayesian Estimation," SAE Technical Paper 2018-01-1608, 2018, https://doi.org/10.4271/2018-01-1608.
References
- Wang, X., Xu, L., Sun, H. et al., “Bionic Vision Inspired On-Road Obstacle Detection and Tracking Using Radar and Visual Information,” IEEE International Conference on Intelligent Transportation Systems, 2014, 39-44. IEEE.
- Han, S., Wang, X., Xu, L. et al., “Frontal Object Perception for Intelligent Vehicles Based on Radar and Camera Fusion,” Control Conference, 2016, 4003-4008. IEEE.
- Alencar, F.A.R., Rosero, L.A., Massera Filho, C. et al., “Fast Metric Tracking by Detection System: Radar Blob and Camera Fusion,” 2015 12th Latin American Robotics Symposium and 2015 3rd Brazilian Symposium on Robotics (LARS-SBR), 2015, 120-125. IEEE.
- Bi, X., Tan, B., Xu, Z. et al., “A New Method of Target Detection Based on Autonomous Radar and Camera Data Fusion,” Intelligent and Connected Vehicles Symposium, 2017.
- Liu, W., Anguelov, D., Erhan, D. et al., “SSD: Single Shot MultiBox Detector,” European Conference on Computer Vision, 2016, 21-37. Springer, Cham.
- Geiger, A., Lenz, P., Urtasun, R., “Are We Ready for Autonomous Driving? The KITTI Vision Benchmark Suite,” 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, 3354-3361. IEEE.
- Arróspide, J., Salgado, L., and Nieto, M., “Video Analysis Based Vehicle Detection and Tracking Using an MCMC Sampling Framework,” EURASIP Journal on Advances in Signal Processing, 2012, Article ID 2012:2, Jan. 2012, doi:10.1186/1687-6180-2012-2.
- Gonzalez, A., Fang, Z., Socarras, Y., Serrat, J., Vazquez, D., Xu, J., and Lopez, A., “Pedestrian Detection at Day/Night Time with Visible and FIR Cameras: A Comparison,” Sensors, in press, 2016.
- Kumar, M., Garg, D.P., and Zachery, R., “A Generalized Approach for Inconsistency Detection in Data Fusion from Multiple Sensors,” American Control Conference, 2006, 6 pp. IEEE Xplore.