Decentralized perception system with multiple viewpoints
2025-01-8097
To be published on 04/01/2025
- Content
- Vehicle-to-Infrastructure (V2I) cooperation has emerged as a fundamental technology for overcoming the limitations of individual ego-vehicle perception. Onboard perception suffers from a lack of information about the environment, limited anticipation, performance drops due to occlusions, and the physical limits of embedded sensors. Cooperative V2I perception extends the ego-vehicle's perception range by providing it with information from the infrastructure, which observes the scene from other points of view with its own sensors, such as cameras and LiDAR. This technical paper presents a perception pipeline developed for the infrastructure side and based on images from multiple viewpoints. It is designed to be scalable and has five main components: image acquisition, which configures the camera settings and retrieves the pixel data; object detection, which provides fast and accurate detection of four-wheeled vehicles, two-wheeled vehicles, and pedestrians; a data fusion module, which robustly fuses the 2D bounding boxes coming from multiple viewpoints; object tracking, which maintains each object's movement history over time; and the generation of perception messages for V2I communication. The infrastructure-based solution has been implemented and demonstrated in real-world scenarios at two different intersections, with up to six mounted cameras covering an extended area. Qualitative results show that objects are detected with high accuracy and with similar performance in both environments, which demonstrates the scalability of the solution. With a non-optimized setup for these first deployments, the whole pipeline runs in 226 ms to 256 ms when processing six cameras, depending on the number of objects to be fused in the map. (A minimal structural sketch of such a pipeline is given after the citation below.)
- Citation
- Picard, Q., Fadili, M., Morice, M., and Pechberti, S., "Decentralized perception system with multiple viewpoints," SAE Technical Paper 2025-01-8097, 2025.
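
The sketch below illustrates how the five components named in the abstract (image acquisition, object detection, multi-view 2D box fusion, object tracking, and V2I perception message generation) could be chained into a single processing loop. It is a minimal, hypothetical skeleton: all class and method names, data shapes, and stub bodies are assumptions for illustration only, not the authors' implementation.

```python
# Hypothetical skeleton of the five-stage infrastructure pipeline described in the
# abstract. Names, data shapes, and stub bodies are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Detection2D:
    """A 2D bounding box from one camera viewpoint (pixel coordinates)."""
    camera_id: int
    cls: str                # e.g. "four_wheeler", "two_wheeler", "pedestrian"
    box: tuple              # (x_min, y_min, x_max, y_max)
    score: float


@dataclass
class TrackedObject:
    """A fused object with its movement history in a common map frame."""
    track_id: int
    cls: str
    history: List[tuple] = field(default_factory=list)  # (timestamp, x, y)


class InfrastructurePerceptionPipeline:
    """Illustrative chain: acquisition -> detection -> fusion -> tracking -> V2I message."""

    def __init__(self, camera_ids: List[int]):
        self.camera_ids = camera_ids
        self.tracks: Dict[int, TrackedObject] = {}

    def acquire_images(self) -> Dict[int, object]:
        """Grab one frame per camera; camera settings are handled by this module."""
        return {cam: None for cam in self.camera_ids}  # placeholder frames

    def detect(self, frames: Dict[int, object]) -> List[Detection2D]:
        """Run the 2D object detector on every viewpoint."""
        return []  # a real detector would return per-camera boxes here

    def fuse(self, detections: List[Detection2D]) -> List[dict]:
        """Merge 2D boxes from multiple viewpoints into objects in a shared map frame."""
        return []  # e.g. projection to the map plane plus clustering of nearby boxes

    def track(self, fused_objects: List[dict], timestamp: float) -> List[TrackedObject]:
        """Associate fused objects with existing tracks and update their histories."""
        return list(self.tracks.values())

    def build_v2i_message(self, tracks: List[TrackedObject]) -> dict:
        """Serialize tracked objects into a perception message for V2I broadcast."""
        return {"objects": [{"id": t.track_id, "class": t.cls} for t in tracks]}

    def step(self, timestamp: float) -> dict:
        """One end-to-end iteration over all cameras."""
        frames = self.acquire_images()
        detections = self.detect(frames)
        fused = self.fuse(detections)
        tracks = self.track(fused, timestamp)
        return self.build_v2i_message(tracks)
```

In the deployed system each stage would be backed by real camera drivers, a trained detector, and map-level association and tracking; the execution times quoted in the abstract (226 ms to 256 ms for six cameras) cover all five stages together.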