To meet the demand for high-precision, stable positioning of autonomous
driving vehicles in complex urban environments, this paper presents the design
and development of a multi-sensor fusion intelligent driving hardware and
software system based on BDS, IMU, and LiDAR. The system aims to fill the
current gap in hardware platform construction and practical verification for
multi-sensor fusion technology: although multi-sensor fusion positioning
algorithms have advanced significantly in recent years, their application and
validation on real hardware platforms remain limited. To address this, the
system integrates dual BDS antennas, an IMU, and a LiDAR sensor, enhancing
signal reception stability through an optimized antenna layout and an improved
hardware structure that supports real-time data acquisition and processing in
complex environments.
The software design is based on factor graph optimization: global position
measurements from BDS constrain the drift accumulated by the IMU and LiDAR, so
the system maintains accurate positioning through IMU-LiDAR collaboration even
when GNSS signals are degraded or completely unavailable. Experimental results
show that the system’s 3D
positioning error in shaded (signal-occluded) environments remains within 7 cm, with a
convergence time of no more than 40 seconds. Further statistical analysis
reveals a root mean square error (RMSE) of approximately 8 cm and a standard
deviation (STD) of 2 cm. In the simulated indoor-outdoor transition
test, the system’s relative pose error remains within 10 cm, demonstrating its
adaptability and robustness across diverse and complex scenarios.
This study provides a technical reference for the hardware construction and
system validation of multi-sensor fusion technology on autonomous driving
platforms.
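The BDS-constrained drift correction described above can be illustrated with a minimal, hypothetical one-dimensional sketch (not the paper's implementation): drifting relative odometry factors stand in for IMU/LiDAR increments, sparse absolute position factors stand in for intermittent BDS fixes, and the fused trajectory is recovered by weighted linear least squares, the linear special case of factor graph optimization. All names and numbers here are illustrative assumptions.

```python
import numpy as np

# Toy 1-D factor-graph-style fusion (hypothetical illustration, not the
# paper's system): drifting relative odometry constrained by sparse
# absolute position fixes.
n = 11                                      # poses x_0 .. x_10
truth = np.arange(n, dtype=float)           # true positions, 1 m per step

# Relative measurements with a constant drift bias of +0.05 m per step
odom = np.diff(truth) + 0.05

# Absolute fixes only at poses 0, 5, 10 (GNSS available intermittently)
fix_idx = [0, 5, 10]
fixes = truth[fix_idx]

# Stack all factors into a weighted linear least-squares problem A x = b
rows, rhs, wts = [], [], []
for i, d in enumerate(odom):                # odometry factors: x_{i+1} - x_i = d
    r = np.zeros(n); r[i] = -1.0; r[i + 1] = 1.0
    rows.append(r); rhs.append(d); wts.append(1.0)
for i, z in zip(fix_idx, fixes):            # absolute factors: x_i = z
    r = np.zeros(n); r[i] = 1.0
    rows.append(r); rhs.append(z); wts.append(10.0)  # trust fixes more

w = np.array(wts)
A = np.array(rows) * w[:, None]
b = np.array(rhs) * w
x, *_ = np.linalg.lstsq(A, b, rcond=None)   # fused trajectory estimate

# Dead reckoning alone accumulates 0.05 m per step of unbounded drift
dead_reckoning = np.concatenate(([truth[0]], truth[0] + np.cumsum(odom)))
print(np.abs(dead_reckoning - truth).max()) # 0.5 m drift after 10 steps
print(np.abs(x - truth).max())              # bounded, centimeter-level error
```

With dead reckoning alone the per-step bias grows without bound, while the sparse absolute factors pull the estimate back toward truth between fixes, mirroring how global BDS measurements constrain IMU/LiDAR drift in the factor graph.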