Rainfall, as a common trigger condition in the Safety of the Intended
Functionality (SOTIF) framework, can impair autonomous driving perception
systems, leading to unexpected functional failures. However, studies focusing on
sensor performance degradation under natural rainfall conditions are limited,
primarily due to the lack of datasets with detailed rainfall information. To
address this gap, this study presents RainSense, a multi-sensor autonomous
driving dataset collected under natural rainfall conditions, featuring
fine-grained rainfall intensity annotations. RainSense was recorded at nine
representative intersection scenarios on campus, where a single dummy target
was placed at various distances as a detection target. A laser-optical
disdrometer was deployed to continuously measure rainfall intensity (mm/h),
while camera images, lidar point clouds, and 4D radar data were synchronously
collected under different rainfall levels. In total, the dataset comprises 728
cases, including 145 with clear conditions, 214 with light rain, 204 with
moderate rain, 98 with heavy rain, and 67 with torrential rain. Each case is
segmented into 10-second windows and includes 2D and 3D bounding box labels of
the dummy target. To investigate how rainfall affects different perception
modalities, perception metrics were applied to each sensor type. Results reveal
that under heavy and torrential rain, camera images suffer from blur, while
lidar point returns become sparser and weaker, both leading to substantial
perception degradation. In contrast, radar shows minimal variation across all
rain levels, maintaining stable signal characteristics and demonstrating strong
resilience to adverse weather conditions. The dataset and benchmark suite will
be released open-source at: https://github.com/IVtest-Lab/RainSense.git.