As one of the most challenging driving tasks, parking is a common but particularly troublesome problem in large cities. Recently, an appealing solution, automated valet parking (AVP), has become a popular research topic: it allows the driver to leave the vehicle in a drop-off area, while the vehicle drives into a parking slot by itself. For AVP, precise localization is an indispensable module. However, the global positioning system (GPS) is unavailable in underground parking lots, and lidar-based localization is too expensive. To address this problem, we propose a simultaneous localization and mapping system that exploits the semantic information of parking slots (PS-SLAM), based on visual-inertial data and around-view images. First, multi-sensor calibration is conducted to obtain the intrinsic and extrinsic parameters, from which the around-view image and the transformation matrices between sensors are acquired. Then, ORB-SLAM3, running on visual-inertial information, is used to estimate the pose of the vehicle and build a sparse point cloud map. Next, parking slots in the around-view image are detected by a deep convolutional neural network (DCNN) model called VPS-Net. Finally, a parking-slot association method is devised to associate the detected parking slots with the point cloud map and thereby generate a semantic map. Field experiments are conducted using a drive-by-wire chassis equipped with four fisheye cameras, an inertial measurement unit (IMU), and a monocular camera. The results show that the proposed visual semantic SLAM system not only achieves centimeter-level localization in an indoor parking lot but also generates a semantic map with parking slots.
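To make the final pipeline step more concrete, the sketch below shows one plausible form the parking-slot association could take: detected slot corners are projected from the around-view image into the world frame using the SLAM pose, and a detection is merged with an existing map slot when their centroids fall within a distance threshold. This is only a minimal sketch under assumed conventions, not the paper's implementation; `PIXELS_PER_METER`, `AVM_CENTER`, `ASSOC_THRESHOLD`, the 2D pose parameterization, and all function names are hypothetical.

```python
import numpy as np

# Hypothetical conventions: the around-view image is a metric bird's-eye
# view with a known pixel-to-meter scale, centered on the vehicle origin.
PIXELS_PER_METER = 50.0
AVM_CENTER = np.array([256.0, 256.0])  # image center, pixels (assumed)
ASSOC_THRESHOLD = 0.5                  # max centroid distance (m) to merge slots (assumed)


def avm_to_vehicle(corners_px):
    """Convert slot corners from around-view pixels to the vehicle frame
    (x forward, y left), assuming a metric bird's-eye view."""
    offset = corners_px - AVM_CENTER
    # Image u points right and v points down; flip both to get a
    # right-handed vehicle frame with x forward (up in the image).
    return np.stack([-offset[:, 1], -offset[:, 0]], axis=1) / PIXELS_PER_METER


def vehicle_to_world(corners_v, pose):
    """Transform corners into the world frame given a planar SLAM pose (x, y, yaw)."""
    x, y, yaw = pose
    R = np.array([[np.cos(yaw), -np.sin(yaw)],
                  [np.sin(yaw),  np.cos(yaw)]])
    return corners_v @ R.T + np.array([x, y])


def associate(detected_world, map_slots):
    """Merge a detection with an existing map slot if their centroids are
    within ASSOC_THRESHOLD; otherwise register it as a new slot."""
    centroid = detected_world.mean(axis=0)
    for slot in map_slots:
        if np.linalg.norm(slot["centroid"] - centroid) < ASSOC_THRESHOLD:
            # Running average smooths corner estimates over repeated observations.
            n = slot["count"]
            slot["corners"] = (slot["corners"] * n + detected_world) / (n + 1)
            slot["centroid"] = slot["corners"].mean(axis=0)
            slot["count"] = n + 1
            return slot
    new_slot = {"corners": detected_world, "centroid": centroid, "count": 1}
    map_slots.append(new_slot)
    return new_slot
```

Averaging corner positions across repeated observations is one simple way such a semantic map could stay consistent as the vehicle revisits the same slots; the actual association criterion and map representation would depend on the system's design.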