Browse Topic: Lidar
Lane-keeping is critical for SAE Level 3+ autonomous vehicles, requiring rigorous validation and end-to-end interpretability. All recently U.S.-approved Level 3 vehicles are equipped with lidar, likely to accelerate active safety. Lidar offers direct distance measurements, enabling rule-based algorithms, whereas camera-based methods rely on statistical perception. Lidar can also support a more comprehensive and detailed study of lane-keeping. This paper proposes a module that perceives oncoming-vehicle behavior as part of a larger behavior-tree structure for adaptive lane-keeping using data from a lidar sensor. The complete behavior tree would include road curvature, speed limits, road types (rural, urban, interstate), and the proximity of objects or humans to lane markings. It also accounts for lane-keeping behavior, the type of adjacent and opposing vehicles, lane occlusion, and weather conditions. The algorithm was evaluated using
Apple’s mobile phone LiDAR capabilities can be used with multiple software applications to capture the geometry of vehicles and smaller objects. Results from different software applications have previously been researched and compared to traditional ground-based LiDAR; however, they were inconsistent, with some applications more accurate than others. (Technical Paper 2023-01-0614. Miller, Hashemian, Gillihan, Benes.) This paper builds upon that research by utilizing the updated LiDAR hardware Apple added to its iPhone 15 smartphone lineup. This new hardware, in combination with the software application PolyCam, was used to scan a variety of crashed vehicles. The crashed vehicles were also scanned using FARO 3D and Leica RTC 360 scanners, which have been researched extensively for their accuracy. The PolyCam scans were compared to the FARO and Leica scans to determine accuracy of point location and scaling. Previous
Shadow positions can be useful in determining the time of day that a photograph was taken and in determining the position, size, and orientation of an object casting a shadow in a scene. Astronomical equations can predict the location of the sun relative to the earth, and therefore the position of shadows cast by objects, based on the location’s latitude and longitude as well as the date and time. 3D modeling software packages include these calculations in their built-in sun systems. In this paper, the authors examine the sun system in the 3D modeling software 3ds Max to determine its accuracy for use in accident reconstruction. A parking lot was scanned using a FARO LiDAR scanner to create a point cloud of the environment. A camera was then set up on a tripod in the environment, and photographs were taken at various times throughout the day from the same location. The environment was 3D modeled in 3ds Max based on the point cloud, and the sun system in 3ds Max was configured using the
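The underlying astronomy can be sketched compactly. The helper below is a simplified solar-position model (approximate declination, no equation of time or atmospheric refraction), written purely for illustration; it is not the algorithm 3ds Max implements:

```python
import math

def solar_elevation_azimuth(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation and azimuth (degrees) from latitude,
    day of year, and local solar time. Simplified model: no equation of
    time, no refraction -- illustrative only."""
    lat = math.radians(lat_deg)
    # Approximate solar declination for the given day of year
    decl = math.radians(
        -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10))))
    # Hour angle: 15 degrees per hour away from solar noon
    h = math.radians(15.0 * (solar_hour - 12.0))
    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(h))
    elev = math.asin(sin_elev)
    # Azimuth measured from north, clockwise
    az = math.atan2(-math.sin(h) * math.cos(decl),
                    math.cos(lat) * math.sin(decl)
                    - math.sin(lat) * math.cos(decl) * math.cos(h))
    return math.degrees(elev), math.degrees(az) % 360.0
```

A vertical object of height h then casts a shadow of length h / tan(elevation), pointing opposite the sun's azimuth, which is the geometric link between a configured sun system and measurable shadow positions in the photographs.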
The accident reconstruction community frequently uses Terrestrial LiDAR (TLS) to capture accurate 3D images of vehicle accident sites. This paper compares the accuracy, workflow, benefits, and challenges of Unmanned Aerial Vehicle (UAV) LiDAR, or Airborne Laser Scanning (ALS), to TLS. Two roadways with features relevant to accident reconstruction were selected for testing. ALS missions were conducted at an altitude of 175 feet and a velocity of 4 miles per hour at both sites, followed by 3D scanning using TLS. Survey control points were established to minimize error during cloud-to-cloud TLS registration and to ensure accurate alignment of ALS and TLS point clouds. After data capture, the ALS point cloud was analyzed against the TLS point cloud. Approximately 80% of ALS points were within 1.8 inches of the nearest TLS point, with 64.8% at the rural site and 59.7% at the suburban site within 1.2 inches. These findings indicate that UAV-based LiDAR can achieve comparable accuracy to TLS
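The percentage figures above are nearest-neighbor statistics between registered clouds. Assuming both clouds are already aligned in a common frame, the computation reduces to a KD-tree query; this is a sketch, not the authors' exact workflow:

```python
import numpy as np
from scipy.spatial import cKDTree

def within_tolerance(als_points, tls_points, tol):
    """Fraction of ALS points whose nearest TLS neighbor lies within
    tol (same units as the coordinates, e.g. inches)."""
    tree = cKDTree(tls_points)
    dists, _ = tree.query(als_points, k=1)  # nearest TLS point per ALS point
    return float(np.mean(dists <= tol))
```

For the reported numbers, tol would be 1.8 in or 1.2 in with coordinates expressed in inches.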
Light Detection and Ranging (LiDAR) is a promising type of sensor for autonomous driving that utilizes laser technology to provide perception and accurate distance measurements of obstacles in the vehicle's path. In recent years, LiDARs have increasingly been implemented in modern and autonomous vehicles to aid self-driving features. However, navigating adverse weather remains one of the biggest challenges in achieving Level 5 full autonomy due to sensor soiling, leading to performance degradation that can pose safety hazards. When driving in rain, raindrops impact the LiDAR sensor assembly and cause attenuation of signals when the light beams undergo reflections and refractions. Consequently, signal detectability, accuracy, and intensity are significantly affected. To date, limited studies have been able to perform objective evaluations of LiDAR performance, most of which faced limitations that hindered realistic, controllable, and repeatable testing. Therefore, this
Towards the goal of real-time navigation of autonomous robots, Iterative Closest Point (ICP) based LiDAR odometry methods are a favorable class of Simultaneous Localization and Mapping (SLAM) algorithms for their robustness under any lighting conditions. However, even with recent methods, the traditional SLAM challenges persist: odometry drifts under adversarial conditions such as featureless or dynamic environments, as well as rapid robot motion. In this paper, we present a motion-aware continuous-time LiDAR-inertial SLAM framework. We introduce an efficient EKF-ICP sensor fusion solution by loosely coupling poses from the continuous-time ICP and IMU data, designed to improve convergence speed and robustness over existing methods while incorporating a sophisticated motion constraint to maintain accurate localization during rapid motion changes. Our framework is evaluated on the KITTI datasets and artificially motion-induced dataset sequences, demonstrating
LiDAR sensors have become an integral component in the realm of autonomous driving, widely utilized in environmental perception and vehicle navigation. However, in real-world road environments, contaminants such as dust and dirt can severely compromise the cleanliness of LiDAR optical windows, degrading operational performance and the overall environmental perception capabilities of intelligent driving systems. Consequently, maintaining the cleanliness of LiDAR optical windows is crucial for sustaining device performance. Unfortunately, the scarcity of publicly available LiDAR contamination datasets poses a challenge to the research and development of contamination identification algorithms. This paper first introduces a method for acquiring LiDAR contamination datasets: data were collected on open urban roads while simulating different contamination types, including mud and leaves. The constructed dataset meticulously differentiates among the three states with clear labels: no
Vehicle-to-Infrastructure (V2I) cooperation has emerged as a fundamental technology to overcome the limitations of individual ego-vehicle perception. Onboard perception is limited by a lack of information for understanding the environment, a lack of anticipation, performance drops due to occlusions, and the physical limitations of embedded sensors. Cooperative V2I perception extends the perception range of the ego vehicle by receiving information from infrastructure that has another point of view and is equipped with sensors such as cameras and LiDAR. This technical paper presents a perception pipeline developed for the infrastructure based on images with multiple viewpoints. It is designed to be scalable and has five main components: image acquisition, which adjusts camera settings and retrieves pixel data; object detection, for fast and accurate detection of four-wheelers, two-wheelers, and pedestrians; the data fusion module for robust
Roadside perception technology is an essential component of traffic perception technology, primarily relying on various high-performance sensors. Among these, LiDAR stands out as one of the most effective sensors due to its high precision and wide detection range, offering extensive application prospects. This study proposes a voxel density-nearest neighbor background filtering method for roadside LiDAR point cloud data. First, exploiting the relatively fixed nature of roadside background point clouds, a filtering method combining voxel density and nearest neighbors is proposed: the point cloud is voxelized and voxel grid density is used to filter background points; the results are then refined over a sequence of frames by computing each candidate point's average distance to its neighbors and comparing it with a distance threshold to complete accurate background filtering. Second, a VGG16-Pointpillars model is proposed, incorporating a CNN
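The voxel-density stage can be sketched as follows, under the assumption that background voxels are those occupied in most frames of the stationary sensor's stream; the voxel size and occupancy ratio here are hypothetical tuning values, not the paper's:

```python
import numpy as np

def background_voxels(frames, voxel_size=1.0, occupancy_ratio=0.8):
    """Voxels occupied in >= occupancy_ratio of frames are treated as
    static background (roadside geometry is fixed across frames)."""
    counts = {}
    for pts in frames:
        vox = {tuple(v) for v in np.floor(pts[:, :3] / voxel_size).astype(np.int64)}
        for v in vox:
            counts[v] = counts.get(v, 0) + 1
    n = len(frames)
    return {v for v, c in counts.items() if c >= occupancy_ratio * n}

def filter_frame(points, bg, voxel_size=1.0):
    """Keep only points whose voxel is not in the background set."""
    vox = np.floor(points[:, :3] / voxel_size).astype(np.int64)
    keep = np.array([tuple(v) not in bg for v in vox])
    return points[keep]
```

Points surviving this pass (moving vehicles, pedestrians) would then go through the nearest-neighbor distance refinement described above.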
To meet the requirements of high-precision and stable positioning for autonomous driving vehicles in complex urban environments, this paper designs and develops a multi-sensor fusion intelligent driving hardware and software system based on BDS, IMU, and LiDAR. This system aims to fill the current gap in hardware platform construction and practical verification within multi-sensor fusion technology. Although multi-sensor fusion positioning algorithms have made significant progress in recent years, their application and validation on real hardware platforms remain limited. To address this issue, the system integrates BDS dual antennas, IMU, and LiDAR sensors, enhancing signal reception stability through an optimized layout design and improving hardware structure to accommodate real-time data acquisition and processing in complex environments. The system’s software design is based on factor graph optimization algorithms, which use the global positioning data provided by BDS to constrain
In a complex and ever-changing environment, achieving stable and precise SLAM (Simultaneous Localization and Mapping) presents a significant challenge. Existing SLAM algorithms often exhibit design limitations that restrict their performance to specific scenarios, and they are prone to failure under perceptual degradation. SLAM systems should maintain high robustness and accurate state estimation across various environments while minimizing the impact of noise, measurement errors, and external disturbances. This paper proposes a three-stage method for registering LiDAR point clouds. First, a multi-sensor factor graph combines historical poses with IMU pre-integration to provide an a priori pose estimate, and a new planar-feature extraction method describes and filters the local features of the point cloud. Second, the normal distributions transform (NDT) algorithm performs coarse registration. Third, feature-to-feature registration is used for
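The planar-feature extraction step is not detailed in this summary; a generic stand-in is a local-PCA planarity score per point, using the eigenvalues λ1 ≥ λ2 ≥ λ3 of each point's neighborhood covariance (values near 1 indicate planar surfaces, values near 0 linear or scattered structure):

```python
import numpy as np
from scipy.spatial import cKDTree

def planarity_scores(points, k=10):
    """Local-PCA planarity per point: (l2 - l3) / l1 with eigenvalues
    l1 >= l2 >= l3 of the k-neighborhood covariance. A generic proxy
    for planar-feature selection, not the paper's exact method."""
    tree = cKDTree(points)
    _, nbr = tree.query(points, k=k)
    scores = np.empty(len(points))
    for i, idx in enumerate(nbr):
        cov = np.cov(points[idx].T)                # 3x3 covariance
        w = np.sort(np.linalg.eigvalsh(cov))[::-1]  # l1 >= l2 >= l3
        scores[i] = (w[1] - w[2]) / max(w[0], 1e-12)
    return scores
```

Thresholding such a score keeps wall- and ground-like points, which are the stable anchors for the subsequent NDT and feature-to-feature registration stages.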
Modern driverless and driver-assistance systems are developed by sensing the surroundings with a combination of camera, lidar, and related sensors to form an accurate perception of the driving environment. Machine learning algorithms help form this perception and perform planning and control of the vehicle. Vehicle control, and thus safety, depends on trained machine-learning models accurately understanding the surroundings by subdividing a camera image into multiple segments or objects. A semantic segmentation system assigns predefined class labels, such as tree or road, to each pixel of an image. Security attacks on the pixel classification nodes of deep-learning-based segmentation systems cause driver-assistance or autonomous-vehicle safety functionalities to fail due to a falsely formed perception. The security compromises of the pixel classification head of
When autonomous mining trucks operate at the stone-crushing stage, the GPS signal is lost due to blockage by the crushing workshop, so Simultaneous Localization and Mapping (SLAM) becomes critical for ensuring accurate vehicle positioning and smooth operation. However, bumpy road conditions and the scarcity of plane and corner feature points in mining environments pose practical challenges to SLAM algorithms, such as pose jumps and insufficient positioning accuracy. To address this, this paper proposes a high-precision positioning algorithm based on inertial navigation and 3D point cloud signals, incorporating point cloud motion distortion correction, a vehicle roll model, and an Adaptive Kalman Filter (AKF). The goal is to improve the positioning accuracy and stability of autonomous mining trucks in complex scenarios. This paper utilizes real-world operational data from mining vehicles and adopts a 3D point cloud motion distortion correction algorithm to
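As a minimal illustration of the AKF component (the paper's filter is multi-dimensional and fuses inertial and point cloud information), a one-dimensional constant-velocity Kalman filter can adapt its measurement-noise estimate from the recent innovation sequence:

```python
import numpy as np

class AdaptiveKF1D:
    """1D constant-velocity Kalman filter whose measurement noise R is
    rescaled from the recent innovations -- a minimal stand-in for the
    adaptive idea, not the paper's actual filter."""
    def __init__(self, dt=0.1, q=0.01, r=1.0, window=10):
        self.x = np.zeros(2)                       # [position, velocity]
        self.P = np.eye(2)
        self.F = np.array([[1.0, dt], [0.0, 1.0]])
        self.H = np.array([[1.0, 0.0]])
        self.Q = q * np.eye(2)
        self.R = np.array([[float(r)]])
        self.innov, self.window = [], window

    def step(self, z):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Innovation
        y = z - (self.H @ self.x)[0]
        self.innov.append(y)
        if len(self.innov) > self.window:
            self.innov.pop(0)
            # Adapt R so that R + H P H^T matches the sample innovation power
            c = np.mean(np.square(self.innov))
            hph = (self.H @ self.P @ self.H.T)[0, 0]
            self.R[0, 0] = max(c - hph, 1e-6)
        S = (self.H @ self.P @ self.H.T + self.R)[0, 0]
        K = (self.P @ self.H.T / S).ravel()
        self.x = self.x + K * y
        self.P = (np.eye(2) - np.outer(K, self.H)) @ self.P
        return self.x[0]
```

Shrinking R when innovations are small (smooth road) and inflating it when they are large (bumps, pose jumps) is the mechanism that stabilizes the position estimate.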
Light detection and ranging (LiDAR) sensors are increasingly applied to automated driving vehicles. Microelectromechanical systems are an established technology for making LiDAR sensors cost-effective and mechanically robust for automotive applications. These sensors scan their environment using a pulsed laser to record a point cloud. The scanning process distorts objects in the point cloud that have a relative velocity to the sensor. The consecutive generation and processing of points offers the opportunity to enrich the measured object data from LiDAR sensors with velocity information extracted with the help of machine learning, without the need for object tracking, turning it into a so-called 4D LiDAR. This allows object detection, object tracking, and sensor data fusion based on LiDAR sensor data to be optimized. Moreover, this affects all overlying levels of autonomous driving functions and advanced driver assistance systems. However, since such
Cooperative perception has attracted wide attention given its capability to leverage shared information across connected automated vehicles (CAVs) and smart infrastructure to address the occlusion and sensing range limitation issues. To date, existing research is mainly focused on prototyping cooperative perception using only one type of sensor such as LiDAR and camera. In such cases, the performance of cooperative perception is constrained by individual sensor limitations. To exploit the multi-modality of sensors to further improve distant object detection accuracy, in this paper, we propose a unified multi-modal multi-agent cooperative perception framework that integrates camera and LiDAR data to enhance perception performance in intelligent transportation systems. By leveraging the complementary strengths of LiDAR and camera sensors, our framework utilizes the geometry information from LiDAR and the semantic information from cameras to achieve an accurate cooperative perception
This project presents the development of an advanced Autonomous Mobile Robot (AMR) designed to autonomously lift and maneuver four-wheel drive vehicles into parking spaces without human intervention. By leveraging cutting-edge camera and sensor technologies, the AMR integrates LIDAR for precise distance measurements and obstacle detection, high-resolution cameras for capturing detailed images of the parking environment, and object recognition algorithms for accurately identifying and selecting available parking spaces. These integrated technologies enable the AMR to navigate complex parking lots, optimize space utilization, and provide seamless automated parking. The AMR autonomously detects free parking spaces, lifts the vehicle, and parks it with high precision, making the entire parking process autonomous and highly efficient. This project pushes the boundaries of autonomous vehicle technology, aiming to contribute significantly to smarter and more efficient urban mobility systems.
LIDAR-based autonomous mobile robots (AMRs) are increasingly used for gas detection in industry. They detect tiny changes in the composition of indoor environments that are too risky for humans, making them ideal for gas detection. This work focuses on detecting gas leaks and avoiding accidents in industrial settings using an AMR that combines a LIDAR sensor for autonomous navigation with an MQ2 sensor for identifying leaks of toxic and explosive gases, alerting the necessary personnel in real time via a simultaneous localization and mapping (SLAM) algorithm and gas distribution mapping (GDM). GDM, in accordance with the SLAM algorithm, directs the robot toward the leakage point immediately, thereby avoiding accidents. A Raspberry Pi 4 handles data processing, and the hardware uses a PGM45775 DC motor for movement and a 2D LIDAR allowing 360° mapping. The adoption of LIDAR-based AMRs
Object detection (OD) is one of the most important aspects of Autonomous Driving (AD) applications. It depends on the strategic selection and placement of sensors around the vehicle, chosen under constraints such as range, use case, and cost. This paper introduces a systematic approach for identifying optimal practices for selecting sensors for AD object detection, offering guidance for those looking to expand their expertise in this field and select the most suitable sensors accordingly. In general, object detection typically involves RADAR, LiDAR, and cameras. RADAR excels at accurately measuring longitudinal distances over both long and short ranges, but its lateral-distance accuracy is limited. LiDAR is known for providing accurate range data, but it struggles to identify objects in adverse weather conditions. On the other hand, camera-based systems offer superior recognition capabilities but lack
Exactly when sensor fusion occurs in ADAS operations, late or early, impacts the entire system. Governments have been studying Advanced Driver Assistance Systems (ADAS) since at least the late 1980s. Europe's Generic Intelligent Driver Support initiative ran from 1989 to 1992 and aimed “to determine the requirements and design standards for a class of intelligent driver support systems which will conform with the information requirements and performance capabilities of the individual drivers.” Automakers have spent the past 30 years rolling out such systems to the buying public. Toyota and Mitsubishi started offering radar-based cruise control to Japanese drivers in the mid-1990s. Mercedes-Benz took the technology global with its Distronic adaptive cruise control in the 1998 S-Class. Cadillac followed that two years later with FLIR-based night vision on the 2000 DeVille DTS. And in 2003, Toyota launched an automated parallel parking technology called Intelligent Parking Assist on the
In non-cooperative environments, unmanned aerial vehicles (UAVs) must land without artificial markers, a key step towards achieving full autonomy. However, existing vision-based schemes share the problems of poor robustness and generalization, and LiDAR-based schemes suffer from low resolution, high power consumption, and high weight. In this paper, we propose a UAV landing system equipped with a binocular camera to perform 3D reconstruction and select a safe landing zone. The hardware consists only of a stereo camera, and the innovation of the solution is fusing a stereo matching algorithm with a monocular depth estimation (MDE) model to obtain a robust prediction of metric depth. The landing system consists of a stereo matching module, an MDE module, a depth fusion module, and a safe-landing-zone selection module. The stereo matching module uses the Semi-Global Matching (SGM) algorithm to calculate the
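Two of these building blocks follow standard relations: SGM disparities convert to metric depth via the pinhole equation depth = f·B/d, and the scale-ambiguous MDE map can be aligned to the stereo depth before fusion. The median-ratio rule below is a hypothetical fusion choice for illustration, not necessarily the authors':

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Pinhole stereo: depth = f * B / d, with focal length in pixels
    and baseline in meters."""
    if disparity_px <= 0:
        return float("inf")
    return focal_px * baseline_m / disparity_px

def fuse_depths(stereo_depth, mde_depth, valid):
    """Scale the relative MDE depth map to metric via the median ratio
    over stereo-valid pixels, then prefer stereo where it is valid.
    A hypothetical fusion rule, shown only to fix ideas."""
    scale = np.median(stereo_depth[valid] / mde_depth[valid])
    return np.where(valid, stereo_depth, scale * mde_depth)
```

The MDE branch thus fills regions where stereo matching fails (textureless ground, occlusions), which is exactly where a landing-zone selector needs dense depth.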
With the rapid advancement in unmanned aerial vehicle (UAV) technology, the demand for stable and high-precision electro-optical (EO) pods, such as cameras, lidar sensors, and infrared imaging systems, has significantly increased. However, the inherent vibrations generated by the UAV’s propulsion system and aerodynamic disturbances pose significant challenges to the stability and accuracy of these payloads. To address this issue, this paper presents a study on the application of high-static low-dynamic stiffness (HSLDS) vibration isolation devices in EO payloads mounted on UAVs. The HSLDS system is designed to effectively isolate low-frequency and high-amplitude vibrations while maintaining high static stiffness, ensuring both stability during hovering and precise pointing capabilities. A nonlinear dynamic system model with two degrees of freedom is formulated for an EO pod supported by HSLDS isolators at both ends. The model’s natural frequencies are determined, and approximate
In September, after several months of evaluating the market, “Honda Xcelerator Ventures” — the automotive manufacturer’s startup investment subsidiary — made a major investment award to California-based silicon photonics startup SiLC Technologies, Inc., to develop next generation Frequency-Modulated Continuous Wave (FMCW) LiDAR for “all types of mobility.”
Advancements towards autonomous driving have propelled the need for reference/ground-truth data for the development and validation of various functionalities. Traditional data labelling methods are time-consuming, skill-intensive, and have many drawbacks. These challenges are addressed through ALiVA (automatic lidar, image & video annotator), a semi-automated framework assisting event detection and the generation of reference data through annotation/labelling of video and point-cloud data. ALiVA is capable of processing large volumes of camera and lidar sensor data. The main pillars of the framework are object detection-classification models, object tracking algorithms, cognitive algorithms, and annotation-review functionality. Automatic object detection creates a precise bounding box around the area of interest and assigns class labels to annotated objects. Object tracking algorithms track detected objects across video frames, providing a unique object ID for each object and
Southwest Research Institute has developed off-road autonomous driving tools with a focus on stealth for the military and agility for space and agriculture clients. The vision-based system pairs stereo cameras with novel algorithms, eliminating the need for LiDAR and active sensors.
Sensor calibration plays an important role in determining the overall navigation accuracy of an autonomous vehicle (AV). Calibrating the AV’s perception sensors typically involves placing a prominent object in a region visible to the sensors and then taking measurements for further analysis. The analysis involves developing a mathematical model that relates the AV’s perception sensors using the measurements taken of the prominent object. The calibration process has multiple steps that require high precision, which tend to be tedious and time-consuming. Worse, calibration has to be repeated to determine new extrinsic parameters whenever either sensor moves. Extrinsic calibration approaches for LiDAR and camera depend on objects or landmarks with distinct features, like hard edges or large planar faces, that are easy to identify in measurements. The current work proposes a method for extrinsically calibrating a LiDAR and a forward-facing monocular camera using 3D and 2D bounding
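The quantity such an extrinsic calibration recovers is the rigid transform (R, t) from the LiDAR frame to the camera frame; together with the camera intrinsics K it maps LiDAR points to pixels, which is how projected 3D bounding boxes can be compared against 2D ones. A minimal projection sketch (values in the test are hypothetical):

```python
import numpy as np

def project_to_image(points_lidar, R, t, K):
    """Map Nx3 LiDAR points into pixel coordinates given the extrinsic
    rotation R (3x3), translation t (3,), and camera intrinsics K (3x3)."""
    pc = R @ points_lidar.T + t.reshape(3, 1)   # LiDAR frame -> camera frame
    uv = K @ pc                                 # homogeneous pixel coordinates
    return (uv[:2] / uv[2]).T                   # perspective divide
```

Optimizing (R, t) so that projected 3D box corners agree with detected 2D boxes is one way bounding-box-based extrinsic calibration can be framed.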
Lasers developed at the University of Rochester offer a new path for on-chip frequency comb generators. University of Rochester, Rochester, NY Light measurement devices called optical frequency combs have revolutionized metrology, spectroscopy, atomic clocks, and other applications. Yet challenges with developing frequency comb generators at a microchip scale have limited their use in everyday technologies such as handheld electronics. In a study published in Nature Communications, researchers at the University of Rochester describe new microcomb lasers they have developed that overcome previous limitations and feature a simple design that could open the door to a broad range of uses.
Simulation company rFpro has already mapped over 180 digital locations around the world, including public roads, proving grounds and race circuits. But the company's latest is by far its biggest and most complicated. Matt Daley, technical director at rFpro, announced at AutoSens USA 2024 that its new Los Angeles route is an “absolutely massive, complicated model” of a 36-km (22-mile) loop that can be virtually driven in both directions. Along these digital roads - which were built off survey-grade LIDAR data with a 1 cm by 1 cm (0.4-in by 0.4-in) X-Y grid - rFpro has added over 12,000 buildings, 13,000 pieces of street infrastructure (like signs and lamps), and 40,000 pieces of vegetation. “It's a fantastic location,” Daley said. “It's a huge array of different types of challenging infrastructure for AVs. You can drive this loop with full vehicle dynamic inputs, ready to excite the suspension and, especially with AVs, shake the sensors in the correct way as you would be getting if you
You've got regulations, cost and personal preferences all getting in the way of the next generation of automated vehicles. Oh, and those pesky legal issues about who's at fault should something happen. Under all these big issues lie the many small sensors that today's AVs and ADAS packages require. This big/small world is one topic we're investigating in this issue. I won't pretend I know exactly which combination of cameras and radar and lidar sensors works best for a given AV, or whether thermal cameras and new point cloud technologies should be part of the mix. But the world is clearly ready to spend a lot of money figuring these problems out.
To round out this issue's cover story, we spoke with Clement Nouvel, Valeo's chief technical officer for lidar, about Valeo's background in ADAS and what's coming next. Nouvel leads over 300 lidar engineers and the company's third-generation Scala 3 lidar is used on production vehicles from European and Asian automakers. The Scala 3 sensor system scans the area around a vehicle 25 times per second, can detect objects more than 200 meters (656 ft) away with a wide field of vision and operates at speeds of up to 130 km/h (81 mph) on the highway. In 2023, Valeo secured two contracts for Scala 3, one with an Asian manufacturer and the other with a “leading American robotaxi company,” Valeo said in its most-recent annual report. Valeo has now received over 1 billion euros (just under $1.1 billion) in Scala 3 orders. Also in 2023, Valeo and Qualcomm agreed to jointly supply connected displays, clusters, driving assistance technologies and, importantly, sensor technology to two- and three
In pursuit of safety validation of automated driving functions, efforts are being made to accompany real world test drives by test drives in virtual environments. To be able to transfer highly automated driving functions into a simulation, models of the vehicle’s perception sensors such as lidar, radar and camera are required. In addition to the classic pulsed time-of-flight (ToF) lidars, the growing availability of commercial frequency modulated continuous wave (FMCW) lidars sparks interest in the field of environment perception. This is due to advanced capabilities such as directly measuring the target’s relative radial velocity based on the Doppler effect. In this work, an FMCW lidar sensor simulation model is introduced, which is divided into the components of signal propagation and signal processing. The signal propagation is modeled by a ray tracing approach simulating the interaction of light waves with the environment. For this purpose, an ASAM Open Simulation Interface (OSI
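The directly measured radial velocity comes from the FMCW waveform itself: with a triangular chirp, the up- and down-chirp beat frequencies separate into a range-induced term and a Doppler term. A sketch of the standard recovery (sign convention assumed here: positive velocity means an approaching target, which raises the down-chirp beat and lowers the up-chirp beat):

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range_velocity(f_up, f_down, bandwidth, chirp_time, wavelength):
    """Recover range (m) and radial velocity (m/s) from the up- and
    down-chirp beat frequencies (Hz) of a triangular FMCW waveform."""
    f_range = 0.5 * (f_up + f_down)      # range-induced beat frequency
    f_doppler = 0.5 * (f_down - f_up)    # Doppler shift
    rng = C * chirp_time * f_range / (2.0 * bandwidth)
    vel = f_doppler * wavelength / 2.0   # lambda/2 per Hz of Doppler
    return rng, vel
```

A simulation model of such a sensor must reproduce both terms, which is why the signal-propagation (ray tracing) and signal-processing components are modeled separately.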
In the evolving landscape of automated driving systems, the critical role of vehicle localization within the autonomous driving stack is increasingly evident. Traditional reliance on Global Navigation Satellite Systems (GNSS) proves to be inadequate, especially in urban areas where signal obstruction and multipath effects degrade accuracy. Addressing this challenge, this paper details the enhancement of a localization system for autonomous public transport vehicles, focusing on mitigating GNSS errors through the integration of a LiDAR sensor. The approach involves creating a 3D map using the factor graph-based LIO-SAM algorithm, which is further enhanced through the integration of wheel encoder and altitude data. Based on the generated map a LiDAR localization algorithm is used to determine the pose of the vehicle. The FAST-LIO based localization algorithm is enhanced by integrating relative LiDAR Odometry estimates and by using a simple yet effective delay compensation method to
Autonomous Driving is used in various settings, including indoor areas such as industrial halls and warehouses. For perception in these environments, LIDAR is currently very popular due to its high accuracy compared to RADAR and its robustness to varying lighting conditions compared to cameras. However, there is a notable lack of freely available labeled LIDAR data in these settings, and most public datasets, such as KITTI and Waymo, focus on public road scenarios. As a result, specialized publicly available annotation frameworks are rare as well. This work tackles these shortcomings by developing an automated AI-based labeling tool to generate a LIDAR dataset with 3D ground truth annotations for industrial warehouse scenarios. The base pipeline for the annotation framework first upsamples the incoming 16-channel data into dense 64-channel data. The upsampled data is then manually annotated for the defined classes and this annotated 64-channel dataset is used to fine-tune the Part-A2
The global market for automotive LIDAR is expected to grow from $332 million in 2022 to more than $4.5 billion by 2028. That’s solid market growth, particularly given the decades-old challenges of commercializing LIDAR that would be affordable for automotive designs. We interviewed Eric Aguilar, co-founder and CEO of Omnitron Sensors, Los Angeles, CA, to learn about a new MEMS scanning mirror that could accelerate the market adoption of LIDAR.
Robots and autonomous vehicles can use 3D point clouds from LIDAR sensors and camera images to perform 3D object detection. However, current techniques that combine both types of data struggle to accurately detect small objects. Now, researchers from Japan have developed DPPFA–Net, an innovative network that overcomes challenges related to occlusion and noise introduced by adverse weather.
Video of an event recorded from a moving camera contains information not only useful for reconstructing the locations and timing of an event, but also the velocity of the camera attached to the moving object or vehicle. Determining the velocity of a video camera recording from a moving vehicle is useful for determining the vehicle’s velocity and can be compared with speeds calculated through other reconstruction methods, or to data from vehicle speed monitoring devices. After tracking the video, the positions and speeds of other objects within the video can also be determined. Video tracking analysis traditionally has required a site inspection to map the three-dimensional environment. In instances where there have been significant site changes, where there is limited or no site access, and where budgeting and timing constraints exist, a three-dimensional environment can be created using publicly available aerial imagery and aerial LiDAR. This paper presents a methodology for creating
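Once the camera path has been solved against the 3D environment, the camera (and hence vehicle) speed falls out of consecutive solved positions and frame timestamps; a minimal sketch:

```python
import math

def camera_speeds(positions, timestamps):
    """Speed over each interval from solved camera positions (meters)
    and frame timestamps (seconds)."""
    speeds = []
    for p0, p1, t0, t1 in zip(positions, positions[1:],
                              timestamps, timestamps[1:]):
        speeds.append(math.dist(p0, p1) / (t1 - t0))
    return speeds
```

The same computation applied to tracked object positions yields the speeds of other vehicles visible in the video.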
Cellular Vehicle-to-Everything (C-V2X) is considered an enabler for fully automated driving. It can provide the needed information about traffic situations and road users ahead of time compared to the onboard sensors which are limited to line-of-sight detections. This work presents the investigation of the effectiveness of utilizing the C-V2X technology for a valet parking collision mitigation feature. For this study a LiDAR was mounted at the FEV North America parking lot in a hidden intersection with a C-V2X roadside unit. This unit was used to process the LiDAR point cloud and transmit the information of the detected objects to an onboard C-V2X unit. The received data was provided as input to the path planning and controls algorithms so that the onboard controller can make the right decision while approaching the hidden intersection. FEV’s Smart Vehicle Demonstrator was utilized to test the C-V2X setup and the developed algorithms. Test results show that the vehicle was able to
LiDAR sensors play an important role in the perception stack of modern autonomous driving systems. Adverse weather conditions such as rain, fog, and dust, as well as occasional LiDAR hardware faults, may cause the LiDAR to produce point clouds with abnormal patterns such as scattered noise points and uncommon intensity values. In this paper, we propose a novel approach to detect whether a LiDAR is generating an anomalous point cloud by analyzing the point cloud's characteristics. Specifically, we develop a point cloud quality metric based on the LiDAR points' spatial and intensity distributions to characterize the noise level of the point cloud; the metric relies on pure mathematical analysis and does not require any labeling or training as learning-based methods do. The method is therefore scalable and can be quickly deployed, either online to improve autonomy safety by monitoring anomalies in the LiDAR data, or offline to perform in-depth studies of LiDAR behavior over large amounts of data
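The abstract does not give the metric's exact form, but the idea — score a point cloud from the dispersion of its spatial and intensity distributions, with no training — can be illustrated. The sketch below is a hypothetical stand-in, not the paper's metric: it combines the spread of k-nearest-neighbor distances (scattered noise points sit far from their neighbors) with the spread of intensity values (anomalous returns often have uncommon intensities).

```python
import numpy as np

def pointcloud_noise_score(points, intensities, k=4):
    """Heuristic point cloud quality score (higher = noisier).

    Illustrative only: dispersion of local point density plus dispersion of
    return intensity. Pure numpy; no labels or training required.
    """
    pts = np.asarray(points, dtype=float)
    # Pairwise distances (O(n^2) — fine for a sketch; use a KD-tree at scale).
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    knn = np.sort(d, axis=1)[:, :k]          # k nearest-neighbor distances per point
    spatial = knn.mean(axis=1)
    spatial_score = spatial.std() / (spatial.mean() + 1e-9)    # density dispersion
    inten = np.asarray(intensities, dtype=float)
    intensity_score = inten.std() / (inten.mean() + 1e-9)      # intensity dispersion
    return spatial_score + intensity_score

# A tight grid with uniform intensity scores lower than the same grid
# plus scattered outlier points carrying extreme intensities.
rng = np.random.default_rng(0)
grid = np.stack(np.meshgrid(np.arange(10), np.arange(10), [0.0]), -1).reshape(-1, 3)
clean = pointcloud_noise_score(grid, np.full(len(grid), 50.0))
noisy_pts = np.vstack([grid, rng.uniform(-30, 30, size=(15, 3))])
noisy_int = np.concatenate([np.full(len(grid), 50.0), rng.uniform(0, 255, 15)])
noisy = pointcloud_noise_score(noisy_pts, noisy_int)
print(clean < noisy)  # → True
```

A metric of this shape can run online per frame, flagging clouds whose score departs from the sensor's normal operating range.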
Ensuring the safety of vulnerable road users (VRUs) such as pedestrians, users of micro-mobility vehicles, and cyclists is imperative for the commercialization of automated vehicles (AVs) in urban traffic scenarios. City traffic intersections are of particular concern due to the precarious situations VRUs often encounter when navigating these locations, primarily because of the unpredictable nature of urban traffic. Earlier work from the Institute of Automated Vehicles (IAM) has developed and evaluated Driving Assessment (DA) metrics for analyzing car following scenarios. In this work, we extend those evaluations to an urban traffic intersection testbed located in downtown Tempe, Arizona. A multimodal infrastructure sensor setup, comprising a high-density, 128-channel LiDAR and a 720p RGB camera, was employed to collect data during the dusk period, with the objective of capturing data during the transition from daylight to night. In this study, we present and empirically assess the
Accurate and reliable localization in GNSS-denied environments is critical for autonomous driving. Nevertheless, LiDAR-based and camera-based methods are easily affected by adverse weather conditions such as rain, snow, and fog. 4D radar, with its all-weather performance and high resolution, has attracted increasing interest. Currently, there are few localization algorithms based on 4D radar, so there is an urgent need to develop reliable and accurate positioning solutions. This paper introduces RIO-Vehicle, a novel tightly coupled 4D Radar/IMU/vehicle-dynamics estimation method within the factor graph framework. RIO-Vehicle aims to achieve reliable and accurate vehicle state estimation, encompassing position, velocity, and attitude. To enhance the accuracy of relative constraints, we introduce a new integrated IMU/Dynamics pre-integration model that combines a 2D vehicle dynamics model with a 3D kinematics model. Then, we employ a dynamic object removal process to filter out dynamic points from a single 4D
Shadow positions can be useful in determining the time of day that a photograph was taken and in determining the position, size, and orientation of an object casting a shadow in a scene. Astronomical equations can predict the location of the sun relative to the earth, and therefore the position of shadows cast by objects, based on the location's latitude and longitude as well as the date and time. 3D computer software has begun to include these calculations as part of built-in sun systems. In this paper, the authors examine the sun system in the 3D modeling software Blender to determine its accuracy for use in accident reconstruction. A parking lot was scanned using a Faro LiDAR scanner to create a point cloud of the environment. A camera was then set up on a tripod at the site, and photographs were taken at various times throughout the day from the same location. The environment was then 3D modeled in Blender based on the point cloud, and the sun system
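The astronomical equations behind such sun systems can be sketched in a few lines. This is a simplified textbook model — declination from day-of-year, hour angle from local solar time — that ignores the equation of time, longitude offset, and atmospheric refraction, so expect errors of a degree or two; production sun systems such as Blender's use more complete ephemerides.

```python
import math

def sun_elevation_azimuth(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation and azimuth (degrees) for a date/latitude.

    solar_hour is local *solar* time (12.0 = solar noon), not clock time.
    """
    # Solar declination: ±23.44° annual oscillation, zero near the equinoxes.
    decl = math.radians(-23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10))))
    lat = math.radians(lat_deg)
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))   # 15°/hour past solar noon
    sin_el = (math.sin(decl) * math.sin(lat)
              + math.cos(decl) * math.cos(lat) * math.cos(hour_angle))
    el = math.asin(sin_el)
    # Azimuth measured clockwise from north; clamp guards rounding at noon.
    cos_az = (math.sin(decl) - sin_el * math.sin(lat)) / (math.cos(el) * math.cos(lat))
    az = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
    if hour_angle > 0:          # afternoon: sun is west of the meridian
        az = 360.0 - az
    return math.degrees(el), az

# Summer-solstice solar noon at 40°N: elevation ≈ 90° − 40° + 23.44° ≈ 73.4°,
# sun due south (azimuth 180°).
el, az = sun_elevation_azimuth(40.0, 172, 12.0)
print(round(el, 1), round(az, 1))  # → 73.4 180.0
```

Given elevation and azimuth, a shadow's direction is the azimuth plus 180° and its length is the object height divided by tan(elevation).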
This paper addresses the issues of long-term signal loss in localization and cumulative drift in SLAM-based online mapping and localization in autonomous valet parking scenarios. A GPS, INS, and SLAM fusion localization framework is proposed, enabling centimeter-level localization with wide scene adaptability at multiple scales. The framework leverages the coupling of LiDAR and Inertial Measurement Unit (IMU) to create a point cloud map within the parking environment. The IMU pre-integration information is used to provide rough pose estimation for point cloud frames, and distortion correction, line and plane feature extraction are performed for pose estimation. The map is optimized and aligned with a global coordinate system during the mapping process, while a visual Bag-of-Words model is built to remove dynamic features. The fusion of prior map knowledge and various sensors is employed for in-scene localization, where a GPS-fusion Bag-of-Words model is used for vehicle pose
This article presents a novel approach to optimize the placement of light detection and ranging (LiDAR) sensors in autonomous driving vehicles using machine learning. As autonomous driving technology advances, LiDAR sensors play a crucial role in providing accurate collision data for environmental perception. The proposed method employs the deep deterministic policy gradient (DDPG) algorithm, which takes the vehicle’s surface geometry as input and generates optimized 3D sensor positions with predicted high visibility. Through extensive experiments on various vehicle shapes and a rectangular cuboid, the effectiveness and adaptability of the proposed method are demonstrated. Importantly, the trained network can efficiently evaluate new vehicle shapes without the need for re-optimization, representing a significant improvement over classical methods such as genetic algorithms. By leveraging machine learning techniques, this research streamlines the sensor placement optimization process
LiDAR stands for Light Detection and Ranging and works on the principle of reflection of light. LiDAR is one of the sensors, alongside RADAR and cameras, that helps achieve higher levels (Level 3 and above) of autonomous driving capability. As a sensor, LiDAR perceives the environment in 3D by calculating the time of flight of the laser beam transmitted from the LiDAR and reflected from an object, along with the intensity of the reflection. Each frame of perception is plotted as a point cloud. The LiDAR is integrated at the front of the vehicle, precisely in the grille of the car, at a high vantage point from which to perceive the environment and extract the best possible sensor performance. The LiDAR sensor needs to be held within the front panel cutout with a uniform gap and flush condition. However, due to tolerances, the following issues may arise: sensor functional degradation will occur if the sensor is not aligned properly at the center of the cutout, because the view cones
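The time-of-flight principle mentioned above reduces to one formula: the pulse travels to the target and back, so range is half the round-trip time multiplied by the speed of light. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_s):
    """Range to a reflecting object from the laser pulse round-trip time.

    The beam covers the distance twice (out and back), hence the factor of 2.
    """
    return C * round_trip_s / 2.0

# A return arriving 400 ns after emission corresponds to ~60 m of range.
print(round(tof_distance(400e-9), 2))  # → 59.96
```

The same timing budget explains why range resolution demands sub-nanosecond timing: 1 ns of round-trip uncertainty is already about 15 cm of range.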
The fusion of multi-modal perception in autonomous driving plays a pivotal role in vehicle behavior decision-making. However, much of the previous research has predominantly focused on the fusion of Lidar and cameras. Although Lidar offers an ample supply of point cloud data, its high cost and the substantial volume of point cloud data can lead to computational delays. Consequently, investigating perception fusion under the context of 4D millimeter-wave radar is of paramount importance for cost reduction and enhanced safety. Nevertheless, 4D millimeter-wave radar faces challenges including sparse point clouds, limited information content, and a lack of fusion strategies. In this paper, we introduce, for the first time, an approach that leverages Graph Neural Networks to assist in expressing features from 4D millimeter-wave radar point clouds. This approach effectively extracts unstructured point cloud features, addressing the loss of object detection due to sparsity. Additionally, we
In the rapidly evolving era of software and autonomous driving systems, there is a pressing demand for extensive validation and accelerated development. This necessity arises from the need for copious amounts of data to effectively develop and train neural network algorithms, especially for autonomous vehicles equipped with sensor suites encompassing various specialized algorithms, such as object detection, classification, and tracking. To construct a robust system, sensor data fusion plays a vital role. One approach to ensure an ample supply of data is to simulate the physical behavior of sensors within a simulation framework. This methodology guarantees redundancy, robustness, and safety by fusing the raw data from each sensor in the suite, including images, polygons, and point clouds, either on a per-sensor level or on an object level. Creating a physical simulation for a sensor is an extensive and intricate task that demands substantial computational power. Alternatively, another
The positioning system is a key module of autonomous driving. LiDAR SLAM systems face great challenges in scenarios with repetitive and sparse features: without loop closure or measurements from other sensors, odometry match errors and accumulated errors cannot be corrected. This paper proposes a construction method for LiDAR anchor constraints to improve the robustness of the SLAM system in such challenging environments. We propose a robust anchor extraction method that adaptively extracts suitable cylindrical anchors in the environment, such as tree trunks and light poles. Skewed tree trunks are detected by feature differences between laser lines, and boundary points on cylinders are removed to avoid misleading constraints. After the appropriate anchors are detected, a factor graph-based anchor constraint construction method is designed: where direct scans of an anchor are made, direct constraints are constructed, while in positions where the anchor is not directly observed
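The core geometric step — recovering a cylindrical anchor's axis from scan points — can be illustrated with a least-squares circle fit. This is a simplified stand-in for the paper's extraction method (hypothetical function name; it assumes an upright cylinder and projects points onto the ground plane, so it does not handle the skewed trunks the paper addresses):

```python
import math
import numpy as np

def fit_vertical_cylinder(points):
    """Estimate the axis (cx, cy) and radius r of an upright cylinder.

    Projects 3D points onto the xy plane and runs a Kåsa least-squares
    circle fit: x² + y² = 2·cx·x + 2·cy·y + (r² − cx² − cy²), which is
    linear in the unknowns (cx, cy, c).
    """
    xy = np.asarray(points, dtype=float)[:, :2]
    A = np.column_stack([2.0 * xy[:, 0], 2.0 * xy[:, 1], np.ones(len(xy))])
    b = (xy ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = math.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

# Synthetic trunk: ring of points of radius 0.2 m around (3, 4) at varying heights.
theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
trunk = np.column_stack([3.0 + 0.2 * np.cos(theta),
                         4.0 + 0.2 * np.sin(theta),
                         np.linspace(0.0, 2.0, 40)])
cx, cy, r = fit_vertical_cylinder(trunk)
print(round(cx, 2), round(cy, 2), round(r, 2))  # → 3.0 4.0 0.2
```

The fitted axis position is the quantity that would enter a factor graph as an anchor constraint; real scans see only one side of the trunk, which is why the paper's boundary-point removal matters.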
LiDAR and camera fusion has emerged as a promising approach for improving place recognition in robotics and autonomous vehicles. However, most existing approaches treat the sensors separately, overlooking the potential benefits of correlation between them. In this paper, we propose a Cross-Modality Module (CMM) to leverage the potential correlation of LiDAR and camera features for place recognition. Besides, to fully exploit the potential of each modality, we propose a Local-Global Fusion Module to supplement global coarse-grained features with local fine-grained features. Experimental results on public datasets demonstrate that our approach effectively improves the average recall by 2.3%, reaching 98.7%, compared with simply stacking LiDAR and camera features.
In this paper, we introduce a loosely coupled IMU-radar SLAM method based on the 4D millimeter-wave imaging radar in our autonomous vehicles, which outputs a point cloud containing xyz position information and power information. Common point-cloud-based SLAM systems, such as LiDAR SLAM, usually adopt a tightly coupled IMU-LiDAR structure in which the front-end odometry output in turn affects IMU pre-integration; the SLAM system degrades when front-end odometry drift grows larger and larger or a frame of the point cloud fails to match. In our method, we therefore decouple the crossed relationship between IMU and radar odometry: IMU and wheel odometry are fused to generate a rough pose trajectory as the initial guess for front-end registration, rather than taking it directly from the radar-estimated odometry pose — that is, front-end registration is independent of IMU pre-integration. Besides, we empirically propose an idea for judging the front-end registration result to identify match-poor environments and adopt the relative wheel odometry pose instead of