Browse Topic: Lidar

Items (398)
Object detection (OD) is one of the most important aspects of Autonomous Driving (AD) applications. It depends on the strategic selection and placement of sensors around the vehicle. Sensors should be selected based on constraints such as range, use case, and cost limitations. This paper introduces a systematic approach for identifying best practices for selecting sensors for AD object detection, offering guidance for those looking to expand their expertise in this field and select the most suitable sensors accordingly. In general, object detection typically involves utilizing RADAR, LiDAR, and cameras. RADAR excels at accurately measuring longitudinal distances over both long and short ranges, but its lateral-distance accuracy is limited. LiDAR is known for its ability to provide accurate range data, but it struggles to identify objects in various weather conditions. On the other hand, camera-based systems offer superior recognition capabilities but lack
Maktedar, Asrarulhaq; Chatterjee, Mayurika
ABSTRACT Autonomous driving is emerging as the future of transportation. For autonomous driving to be safe and reliable, the perception sensors need sufficient vision in sometimes challenging operating conditions, including dust, dirt, and moisture, or during inclement weather. LiDAR perception sensors used in certain autonomous driving solutions require a clean and dry sensor screen to operate safely and effectively. In this paper, UV-durable hydrophobic (UVH) coatings were developed to improve LiDAR sensing performance. A lab testbed was successfully constructed to evaluate UVH coatings and uncoated control samples for a LiDAR sensor under simulated weathering conditions, including fog, rain, mud, and bugs. In addition, a mobile testbed was developed in partnership with North Dakota State University (NDSU) to evaluate the UVH coatings on an autonomous moving vehicle under different weathering conditions. These UV-durable, easy-to-clean coatings with high optical
Zhao, Yuejun; Hellerman, Edward A.; Lu, Songwei; Selekwa, Majura
ABSTRACT Localization refers to the process of estimating one's location (and often orientation) within an environment. Ground vehicle automation, which offers the potential for substantial safety and logistical benefits, requires accurate, robust localization. Current localization solutions, including GPS/INS, LIDAR, and image registration, are all inherently limited in adverse conditions. This paper presents a method of localization that is robust to most conditions that hinder existing techniques. MIT Lincoln Laboratory has developed a new class of ground penetrating radar (GPR) with a novel antenna array design that allows mapping of the subsurface domain for the purpose of localization. A vehicle driving through the mapped area uses a novel real-time correlation-based registration algorithm to estimate the location and orientation of the vehicle with respect to the subsurface map. A demonstration system has achieved localization accuracy of 2 cm. We also discuss tracking results
Stanley, Byron; Cornick, Matthew; Koechling, Jeffrey
ABSTRACT Autonomous vehicle perception has been widely explored using camera images but is limited with respect to LiDAR point cloud processing. Furthermore, focus is primarily on well-regulated environments, leaving a need for an algorithm that can contextualize dynamic and complex conditions through 3D point cloud representation. In this report, an Echo State Network for LiDAR signal processing is introduced and evaluated for its ability to perform semantic segmentation on unregulated terrains, using the RELLIS-3D open-source dataset. The L-ESN contains 16 parallel reservoirs, with a point cloud processing time of 1.9 seconds, an 83.1% classification rate across 4 classes defining terrain trafficability, no prior feature extraction or normalization, and a training time of 31 minutes. A 2D cost map is generated from the segmented point cloud for integration as a perception node plug-in to system-level navigation architectures. Citation: S. Gardner, M. R. Haider, P. Fiorini, S. Misko
Gardner, S.; Haider, M. R.; Fiorini, P.; Misko, S.; Smereka, J.; Jayakumar, P.; Gorsich, D.; Moradi, L.; Vantsevich, V.
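The L-ESN itself is not reproduced here, but the reservoir-computing idea it builds on is compact. Below is a minimal echo state network sketch in Python (NumPy): a fixed random reservoir whose only trained part is the linear readout. The sizes, spectral radius, and single-reservoir layout are illustrative assumptions, not the L-ESN's 16-parallel-reservoir configuration.

```python
import numpy as np

class EchoStateNetwork:
    """Minimal ESN sketch for sequence classification; settings are illustrative."""
    def __init__(self, n_in, n_res=200, n_out=4, rho=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # scale to spectral radius rho
        self.W, self.W_out = W, None

    def _states(self, X):
        h = np.zeros(self.W.shape[0])
        H = []
        for x in X:                                      # reservoir update per input
            h = np.tanh(self.W_in @ x + self.W @ h)
            H.append(h.copy())
        return np.asarray(H)

    def fit(self, X, Y, ridge=1e-3):
        H = self._states(X)                              # only the readout is trained
        self.W_out = np.linalg.solve(H.T @ H + ridge * np.eye(H.shape[1]), H.T @ Y)

    def predict(self, X):
        return self._states(X) @ self.W_out              # class scores per step
```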
ABSTRACT This paper presents a new terrain traversability mapping method integrated into the Robotic Technology Kernel (RTK) that produces ground slope traversability cost information from LiDAR height maps. These ground slope maps are robust to a variety of off-road scenarios including areas of sparse or dense vegetation. A few simple and computationally efficient heuristics are applied to the ground slope maps to produce cost data that can be directly consumed by existing path planners in RTK, improving the navigation performance in the presence of steep terrain. Citation: J. Ramsey, R. Brothers, J. Hernandez, “Creation of a Ground Slope Mapping Methodology Within the Robotic Technology Kernel for Improved Navigation Performance,” In Proceedings of the Ground Vehicle Systems Engineering and Technology Symposium (GVSETS), NDIA, Novi, MI, Aug. 16-18, 2022
Ramsey, Jackson; Brothers, Robert; Hernandez, Joseph
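A minimal sketch of the core operation this abstract describes: deriving a slope-based traversability cost from a LiDAR height map. The gradient-and-threshold heuristic below is illustrative; RTK's actual heuristics are not published in this excerpt.

```python
import numpy as np

def slope_cost(height_map, cell_size_m, max_slope_deg=30.0):
    """Slope magnitude from a LiDAR height map, mapped to a [0, 1] cost.
    The 30-degree limit is a placeholder threshold."""
    dz_dy, dz_dx = np.gradient(height_map, cell_size_m)    # rise over run per cell
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return np.clip(slope_deg / max_slope_deg, 0.0, 1.0)    # 1.0 = at/over the limit
```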
ABSTRACT Off-road autonomy development is increasingly leveraging simulation for its ability to rapidly test and train new algorithms as well as simulate a wide variety of terrains and environmental conditions. Unstructured off-road environments require modeling complex environmental phenomena, such as LIDAR responses from vegetation. Neya has developed an approach to characterize the variability of measurements of vegetation and approximate the variability of vegetation measurements using that characterization. This method adds a small overhead to existing LIDAR models, works with many types of LIDAR sensor models, and simply requires objects to be tagged in the environment as vegetation for the sensor models to respond appropriately. Citation: R. Mattes, J. Pace, “Fast LIDAR Vegetation Response Modeling in Simulation”, In Proceedings of the Ground Vehicle Systems Engineering and Technology Symposium (GVSETS), NDIA, Novi, MI, Aug. 10-12, 2021
Mattes, Rich; Pace, James
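Neya's fitted variability model is not given in this excerpt; the sketch below only illustrates the general pattern such a LIDAR vegetation response model takes: for rays tagged as hitting vegetation, return a stochastic range or a dropout. The dropout probability and range jitter are placeholder assumptions.

```python
import numpy as np

def vegetation_return(nominal_range_m, rng=np.random.default_rng()):
    """Illustrative stochastic LIDAR return for a vegetation-tagged hit."""
    if rng.random() < 0.2:                  # some rays pass through foliage gaps
        return None                         # no return for this ray
    jitter = rng.normal(0.0, 0.05)          # leaves scatter the beam slightly (m)
    return max(0.0, nominal_range_m + jitter)
```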
ABSTRACT A Model Predictive Control (MPC) LIDAR-based constant speed local obstacle avoidance algorithm has been implemented on rigid terrain and granular terrain in Chrono to examine the robustness of this control method. Provided LIDAR data as well as a target location, a vehicle can route itself around obstacles as it encounters them and arrive at an end goal via an optimal route. Using Chrono, a multibody physics API, this controller has been tested on a complex multibody physics HMMWV model representing the plant in this study. A penalty-based DEM approach is used to model contacts on both rigid ground and granular terrain. We draw conclusions regarding the MPC algorithm performance based on its ability to navigate the Chrono HMMWV on rigid and granular terrain
Haraus, Nicholas; Serban, Radu; Fleischmann, Jonathan
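As a rough illustration of LIDAR-based receding-horizon obstacle avoidance, here is a sampling-style sketch on a kinematic bicycle model. It stands in for, and is much simpler than, the paper's MPC running against a Chrono multibody HMMWV; the wheelbase, speed, and candidate steer set are assumptions.

```python
import numpy as np

def mpc_steer(x, y, yaw, goal, obstacles, v=5.0, dt=0.1, horizon=20):
    """Roll out candidate steer angles at constant speed; pick the lowest-cost
    collision-free rollout. obstacles = [(ox, oy, radius_m), ...] from LIDAR."""
    best_cost, best_steer = np.inf, 0.0
    for steer in np.linspace(-0.5, 0.5, 21):          # candidate steer angles (rad)
        px, py, pyaw, cost = x, y, yaw, 0.0
        for _ in range(horizon):
            pyaw += v * np.tan(steer) / 2.5 * dt      # 2.5 m wheelbase (assumed)
            px += v * np.cos(pyaw) * dt
            py += v * np.sin(pyaw) * dt
            if any(np.hypot(px - ox, py - oy) < r for ox, oy, r in obstacles):
                cost = np.inf                          # rollout hits an obstacle
                break
        cost += np.hypot(px - goal[0], py - goal[1])   # terminal distance-to-goal
        if cost < best_cost:
            best_cost, best_steer = cost, steer
    return best_steer
```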
ABSTRACT Simulation is a critical step in the development of autonomous systems. This paper outlines the development and use of a dynamically linked library for the Mississippi State University Autonomous Vehicle Simulator (MAVS). The MAVS is a library of simulation tools designed to allow for real-time, high performance, ray traced simulation capabilities for off-road autonomous vehicles. It includes features such as automated off-road terrain generation, automatic data labeling for camera and LIDAR, and swappable vehicle dynamics models. Many machine learning tools today leverage Python for development. To use these tools and provide an easy-to-use interface, Python bindings were developed for the MAVS. The need for these bindings and their implementation are described. Citation: C. Hudson, C. Goodin, Z. Miller, W. Wheeler, D. Carruth, “Mississippi State University Autonomous Vehicle Simulation Library”, In Proceedings of the Ground Vehicle Systems Engineering and Technology Symposium
Hudson, Christopher R.; Goodin, Christopher; Miller, Zach; Wheeler, Warren; Carruth, Daniel W.
ABSTRACT Self-driving or autonomous vehicles consist of software and hardware subsystems that perform tasks like sensing, perception, path-planning, vehicle control, and actuation. An error in one of these subsystems may manifest itself in any subsystem to which it is connected. Errors in sensor data propagate through the entire software pipeline from perception to path planning to vehicle control. However, while a small number of previous studies have focused on the propagation of errors in pose estimation or image processing, there has been little prior work on systematic evaluation of the propagation of errors through the entire autonomous architecture. In this work, we present a simulation study of error propagation through an autonomous system and work toward developing appropriate metrics for quantifying the error at both the subsystem and system levels. Finally, we demonstrate how the framework for analyzing error propagation can be applied to the analysis of an autonomous system
Carruth, Daniel W.; Goodin, Christopher; Dabbiru, Lalitha; Scherer, Nicklaus; Jayakumar, Paramsothy
ABSTRACT A Non-linear Model Predictive Controller (NMPC) was developed for an unmanned ground vehicle (UGV). The NMPC uses a particle swarm pattern search algorithm to optimize the control input, which contains a desired steer angle and a desired longitudinal velocity. The NMPC is designed to approach a target whilst avoiding obstacles that are detected using a light detection and ranging sensor (lidar). Since not all obstacles are stationary, an obstacle tracking algorithm is employed to track obstacles. Two point cluster detection algorithms were reviewed, and a constant velocity Kalman filter-based tracking loop was developed. The tracked obstacles’ positions are predicted using a constant velocity model in the NMPC; this allows for avoidance of both stationary and dynamic obstacles
Stamenov, Velislav; Geiger, Stephen; Bevly, David; Balas, Cristian
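The constant-velocity Kalman filter mentioned for obstacle tracking has a standard form; a minimal NumPy predict-update step is sketched below. The state layout and noise tuning are illustrative, not the authors' values.

```python
import numpy as np

def cv_kalman_step(x, P, z, dt, q=0.5, r=0.2):
    """One constant-velocity Kalman step for a tracked lidar obstacle.
    State x = [px, py, vx, vy]; z is a clustered (px, py) detection."""
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
    Q, R = q * np.eye(4), r * np.eye(2)
    x, P = F @ x, F @ P @ F.T + Q                   # predict with the CV model
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x = x + K @ (np.asarray(z) - H @ x)             # update with the measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P
```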
ABSTRACT Accurate terrain mapping is of paramount importance for motion planning and safe navigation in unstructured terrain. LIDAR sensors provide a modality, in the form of a 3D point cloud, that can be used to estimate the elevation map of the surrounding environment. However, working with 3D point cloud data turns out to be challenging. This is primarily due to the unstructured nature of the point clouds, relative sparsity of the data points, occlusions due to negative slopes and obstacles, and the high computational burden of traditional point cloud algorithms. We tackle these problems with the help of a learning-based, efficient data processing approach for vehicle-centric terrain reconstruction using a 3D LIDAR. The 3D LIDAR point cloud is projected on the ground plane, which is processed by a generative adversarial network (GAN) architecture in the form of an image to fill in the missing parts of the terrain heightmap. We train the GAN model on artificially generated datasets
Sutavani, Sarang; Zheng, Andrew; Joglekar, Ajinkya; Smereka, Jonathon; Gorsich, David; Krovi, Venkat; Vaidya, Umesh
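A small sketch of the projection step the abstract describes, before any GAN is involved: rasterizing the LiDAR point cloud onto a ground-plane grid, with unobserved cells left as holes for the inpainting network to fill. The grid extent and resolution are assumptions.

```python
import numpy as np

def pointcloud_to_heightmap(points, extent_m=40.0, res_m=0.25):
    """Rasterize an (N, 3) vehicle-frame point cloud into a 2D heightmap.
    NaN cells mark occluded/missing terrain, i.e. the inpainting targets."""
    n = int(2 * extent_m / res_m)
    hmap = np.full((n, n), np.nan)
    ix = ((points[:, 0] + extent_m) / res_m).astype(int)
    iy = ((points[:, 1] + extent_m) / res_m).astype(int)
    ok = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
    for i, j, z in zip(ix[ok], iy[ok], points[ok, 2]):
        hmap[j, i] = z if np.isnan(hmap[j, i]) else max(hmap[j, i], z)
    return hmap
```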
ABSTRACT Off-road autonomous navigation poses a challenging problem, as the surrounding terrain is usually unknown, the support surface the vehicle must traverse cannot be considered flat, and environmental features (such as vegetation and water) make it difficult to estimate the support surface elevation. This paper will focus on Robotic Research’s suite of off-road autonomous planning and obstacle avoidance tools. Specifically, this paper will provide an overview of our terrain detection system, which utilizes advanced LADAR processing techniques to provide an estimate of the surface. Additionally, it will describe the kino-dynamic off-road planner which can, in real-time, calculate the optimal route, taking into account the support surface, obstacles sensed in the environment, and more. Finally, the paper will explore how these technologies have been applied to a wide variety of different robotic applications
Lacaze, Alberto; Mottern, Edward; Brilhart, Bryan
ABSTRACT The complex future battlefield will require the ability for quick identification of threats in chaotic environments followed by decisive and accurate threat mitigation by lethal force or countermeasure. Integration and synchronization of high bandwidth sensor capabilities into military vehicles is essential to identifying and mitigating the full range of threats. High bandwidth sensors including Radar, Lidar, and electro-optical sensors provide real-time information for active protection systems, advanced lethality capabilities, situational understanding and automation. The raw sensor data from Radar systems can exceed 10 gigabytes per second and high definition video is currently at 4 gigabytes per second with increased resolution standards emerging. The processing and memory management of the real time sensor data assimilated with terrain maps and external communication information requires a high performance electronic architecture with integrated data management. GDLS has
Silveri, Andrew
ABSTRACT For safe navigation through an environment, autonomous ground vehicles rely on sensory inputs such as cameras, LiDAR, and radar for detection and classification of obstacles and impassable terrain. These sensors provide data representing 3D space surrounding the vehicle. Often this data is obscured by dust, precipitation, objects, or terrain, producing gaps in the sensor field of view. These gaps, or occlusions, can indicate the presence of obstacles, negative obstacles, or rough terrain. Because sensors receive no data in these occlusions, sensor data provides no explicit information about what might be found in the occluded areas. To provide the navigation system with a more complete model of the environment, information about the occlusions must be inferred from sensor data. In this paper we show a probabilistic method for mapping point cloud occlusions in real-time and how knowledge of these occlusions can be integrated into an autonomous vehicle obstacle detection and
Bybee, Taylor C.; Ferrin, Jeffrey L.
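A toy 2D version of the occlusion bookkeeping idea: cells along each lidar ray become free, hit cells become occupied, and anything never traversed remains unknown (occluded). The paper's probabilistic, real-time 3D formulation is more involved; this only shows the core ray-marching logic.

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def trace_occlusions(grid, sensor_ij, hit_ij):
    """March a ray across a 2D grid: traversed cells -> FREE, hit cell -> OCCUPIED.
    Cells no ray reaches stay UNKNOWN, flagging possible hidden obstacles."""
    (i0, j0), (i1, j1) = sensor_ij, hit_ij
    steps = max(abs(i1 - i0), abs(j1 - j0), 1)
    for t in np.linspace(0.0, 1.0, steps + 1)[:-1]:
        grid[int(round(i0 + t * (i1 - i0))), int(round(j0 + t * (j1 - j0)))] = FREE
    grid[i1, j1] = OCCUPIED
    return grid

grid = np.full((200, 200), UNKNOWN)   # everything starts occluded
```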
ABSTRACT Robotics makers and application engineers stand to benefit from replacing physical simulation with a digital simulation that can easily represent any number of robots on a terrain and provide ground truth data for comparison with sensor data during analysis. In this research, a digital proxy simulation (DPS) was developed to dynamically simulate any number of articulated robots in real-time using sophisticated robot-environment interaction models. 3D models of the robot and environment objects can be imported or placed conveniently. Parameters of the models can be fine-tuned to mimic the environment with high fidelity. Sensor simulation and control capabilities of the DPS are also highlighted. Common sensors can be simulated including lidar, image sensors, and stereo cameras. Control plugins can be added easily to accomplish complex tasks
Chen, Xi; Barker, Douglas E.; Bacon, James A.; English, James D.
ABSTRACT Cold regions are becoming increasingly more important for off-road vehicle mobility, including autonomous navigation. Most of the time, these regions are covered by snow, and vehicles are forced to operate under active snowfall conditions. In such scenarios, realistic and effective models to predict performance of on-board sensors during snowfalls become of paramount importance. This paper describes a stochastic approach for two-dimensional numerical simulation of dynamic snow scenes that eventually will be used for driving condition visualization and vehicle sensor performance predictions. The model captures realistic snow particle size distribution, terminal near-surface particle speeds, and adequately describes interactions with wind. Citation: S. N. Vecherin, M. E. Tedesche, M. W. Parker, “Dynamic Snowfall Scene Simulations for Autonomous Vehicle Sensor Performance”, In Proceedings of the Ground Vehicle Systems Engineering and Technology Symposium (GVSETS), NDIA, Novi, MI
Vecherin, Sergey N.; Tedesche, Molly E.; Parker, Michael W.
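The paper's fitted particle-size and terminal-speed distributions are not reproduced in this excerpt; the sketch below only illustrates the shape of such a stochastic snowfall model, with gamma-distributed diameters and a power-law fall speed as placeholder assumptions.

```python
import numpy as np

def sample_snowflakes(n, rng=np.random.default_rng()):
    """Illustrative stochastic snow scene inputs: diameters, terminal speeds,
    and horizontal wind drift. All parameters are placeholders."""
    d_mm = rng.gamma(shape=2.0, scale=0.8, size=n)   # flake diameter (mm), assumed
    v_ms = 1.1 * (d_mm ** 0.2)                        # terminal speed (m/s), assumed
    wind = rng.normal(0.0, 0.5, size=n)               # near-surface wind drift (m/s)
    return d_mm, v_ms, wind
```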
In non-cooperative environments, unmanned aerial vehicles (UAVs) have to land without artificial markers, which is a key step towards achieving full autonomy. However, existing vision-based schemes share the problems of poor robustness and generalization, and LiDAR-based schemes have the disadvantages of low resolution, high power consumption and high weight. In this paper, we propose a UAV landing system equipped with a binocular camera to perform 3D reconstruction and select a safe landing zone. The whole system consists only of a stereo camera, and the innovation of the solution is fusing the stereo matching algorithm and a monocular depth estimation (MDE) model to get a robust prediction of the metric depth. The landing system consists of a stereo matching module, a monocular depth estimation (MDE) module, a depth fusion module, and a safe landing zone selection module. The stereo matching module uses the Semi-Global Matching (SGM) algorithm to calculate the
Zhou, YiBiao; Zhang, BiHui
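The stereo-plus-MDE fusion can be sketched with off-the-shelf pieces: OpenCV's StereoSGBM for the stereo term, and any MDE model's relative depth scaled to metric using valid stereo pixels. The median-scaling fusion below is an assumption for illustration, not the paper's fusion module.

```python
import cv2
import numpy as np

def fused_depth(left, right, mono_depth, fx, baseline_m):
    """Metric depth from SGM stereo, with a monocular estimate filling pixels
    where stereo matching failed. left/right are grayscale uint8 images."""
    sgm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
    disp = sgm.compute(left, right).astype(np.float32) / 16.0   # SGBM is fixed-point
    stereo_depth = np.where(disp > 0, fx * baseline_m / np.maximum(disp, 1e-6), 0.0)
    valid = stereo_depth > 0
    # scale the (relative) mono prediction to metric using valid stereo pixels
    scale = np.median(stereo_depth[valid] / np.maximum(mono_depth[valid], 1e-6))
    return np.where(valid, stereo_depth, scale * mono_depth)
```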
With the rapid advancement in unmanned aerial vehicle (UAV) technology, the demand for stable and high-precision electro-optical (EO) pods, such as cameras, lidar sensors, and infrared imaging systems, has significantly increased. However, the inherent vibrations generated by the UAV’s propulsion system and aerodynamic disturbances pose significant challenges to the stability and accuracy of these payloads. To address this issue, this paper presents a study on the application of high-static low-dynamic stiffness (HSLDS) vibration isolation devices in EO payloads mounted on UAVs. The HSLDS system is designed to effectively isolate low-frequency and high-amplitude vibrations while maintaining high static stiffness, ensuring both stability during hovering and precise pointing capabilities. A nonlinear dynamic system model with two degrees of freedom is formulated for an EO pod supported by HSLDS isolators at both ends. The model’s natural frequencies are determined, and approximate
Tian, Yishen; Guo, Gaofeng; Wang, Guangzhao; Wei, Wan; Bao, Lingcong; Dong, Guan; Li, Liujie
The advancements towards autonomous driving have propelled the need for reference/ground-truth data for the development and validation of various functionalities. Traditional data labelling methods are time-consuming, skills-intensive and have many drawbacks. These challenges are addressed through ALiVA (automatic lidar, image & video annotator), a semi-automated framework assisting in event detection and the generation of reference data through annotation/labelling of video & point-cloud data. ALiVA is capable of processing large volumes of camera & lidar sensor data. The main pillars of the framework are object detection-classification models, object tracking algorithms, cognitive algorithms and annotation-results review functionality. Automatic object detection functionality creates a precise bounding box around the area of interest and assigns class labels to annotated objects. Object tracking algorithms track detected objects in video frames, provide a unique object id for each object and
Mardhekar, Amogh; Pawar, Rushikesh; Mohod, Rucha; Shirudkar, Rohit; Hivarkar, Umesh N.
Southwest Research Institute has developed off-road autonomous driving tools with a focus on stealth for the military and agility for space and agriculture clients. The vision-based system pairs stereo cameras with novel algorithms, eliminating the need for LiDAR and active sensors
On the modern battlefield, the Global Positioning System (GPS) can be unreliable in contested environments due to jamming or signal loss. Existing alternatives like high-accuracy inertial navigation systems (INS) are prone to higher drift over distance with platform-dependent performance, while solutions like Visual-Inertial Odometry (VIO) and Lidar-Inertial Odometry (LIO) lack accuracy and robustness on challenging terrain. We present a novel Positioning, Navigation, and Timing (PNT) solution that overcomes these limitations. It integrates inertial measurements from an IMU and doppler measurements from a Frequency Modulated Continuous Wave (FMCW) LiDAR within a non-linear filtering framework to robustly measure motion states, achieving up to a 0.1% Cross Track Error (CTE) rate in real-time. Furthermore, unlike solutions reliant on wheel speed sensors, our method requires no platform-specific information and is resilient to environmental factors such as wheel-slip, dynamic lighting, and
Templeton, Jeremy; Gill, Jasprit Singh; Jakhotia, Anurag; Pazhayampallil, Joel
Goodin, Chris; Carruth, Daniel W.; Dabbiru, Lalitha; Hedrick, Michael; Black, Brandon; Aspin, Zachary; Carrillo, Justin T.; Kaniarz, John
Autonomous vehicle navigation requires signal processing of the vehicle’s sensors to provide meaningful information to the planners such that challenging artifacts like shadows, rare events, obstructive vegetation, etc. are identified properly, avoiding ill-informed navigation. Using a single algorithm such as semantic segmentation of camera images is often not enough to identify those challenging features but can be overcome by processing more than one type of sensor and fusing their results. In this work, semantic segmentation of camera image and LiDAR point cloud signals is performed using Echo State Networks to overcome the challenge of shadows identified as obstructions in off-road terrains. The coordination of algorithms processing multiple sensor signals is shown to avoid unnecessary road obstructions caused by high-contrast shadows for more informed navigational planning
Gardner, S. D.; Hoxie, D.; Bowen, N.; Misko, S.; Haider, M. R.; Smereka, J.; Jayakumar, P.; Vantsevich, V.
Autonomous navigation in off-road terrain requires a perception system that can distinguish between vegetation that can easily be overridden and vegetation that cannot. While many autonomous systems struggle to estimate the navigability of vegetation like sparse grass or small shrubs, in this work we use a new vehicle-embedded force sensor to directly measure override forces as the vehicle drives through vegetation, allowing the perception system to learn the navigability of vegetation based on the corresponding sensor signatures. The override force can be estimated using a neural network trained on a combination of lidar and images, and the resulting force prediction can be used as an input into both local and global path-planning algorithms for autonomous navigation. In this work, we show the results for our force measurements and outline the process for extracting training data to predict override force using RESNET-50
Goodin, Chris; Moore, Marc; Salmon, Ethan; Cole, Mike; Jayakumar, Paramsothy; English, Brittney
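The abstract names RESNET-50 for override-force prediction; a minimal PyTorch regression head on a ResNet-50 backbone is sketched below. Treating the fused lidar/image input as a 3-channel patch and using a single-scalar head are assumptions, not the authors' exact architecture.

```python
import torch.nn as nn
from torchvision.models import resnet50

class ForcePredictor(nn.Module):
    """ResNet-50 backbone regressing a scalar override force (N) from an
    image patch around the vegetation; the input encoding is assumed."""
    def __init__(self):
        super().__init__()
        self.backbone = resnet50(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x):          # x: (B, 3, H, W) patch
        return self.backbone(x).squeeze(-1)
```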
Sensor calibration plays an important role in determining the overall navigation accuracy of an autonomous vehicle (AV). Calibrating the AV’s perception sensors typically involves placing a prominent object in a region visible to the sensors and then taking measurements for further analysis. The analysis involves developing a mathematical model that relates the AV’s perception sensors using the measurements taken of the prominent object. The calibration process has multiple steps that require high precision, which tend to be tedious and time-consuming. Worse, calibration has to be repeated to determine new extrinsic parameters whenever either sensor moves. Extrinsic calibration approaches for LiDAR and camera depend on objects or landmarks with distinct features, like hard edges or large planar faces, that are easy to identify in measurements. The current work proposes a method for extrinsically calibrating a LiDAR and a forward-facing monocular camera using 3D and 2D bounding
Omwansa, Mark; Sharma, Sachin; Meyer, Richard; Brown, Nicholas
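The quantity being calibrated is the rigid transform that maps LiDAR points into the camera. The sketch below shows that projection; an optimizer would tune R and t (via matched 3D/2D bounding boxes, per the abstract) until projected points land in the right pixels.

```python
import numpy as np

def project_lidar_to_image(pts_lidar, R, t, K):
    """Map (N, 3) LiDAR-frame points into pixel coordinates.
    R (3x3) and t (3,) are the extrinsics being calibrated; K is the 3x3
    camera intrinsic matrix."""
    pts_cam = pts_lidar @ R.T + t                # LiDAR frame -> camera frame
    in_front = pts_cam[:, 2] > 0.1               # keep points ahead of the camera
    uvw = pts_cam[in_front] @ K.T
    return uvw[:, :2] / uvw[:, 2:3]              # (u, v) pixel coordinates
```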
To round out this issue's cover story, we spoke with Clement Nouvel, Valeo's chief technical officer for lidar, about Valeo's background in ADAS and what's coming next. Nouvel leads over 300 lidar engineers and the company's third-generation Scala 3 lidar is used on production vehicles from European and Asian automakers. The Scala 3 sensor system scans the area around a vehicle 25 times per second, can detect objects more than 200 meters (656 ft) away with a wide field of vision and operates at speeds of up to 130 km/h (81 mph) on the highway. In 2023, Valeo secured two contracts for Scala 3, one with an Asian manufacturer and the other with a “leading American robotaxi company,” Valeo said in its most-recent annual report. Valeo has now received over 1 billion euros (just under $1.1 billion) in Scala 3 orders. Also in 2023, Valeo and Qualcomm agreed to jointly supply connected displays, clusters, driving assistance technologies and, importantly, sensor technology for two- and three
Dinkel, John
You've got regulations, cost and personal preferences all getting in the way of the next generation of automated vehicles. Oh, and those pesky legal issues about who's at fault should something happen. Under all these big issues lie the many small sensors that today's AVs and ADAS packages require. This big/small world is one topic we're investigating in this issue. I won't pretend I know exactly which combination of cameras and radar and lidar sensors works best for a given AV, or whether thermal cameras and new point cloud technologies should be part of the mix. But the world is clearly ready to spend a lot of money figuring these problems out
Blanco, Sebastian
Simulation company rFpro has already mapped over 180 digital locations around the world, including public roads, proving grounds and race circuits. But the company's latest is by far its biggest and most complicated. Matt Daley, technical director at rFpro, announced at AutoSens USA 2024 that its new Los Angeles route is an “absolutely massive, complicated model” of a 36-km (22-mile) loop that can be virtually driven in both directions. Along these digital roads - which were built off survey-grade LIDAR data with a 1 cm by 1 cm (0.4-in by 0.4-in) X-Y grid - rFpro has added over 12,000 buildings, 13,000 pieces of street infrastructure (like signs and lamps), and 40,000 pieces of vegetation. “It's a fantastic location,” Daley said. “It's a huge array of different types of challenging infrastructure for AVs. You can drive this loop with full vehicle dynamic inputs, ready to excite the suspension and, especially with AVs, shake the sensors in the correct way as you would be getting if you
Blanco, Sebastian
Autonomous Driving is used in various settings, including indoor areas such as industrial halls and warehouses. For perception in these environments, LIDAR is currently very popular due to its high accuracy compared to RADAR and its robustness to varying lighting conditions compared to cameras. However, there is a notable lack of freely available labeled LIDAR data in these settings, and most public datasets, such as KITTI and Waymo, focus on public road scenarios. As a result, specialized publicly available annotation frameworks are rare as well. This work tackles these shortcomings by developing an automated AI-based labeling tool to generate a LIDAR dataset with 3D ground truth annotations for industrial warehouse scenarios. The base pipeline for the annotation framework first upsamples the incoming 16-channel data into dense 64-channel data. The upsampled data is then manually annotated for the defined classes and this annotated 64-channel dataset is used to fine-tune the Part-A2
Abdelhalim, Gina; Simon, Kevin; Bensch, Robert; Parimi, Sai; Qureshi, Bilal Ahmed
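As a geometric baseline for the 16-to-64-channel upsampling step, the sketch below linearly interpolates between beams of a range image; the actual pipeline presumably uses a learned upsampler, so this only shows the shape of the task.

```python
import numpy as np

def upsample_channels(range_image_16, factor=4):
    """Interpolate a (16, W) LIDAR range image up to (16*factor, W) by
    linear interpolation along the beam (row) axis."""
    c16, w = range_image_16.shape
    rows_in = np.arange(c16)
    rows_out = np.linspace(0, c16 - 1, c16 * factor)
    return np.stack([np.interp(rows_out, rows_in, range_image_16[:, j])
                     for j in range(w)], axis=1)    # dense (64, W) range image
```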
In the evolving landscape of automated driving systems, the critical role of vehicle localization within the autonomous driving stack is increasingly evident. Traditional reliance on Global Navigation Satellite Systems (GNSS) proves to be inadequate, especially in urban areas where signal obstruction and multipath effects degrade accuracy. Addressing this challenge, this paper details the enhancement of a localization system for autonomous public transport vehicles, focusing on mitigating GNSS errors through the integration of a LiDAR sensor. The approach involves creating a 3D map using the factor graph-based LIO-SAM algorithm, which is further enhanced through the integration of wheel encoder and altitude data. Based on the generated map a LiDAR localization algorithm is used to determine the pose of the vehicle. The FAST-LIO based localization algorithm is enhanced by integrating relative LiDAR Odometry estimates and by using a simple yet effective delay compensation method to
Kramer, Markus; Beierlein, Georg
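The "simple yet effective delay compensation method" is not specified in this excerpt; a common minimal form, forward-predicting the stale LiDAR localization fix with the current velocity and yaw rate, is sketched below as an assumption.

```python
import numpy as np

def compensate_delay(pose_xy_yaw, v_mps, yaw_rate, delay_s):
    """Propagate a LiDAR pose fix computed delay_s ago to 'now' with a
    constant-velocity/turn-rate model, so control sees the current pose."""
    x, y, yaw = pose_xy_yaw
    x += v_mps * np.cos(yaw) * delay_s     # forward-predict the stale fix
    y += v_mps * np.sin(yaw) * delay_s
    return np.array([x, y, yaw + yaw_rate * delay_s])
```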
In pursuit of safety validation of automated driving functions, efforts are being made to complement real-world test drives with test drives in virtual environments. To transfer highly automated driving functions into a simulation, models of the vehicle’s perception sensors such as lidar, radar and camera are required. In addition to the classic pulsed time-of-flight (ToF) lidars, the growing availability of commercial frequency modulated continuous wave (FMCW) lidars sparks interest in the field of environment perception. This is due to advanced capabilities such as directly measuring the target’s relative radial velocity based on the Doppler effect. In this work, an FMCW lidar sensor simulation model is introduced, which is divided into the components of signal propagation and signal processing. The signal propagation is modeled by a ray tracing approach simulating the interaction of light waves with the environment. For this purpose, an ASAM Open Simulation Interface (OSI
Hofrichter, Kristof; Linnhoff, Clemens; Elster, Lukas; Peters, Steven
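The FMCW capability the abstract highlights, direct radial-velocity measurement from the Doppler effect, reduces to a single relation a simulation must reproduce. A hedged sketch follows; the 1550 nm wavelength is a typical assumption, not the paper's value.

```python
C = 299_792_458.0          # speed of light (m/s)
WAVELENGTH = 1550e-9       # typical FMCW lidar wavelength (m), assumed

def radial_velocity_from_doppler(doppler_shift_hz):
    # v_r = f_d * lambda / 2; the factor 2 is because the wave travels out and back
    return doppler_shift_hz * WAVELENGTH / 2.0

def doppler_from_radial_velocity(v_r_mps):
    return 2.0 * v_r_mps / WAVELENGTH
```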
Robots and autonomous vehicles can use 3D point clouds from LIDAR sensors and camera images to perform 3D object detection. However, current techniques that combine both types of data struggle to accurately detect small objects. Now, researchers from Japan have developed DPPFA–Net, an innovative network that overcomes challenges related to occlusion and noise introduced by adverse weather
The global market for automotive LIDAR is expected to grow from $332 million in 2022 to more than $4.5 billion by 2028. That’s solid market growth, particularly given the decades-old challenges of commercializing LIDAR that would be affordable for automotive designs. We interviewed Eric Aguilar, co-founder and CEO of Omnitron Sensors, Los Angeles, CA, to learn about a new MEMS scanning mirror that could accelerate the market adoption of LIDAR
Accurate and reliable localization in GNSS-denied environments is critical for autonomous driving. Nevertheless, LiDAR-based and camera-based methods are easily affected by adverse weather conditions such as rain, snow, and fog. 4D radar, with its all-weather performance and high resolution, has attracted increasing interest. Currently, there are few localization algorithms based on 4D radar, so there is an urgent need to develop reliable and accurate positioning solutions. This paper introduces RIO-Vehicle, a novel tightly coupled 4D Radar/IMU/vehicle dynamics framework built on a factor graph. RIO-Vehicle aims to achieve reliable and accurate vehicle state estimation, encompassing position, velocity, and attitude. To enhance the accuracy of relative constraints, we introduce a new integrated IMU/Dynamics pre-integration model that combines a 2D vehicle dynamics model with a 3D kinematics model. Then, we employ a dynamic object removal process to filter out dynamic points from a single 4D
Zhu, Jiaqi; Zhuo, Guirong; Xiong, Lu; He, Zihang; Leng, Bo
This paper addresses the issues of long-term signal loss in localization and cumulative drift in SLAM-based online mapping and localization in autonomous valet parking scenarios. A GPS, INS, and SLAM fusion localization framework is proposed, enabling centimeter-level localization with wide scene adaptability at multiple scales. The framework leverages the coupling of LiDAR and Inertial Measurement Unit (IMU) to create a point cloud map within the parking environment. The IMU pre-integration information is used to provide rough pose estimation for point cloud frames, and distortion correction, line and plane feature extraction are performed for pose estimation. The map is optimized and aligned with a global coordinate system during the mapping process, while a visual Bag-of-Words model is built to remove dynamic features. The fusion of prior map knowledge and various sensors is employed for in-scene localization, where a GPS-fusion Bag-of-Words model is used for vehicle pose
Chen, Guoying; Wang, Ziang; Gao, Zheng; Yao, Jun; Wang, Xinyu
LiDAR sensors play an important role in the perception stack of modern autonomous driving systems. Adverse weather conditions such as rain, fog and dust, as well as occasional LiDAR hardware faults, may cause the LiDAR to produce pointclouds with abnormal patterns such as scattered noise points and uncommon intensity values. In this paper, we propose a novel approach to detect whether a LiDAR is generating an anomalous pointcloud by analyzing the pointcloud characteristics. Specifically, we develop a pointcloud quality metric based on the LiDAR points’ spatial and intensity distribution to characterize the noise level of the pointcloud, which relies on pure mathematical analysis and does not require any labeling or training as learning-based methods do. Therefore, the method is scalable and can be quickly deployed either online to improve autonomy safety by monitoring anomalies in the LiDAR data or offline to perform in-depth study of the LiDAR behavior over large amounts of data
Zhang, Chiyu; Han, Ji; Zou, Yao; Dong, Kexin; Li, Yujia; Ding, Junchun; Han, Xiaoling
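The paper's exact metric is not reproduced in this excerpt; the sketch below only illustrates the training-free idea of scoring a pointcloud from its spatial and intensity distributions. Both statistics and their weighting are placeholders, not the published formulation.

```python
import numpy as np

def pointcloud_noise_score(points, intensities, w_intensity=0.02):
    """Label-free noisiness score from an (N, 3) pointcloud and per-point
    intensities; higher means more anomalous. Purely illustrative."""
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    spatial_term = np.std(d) / (np.mean(d) + 1e-9)          # scatter of ranges
    hist, _ = np.histogram(intensities, bins=32)
    p = hist / (hist.sum() + 1e-12)
    intensity_term = -np.sum(p[p > 0] * np.log(p[p > 0]))   # intensity entropy
    return spatial_term + w_intensity * intensity_term
```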
Cellular Vehicle-to-Everything (C-V2X) is considered an enabler for fully automated driving. It can provide the needed information about traffic situations and road users ahead of time compared to the onboard sensors which are limited to line-of-sight detections. This work presents the investigation of the effectiveness of utilizing the C-V2X technology for a valet parking collision mitigation feature. For this study a LiDAR was mounted at the FEV North America parking lot in a hidden intersection with a C-V2X roadside unit. This unit was used to process the LiDAR point cloud and transmit the information of the detected objects to an onboard C-V2X unit. The received data was provided as input to the path planning and controls algorithms so that the onboard controller can make the right decision while approaching the hidden intersection. FEV’s Smart Vehicle Demonstrator was utilized to test the C-V2X setup and the developed algorithms. Test results show that the vehicle was able to
Alzu'bi, Hamzeh; Alrousan, Qusay; Obando, David; Rodriguez Zarazua, Pedro; Tasky, Tom
Shadow positions can be useful in determining the time of day that a photograph was taken and determining the position, size, and orientation of an object casting a shadow in a scene. Astronomical equations can predict the location of the sun relative to the earth, and therefore the position of shadows cast by objects, based on the location’s latitude and longitude as well as the date and time. 3D modeling software has begun to include these calculations as part of built-in sun systems. In this paper, the authors examine the sun system in the 3D modeling software Blender to determine its accuracy for use in accident reconstruction. A parking lot was scanned using a Faro LiDAR scanner to create a point cloud of the environment. A camera was then set up on a tripod in the environment and photographs were taken at various times throughout the day from the same location. This environment was then 3D modeled in Blender based on the point cloud, and the sun system
Barreiro, Evan; Carter, Neal; Hashemian, Alireza
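A simplified solar-position model of the kind such sun systems implement (declination plus hour angle, ignoring the equation of time and atmospheric refraction) can be sketched directly; Blender's implementation may differ, so treat this as a back-of-the-envelope check.

```python
import math

def sun_position(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation and azimuth (degrees) for a latitude,
    day of year, and local solar time. Azimuth: 0 = north, clockwise."""
    decl = math.radians(23.44) * math.sin(2 * math.pi * (284 + day_of_year) / 365)
    lat = math.radians(lat_deg)
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    sin_el = (math.sin(lat) * math.sin(decl)
              + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    el = math.asin(sin_el)
    cos_az = ((math.sin(decl) - math.sin(el) * math.sin(lat))
              / (math.cos(el) * math.cos(lat)))
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    if hour_angle > 0:                     # afternoon: sun west of the meridian
        az = 2 * math.pi - az
    return math.degrees(el), math.degrees(az)
```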
Ensuring the safety of vulnerable road users (VRUs) such as pedestrians, users of micro-mobility vehicles, and cyclists is imperative for the commercialization of automated vehicles (AVs) in urban traffic scenarios. City traffic intersections are of particular concern due to the precarious situations VRUs often encounter when navigating these locations, primarily because of the unpredictable nature of urban traffic. Earlier work from the Institute of Automated Vehicles (IAM) has developed and evaluated Driving Assessment (DA) metrics for analyzing car following scenarios. In this work, we extend those evaluations to an urban traffic intersection testbed located in downtown Tempe, Arizona. A multimodal infrastructure sensor setup, comprising a high-density, 128-channel LiDAR and a 720p RGB camera, was employed to collect data during the dusk period, with the objective of capturing data during the transition from daylight to night. In this study, we present and empirically assess the
Rath, Prabin Kumar; Harrison, Blake; Lu, Duo; Yang, Yezhou; Wishart, Jeffrey; Yu, Hongbin
This article presents a novel approach to optimize the placement of light detection and ranging (LiDAR) sensors in autonomous driving vehicles using machine learning. As autonomous driving technology advances, LiDAR sensors play a crucial role in providing accurate collision data for environmental perception. The proposed method employs the deep deterministic policy gradient (DDPG) algorithm, which takes the vehicle’s surface geometry as input and generates optimized 3D sensor positions with predicted high visibility. Through extensive experiments on various vehicle shapes and a rectangular cuboid, the effectiveness and adaptability of the proposed method are demonstrated. Importantly, the trained network can efficiently evaluate new vehicle shapes without the need for re-optimization, representing a significant improvement over classical methods such as genetic algorithms. By leveraging machine learning techniques, this research streamlines the sensor placement optimization process
Berens, Felix; Ambs, Jordan; Elser, Stefan; Reischl, Markus
LiDAR stands for Light Detection and Ranging. It works on the principle of reflection of light. LiDAR is one of the sensors, alongside RADAR and camera, used to achieve higher levels (Level 3 and above) of autonomous driving capability. As a sensor, LiDAR perceives the environment in 3D by calculating the time of flight of the laser beam transmitted from the LiDAR and the rays reflected from the object, along with the intensity of the reflection. The frame of perception is plotted as a point cloud. The LiDAR is integrated at the front of the vehicle, precisely in the grille of the car, at a high vantage point from which to perceive the environment and extract the best possible sensor performance. The LiDAR sensor needs to be held within the front panel cutout with a uniform gap and flush condition. However, due to tolerances it may have the following issues: sensor functional degradation will happen if it is not aligned properly at the center of the cutout, because the view cones
Pratap, Amit; Rangarej, Sanjeev
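The time-of-flight relation the abstract states is worth writing down: range is half the round-trip distance traveled at the speed of light.

```python
C = 299_792_458.0   # speed of light (m/s)

def tof_range_m(round_trip_time_s):
    """LiDAR ranging: the pulse travels to the object and back,
    so range is half the round-trip distance."""
    return C * round_trip_time_s / 2.0

# e.g. a 667 ns round trip corresponds to roughly 100 m of range
assert abs(tof_range_m(667e-9) - 100.0) < 0.1
```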
The fusion of multi-modal perception in autonomous driving plays a pivotal role in vehicle behavior decision-making. However, much of the previous research has predominantly focused on the fusion of Lidar and cameras. Although Lidar offers an ample supply of point cloud data, its high cost and the substantial volume of point cloud data can lead to computational delays. Consequently, investigating perception fusion under the context of 4D millimeter-wave radar is of paramount importance for cost reduction and enhanced safety. Nevertheless, 4D millimeter-wave radar faces challenges including sparse point clouds, limited information content, and a lack of fusion strategies. In this paper, we introduce, for the first time, an approach that leverages Graph Neural Networks to assist in expressing features from 4D millimeter-wave radar point clouds. This approach effectively extracts unstructured point cloud features, addressing the loss of object detection due to sparsity. Additionally, we
Fan, Lili; Zeng, Changxian; Li, Yunjie; Wang, Xu; Cao, Dongpu
In the rapidly evolving era of software and autonomous driving systems, there is a pressing demand for extensive validation and accelerated development. This necessity arises from the need for copious amounts of data to effectively develop and train neural network algorithms, especially for autonomous vehicles equipped with sensor suites encompassing various specialized algorithms, such as object detection, classification, and tracking. To construct a robust system, sensor data fusion plays a vital role. One approach to ensure an ample supply of data is to simulate the physical behavior of sensors within a simulation framework. This methodology guarantees redundancy, robustness, and safety by fusing the raw data from each sensor in the suite, including images, polygons, and point clouds, either on a per-sensor level or on an object level. Creating a physical simulation for a sensor is an extensive and intricate task that demands substantial computational power. Alternatively, another
Yousif, Ahmed Luay Yousif; Elsobky, Mohamed
LiDAR and camera fusion has emerged as a promising approach for improving place recognition in robotics and autonomous vehicles. However, most existing approaches treat the sensors separately, overlooking the potential benefits of correlation between them. In this paper, we propose a Cross-Modality Module (CMM) to leverage the potential correlation of LiDAR and camera features for place recognition. Besides, to fully exploit the potential of each modality, we propose a Local-Global Fusion Module to supplement global coarse-grained features with local fine-grained features. Experimental results on public datasets demonstrate that our approach effectively improves the average recall by 2.3%, reaching 98.7%, compared with simply stacking LiDAR and camera
Xue, Shijie; Li, Bin; Lu, Fan; Liu, Zhengfa; Chen, Guang
Positioning is a key module of autonomous driving. LiDAR SLAM systems face great challenges in scenarios with repetitive and sparse features. Without loop closure or measurements from other sensors, odometry match errors and accumulated errors cannot be corrected. This paper proposes a construction method for LiDAR anchor constraints to improve the robustness of the SLAM system in such challenging environments. We propose a robust anchor extraction method that adaptively extracts suitable cylindrical anchors in the environment, such as tree trunks and light poles. Skewed tree trunks are detected by feature differences between laser lines. Boundary points on cylinders are removed to avoid misleading associations. After appropriate anchors are detected, a factor graph-based anchor constraint construction method is designed: where the anchor is directly scanned, direct constraints are constructed, while in positions where the anchor is not directly observed
Shen, Xiangxiang; Lu, Xiong; Zhu, Jiaqi; Gao, Letian; Wu, Junxian; Lu, Yishi
In this paper, we introduce a loosely coupled IMU-radar SLAM method based on our 4D millimeter-wave imaging radar, which outputs a pointcloud containing xyz position and power information, for our autonomous vehicles. Common pointcloud-based SLAM, such as lidar SLAM, usually adopts a tightly coupled IMU-lidar structure, in which the front-end odometry output in turn affects IMU preintegration. The SLAM system degrades when front-end odometry drift grows larger and larger or when one frame of pointcloud fails to match. In our method, we therefore decouple the crossed relationship between IMU and radar odometry: IMU and wheel odometry are fused to generate a rough pose trajectory as the initial guess for front-end registration, rather than taking it directly from the radar-estimated odometry pose; that is, front-end registration is independent of IMU preintegration. Besides, we empirically propose an idea for judging the front-end registration result to identify match-poor environments and adopt the relative wheel odometry pose instead of
Zhao, Yingzhong; Lu, Xinfei; Ye, Tingfeng
Advances in perception hardware and software deliver new performance possibilities - and a refreshed vision for passenger-vehicle driving automation. The streets of Munich look different when seen through a Nodar point cloud created by a set of stereo cameras. Nodar's Hammerhead technology uses two standard, automotive-grade CMOS cameras connected like human eyes, but the output is much more than a high-tech Viewmaster. During IAA 2023, Nodar provided test rides through the city's crowded streets to showcase a prototype Hammerhead system displaying live images of the world in front of the vehicle measured by distance. Being able to build a live, 3D point cloud like this is not new, but doing it with two off-the-shelf cameras that can be positioned anywhere on the vehicle and algorithms that accurately measure distance is - particularly without a lidar sensor on board - unusual
Blanco, Sebastian; Visnic, Bill
Light detection and ranging (LiDAR) provides the velocity data about objects and vehicles necessary to enable the decision-making required by navigation systems in autonomous vehicles. However, most LiDAR sensors that have been used in automotive and other mobility applications have been fragile, expensive and unreliable
The Collins Aerospace Optical Ice Detector is a short-range polarimetric cloud lidar designed to detect and discriminate among all types of icing conditions with the use of a single sensor. Recent flight tests of the Optical Ice Detector (OID) aboard a fully instrumented atmospheric research aircraft have allowed comparisons of measurements made by the OID with those of standard cloud research probes. The tests included some icing conditions appropriate to the most recent updates to the icing regulations. Cloud detection, discrimination of mixed phase, and quantification of cloud liquid water content for a cloud within the realm of Appendix C were all demonstrated. The duration of the tests (eight hours total) has allowed the compilation of data from the OID and cloud probes for a more comprehensive comparison. The OID measurements and those of the research probes agree favorably given the uncertainties inherent in these instruments
Anderson, Kaare; Ray, Mark; Jackson, Darren