Browse Topic: Lidar
ABSTRACT Autonomous driving is emerging as the future of transportation. For autonomous driving to be safe and reliable, the perception sensors need sufficient vision in sometimes challenging operating conditions, including dust, dirt, and moisture, or during inclement weather. LiDAR perception sensors used in certain autonomous driving solutions require both a clean and dry sensor screen to operate safely and effectively. In this paper, UV-durable hydrophobic (UVH) coatings were developed to improve LiDAR sensing performance. A lab testbed was successfully constructed to evaluate UVH coatings and uncoated control samples for LiDAR sensors under simulated weathering conditions, including fog, rain, mud, and bugs. In addition, a mobile testbed was developed in partnership with North Dakota State University (NDSU) to evaluate the UVH coatings on an autonomous moving vehicle under different weathering conditions. These UV-durable, easy-to-clean coatings with high optical …
ABSTRACT Localization refers to the process of estimating one's location (and often orientation) within an environment. Ground vehicle automation, which offers the potential for substantial safety and logistical benefits, requires accurate, robust localization. Current localization solutions, including GPS/INS, LIDAR, and image registration, are all inherently limited in adverse conditions. This paper presents a method of localization that is robust to most conditions that hinder existing techniques. MIT Lincoln Laboratory has developed a new class of ground penetrating radar (GPR) with a novel antenna array design that allows mapping of the subsurface domain for the purpose of localization. A vehicle driving through the mapped area uses a novel real-time correlation-based registration algorithm to estimate the location and orientation of the vehicle with respect to the subsurface map. A demonstration system has achieved localization accuracy of 2 cm. We also discuss tracking results …
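The abstract does not give the registration algorithm itself; as a rough illustration of correlation-based registration in general (not MIT Lincoln Laboratory's method), the sketch below estimates where a small scan sits inside a prior subsurface map by locating the cross-correlation peak. The array shapes and synthetic data are assumptions.

```python
# Illustrative correlation-based registration, NOT the paper's algorithm:
# find the offset of a live scan inside a prior map at the correlation peak.
import numpy as np
from scipy.signal import correlate

def register_scan(prior_map: np.ndarray, live_scan: np.ndarray):
    """Return the (row, col) offset of live_scan within prior_map."""
    # Zero-mean both images so the peak reflects structure, not DC bias.
    m = prior_map - prior_map.mean()
    s = live_scan - live_scan.mean()
    corr = correlate(m, s, mode="valid")  # 2D cross-correlation surface
    return np.unravel_index(np.argmax(corr), corr.shape)

# Synthetic check: a patch cut from a random "subsurface" registers to its origin.
rng = np.random.default_rng(0)
subsurface = rng.normal(size=(100, 100))
print(register_scan(subsurface, subsurface[40:45, 60:65]))  # -> (40, 60)
```

A real system would also search over orientation and work at sub-cell resolution; this sketch recovers integer translation only.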
ABSTRACT This paper presents a new terrain traversability mapping method integrated into the Robotic Technology Kernel (RTK) that produces ground slope traversability cost information from LiDAR height maps. These ground slope maps are robust to a variety of off-road scenarios including areas of sparse or dense vegetation. A few simple and computationally efficient heuristics are applied to the ground slope maps to produce cost data that can be directly consumed by existing path planners in RTK, improving the navigation performance in the presence of steep terrain. Citation: J. Ramsey, R. Brothers, J. Hernandez, “Creation of a Ground Slope Mapping Methodology Within the Robotic Technology Kernel for Improved Navigation Performance,” In Proceedings of the Ground Vehicle Systems Engineering and Technology Symposium (GVSETS), NDIA, Novi, MI, Aug. 16-18, 2022
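The paper's specific heuristics are not reproduced in the abstract; the sketch below shows one plausible form of the idea, turning a LiDAR height map into a slope-based traversability cost grid. The cell size and the 30-degree lethal slope are illustrative assumptions, not RTK parameters.

```python
# Hedged sketch of a slope-to-cost heuristic for a LiDAR height map.
import numpy as np

def slope_cost_map(height: np.ndarray, cell_m: float = 0.25,
                   max_slope_deg: float = 30.0) -> np.ndarray:
    """Map a 2D height grid (meters) to traversability cost in [0, 1]."""
    # Finite-difference gradients give rise/run along each grid axis.
    dzdy, dzdx = np.gradient(height, cell_m)
    slope_deg = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    # Linear cost up to the assumed tip-over limit, then fully lethal.
    return np.clip(slope_deg / max_slope_deg, 0.0, 1.0)

# Usage: a ramp rising 0.1 m per 0.25 m cell is about a 22-degree slope.
ramp = np.tile(np.arange(20) * 0.1, (20, 1))
print(slope_cost_map(ramp).max().round(2))  # ~0.73
```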
ABSTRACT Simulation is a critical step in the development of autonomous systems. This paper outlines the development and use of a dynamically linked library for the Mississippi State University Autonomous Vehicle Simulator (MAVS). The MAVS is a library of simulation tools designed to allow for real-time, high-performance, ray-traced simulation capabilities for off-road autonomous vehicles. It includes features such as automated off-road terrain generation, automatic data labeling for camera and LIDAR, and swappable vehicle dynamics models. Many machine learning tools today leverage Python for development. To use these tools and provide an easy-to-use interface, Python bindings were developed for the MAVS. The need for these bindings and their implementation is described. Citation: C. Hudson, C. Goodin, Z. Miller, W. Wheeler, D. Carruth, “Mississippi State University Autonomous Vehicle Simulation Library”, In Proceedings of the Ground Vehicle Systems Engineering and Technology Symposium …
ABSTRACT Off-road autonomous navigation poses a challenging problem, as the surrounding terrain is usually unknown, the support surface the vehicle must traverse cannot be considered flat, and environmental features (such as vegetation and water) make it difficult to estimate the support surface elevation. This paper will focus on Robotic Research’s suite of off-road autonomous planning and obstacle avoidance tools. Specifically, this paper will provide an overview of our terrain detection system, which utilizes advanced LADAR processing techniques to provide an estimate of the surface. Additionally, it will describe the kinodynamic off-road planner, which can, in real time, calculate the optimal route, taking into account the support surface, obstacles sensed in the environment, and more. Finally, the paper will explore how these technologies have been applied to a wide variety of robotic applications.
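As a point of reference for what support-surface estimation must contend with, the sketch below shows a common baseline approach (not Robotic Research's LADAR processing): take a low percentile of return heights per grid cell, so that vegetation returns above the ground are largely rejected. Cell size and percentile are illustrative assumptions.

```python
# Hedged baseline for support-surface estimation under vegetation.
import numpy as np

def support_surface(points: np.ndarray, cell_m: float = 0.5, pct: float = 5.0):
    """Estimate ground elevation per grid cell from an Nx3 point cloud.

    Vegetation returns sit above the ground, so a low percentile of the
    heights in each cell approximates the true support surface."""
    ij = np.floor(points[:, :2] / cell_m).astype(int)  # cell index per point
    surface = {}
    for key in np.unique(ij, axis=0):
        in_cell = np.all(ij == key, axis=1)
        surface[tuple(key)] = np.percentile(points[in_cell, 2], pct)
    return surface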
ABSTRACT The complex future battlefield will require the ability to quickly identify threats in chaotic environments, followed by decisive and accurate threat mitigation by lethal force or countermeasure. Integration and synchronization of high-bandwidth sensor capabilities into military vehicles is essential to identifying and mitigating the full range of threats. High-bandwidth sensors including Radar, Lidar, and electro-optical sensors provide real-time information for active protection systems, advanced lethality capabilities, situational understanding, and automation. The raw sensor data from Radar systems can exceed 10 gigabytes per second, and high-definition video is currently at 4 gigabytes per second with increased resolution standards emerging. The processing and memory management of the real-time sensor data, assimilated with terrain maps and external communication information, requires a high-performance electronic architecture with integrated data management. GDLS has …
ABSTRACT Robotics makers and application engineers stand to benefit from replacing physical simulation with a digital simulation that can easily represent any number of robots on a terrain and provide ground truth data for comparison with sensor data during analysis. In this research, a digital proxy simulation (DPS) was developed to dynamically simulate any number of articulated robots in real-time using sophisticated robot-environment interaction models. 3D models of the robot and environment objects can be imported or placed conveniently. Parameters of the models can be fine-tuned to mimic the environment with high fidelity. Sensor simulation and control capabilities of the DPS are also highlighted. Common sensors can be simulated including lidar, image sensors, and stereo cameras. Control plugins can be added easily to accomplish complex tasks
ABSTRACT Cold regions are becoming increasingly more important for off-road vehicle mobility, including autonomous navigation. Most of the time, these regions are covered by snow, and vehicles are forced to operate under active snowfall conditions. In such scenarios, realistic and effective models to predict performance of on-board sensors during snowfalls become of paramount importance. This paper describes a stochastic approach for two-dimensional numerical simulation of dynamic snow scenes that eventually will be used for driving condition visualization and vehicle sensor performance predictions. The model captures realistic snow particle size distribution, terminal near-surface particle speeds, and adequately describes interactions with wind. Citation: S. N. Vecherin, M. E. Tedesche, M. W. Parker, “Dynamic Snowfall Scene Simulations for Autonomous Vehicle Sensor Performance”, In Proceedings of the Ground Vehicle Systems Engineering and Technology Symposium (GVSETS), NDIA, Novi, MI
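As a toy illustration of the ingredients the abstract lists (a particle size distribution, terminal near-surface fall speeds, and wind interaction), here is a minimal 2D snowfall scene stepper. The gamma size distribution, the power-law terminal speed, and the relaxation-to-wind drag constant are all illustrative assumptions, not the paper's calibrated model.

```python
# Hedged sketch of a stochastic 2D snowfall scene (x horizontal, z vertical).
import numpy as np

rng = np.random.default_rng(1)
N, DT = 5000, 0.05                                  # particles, time step (s)
diam = rng.gamma(shape=2.0, scale=1.0, size=N)      # flake diameter (mm), assumed
v_term = 0.8 * diam ** 0.2                          # terminal fall speed (m/s), assumed
pos = rng.uniform([0, 0], [100, 50], size=(N, 2))   # x (m), z (m)
vel = np.column_stack([np.zeros(N), -v_term])

def step(pos, vel, wind_x=2.0, tau=0.5):
    """Advance one frame: relax horizontal speed toward the wind, fall at
    terminal speed, and recycle flakes that reach the ground."""
    vel[:, 0] += (wind_x - vel[:, 0]) * DT / tau    # drag toward wind speed
    pos += vel * DT
    landed = pos[:, 1] < 0
    pos[landed, 1] = 50.0                           # re-seed at "cloud" height
    pos[landed, 0] = rng.uniform(0, 100, landed.sum())
    return pos, vel

for _ in range(200):                                # simulate 10 s of snowfall
    pos, vel = step(pos, vel)
```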
In non-cooperative environments, unmanned aerial vehicles (UAVs) have to land without artificial markers, which is a key step towards achieving full autonomy. However, existing vision-based schemes have the common problems of poor robustness and generalization, and LiDAR-based schemes have the disadvantages of low resolution, high power consumption, and high weight. In this paper, we propose a UAV landing system equipped with a binocular camera to perform 3D reconstruction and select the safe landing zone. The whole system consists only of a stereo camera, and the innovation of the solution is fusing a stereo matching algorithm with a monocular depth estimation (MDE) model to obtain a robust prediction of metric depth. The landing system consists of a stereo matching module, an MDE module, a depth fusion module, and a safe landing zone selection module. The stereo matching module uses the Semi-Global Matching (SGM) algorithm to calculate the …
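The abstract truncates before the fusion details; the sketch below illustrates the general shape of such a pipeline, using OpenCV's StereoSGBM as a stand-in for the SGM module and a placeholder mde_depth array in place of the network output. The focal length, baseline, and global scale-fitting step are assumptions, not the paper's method.

```python
# Hedged sketch: scale-correct relative MDE depth with sparse metric stereo depth.
import cv2
import numpy as np

FX, BASELINE = 700.0, 0.12   # focal length (px) and baseline (m); assumed values

def fuse_depth(left_gray, right_gray, mde_depth):
    """Fuse SGM stereo depth (metric, sparse) with MDE depth (relative, dense)."""
    sgm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disp = sgm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    valid = disp > 1.0                                  # confident stereo pixels
    stereo_depth = np.where(valid, FX * BASELINE / np.maximum(disp, 1e-3), 0.0)
    # Fit one global scale so the relative MDE depth agrees with stereo where valid.
    scale = np.median(stereo_depth[valid] / np.maximum(mde_depth[valid], 1e-6))
    fused = scale * mde_depth                           # dense metric prediction
    fused[valid] = stereo_depth[valid]                  # trust stereo where available
    return fused
```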
Southwest Research Institute has developed off-road autonomous driving tools with a focus on stealth for the military and agility for space and agriculture clients. The vision-based system pairs stereo cameras with novel algorithms, eliminating the need for LiDAR and active sensors
Autonomous vehicle navigation requires signal processing of the vehicle’s sensors to provide meaningful information to the planners, so that challenging artifacts such as shadows, rare events, and obstructive vegetation are identified properly and ill-informed navigation is avoided. A single algorithm, such as semantic segmentation of camera images, is often not enough to identify these challenging features, but the limitation can be overcome by processing more than one type of sensor and fusing the results. In this work, semantic segmentation of camera images and LiDAR point clouds is performed using Echo State Networks to overcome the challenge of shadows being identified as obstructions in off-road terrain. The coordination of algorithms processing multiple sensor signals is shown to avoid unnecessary road obstructions caused by high-contrast shadows, enabling more informed navigational planning.
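The trained segmentation model is not described in the abstract; the sketch below shows only the core echo state network recurrence that gives the approach its name, with a leaky-integrator reservoir whose linear readout would be trained separately (typically by ridge regression). Sizes and constants are illustrative.

```python
# Hedged sketch of the echo state network recurrence (reservoir only).
import numpy as np

class ESN:
    def __init__(self, n_in, n_res=300, rho=0.9, leak=0.3, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # fixed input weights
        W = rng.normal(size=(n_res, n_res))
        # Rescale to spectral radius rho so the echo state property holds.
        W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
        self.W, self.leak = W, leak
        self.x = np.zeros(n_res)

    def step(self, u):
        # Leaky-integrator update: x <- (1-a)x + a*tanh(W_in u + W x)
        pre = np.tanh(self.W_in @ u + self.W @ self.x)
        self.x = (1 - self.leak) * self.x + self.leak * pre
        return self.x   # reservoir state fed to a trained linear readout
```

Only the readout is trained; the reservoir weights stay fixed, which is what keeps ESNs cheap enough to run per frame alongside a second segmentation algorithm for fusion.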
To round out this issue's cover story, we spoke with Clement Nouvel, Valeo's chief technical officer for lidar, about Valeo's background in ADAS and what's coming next. Nouvel leads over 300 lidar engineers, and the company's third-generation Scala 3 lidar is used on production vehicles from European and Asian automakers. The Scala 3 sensor system scans the area around a vehicle 25 times per second, can detect objects more than 200 meters (656 ft) away with a wide field of view, and operates at speeds of up to 130 km/h (81 mph) on the highway. In 2023, Valeo secured two contracts for Scala 3, one with an Asian manufacturer and the other with a “leading American robotaxi company,” Valeo said in its most recent annual report. Valeo has now received over 1 billion euros (just under $1.1 billion) in Scala 3 orders. Also in 2023, Valeo and Qualcomm agreed to jointly supply connected displays, clusters, driving assistance technologies and, importantly, sensor technology for two- and three…
You've got regulations, cost and personal preferences all getting in the way of the next generation of automated vehicles. Oh, and those pesky legal issues about who's at fault should something happen. Under all these big issues lie the many small sensors that today's AVs and ADAS packages require. This big/small world is one topic we're investigating in this issue. I won't pretend I know exactly which combination of cameras and radar and lidar sensors works best for a given AV, or whether thermal cameras and new point cloud technologies should be part of the mix. But the world is clearly ready to spend a lot of money figuring these problems out
Simulation company rFpro has already mapped over 180 digital locations around the world, including public roads, proving grounds and race circuits. But the company's latest is by far its biggest and most complicated. Matt Daley, technical director at rFpro, announced at AutoSens USA 2024 that its new Los Angeles route is an “absolutely massive, complicated model” of a 36-km (22-mile) loop that can be virtually driven in both directions. Along these digital roads - which were built off survey-grade LIDAR data with a 1 cm by 1 cm (0.4-in by 0.4-in) X-Y grid - rFpro has added over 12,000 buildings, 13,000 pieces of street infrastructure (like signs and lamps), and 40,000 pieces of vegetation. “It's a fantastic location,” Daley said. “It's a huge array of different types of challenging infrastructure for AVs. You can drive this loop with full vehicle dynamic inputs, ready to excite the suspension and, especially with AVs, shake the sensors in the correct way as you would be getting if you …
In the evolving landscape of automated driving systems, the critical role of vehicle localization within the autonomous driving stack is increasingly evident. Traditional reliance on Global Navigation Satellite Systems (GNSS) proves to be inadequate, especially in urban areas where signal obstruction and multipath effects degrade accuracy. Addressing this challenge, this paper details the enhancement of a localization system for autonomous public transport vehicles, focusing on mitigating GNSS errors through the integration of a LiDAR sensor. The approach involves creating a 3D map using the factor-graph-based LIO-SAM algorithm, which is further enhanced through the integration of wheel encoder and altitude data. Based on the generated map, a LiDAR localization algorithm is used to determine the pose of the vehicle. The FAST-LIO-based localization algorithm is enhanced by integrating relative LiDAR odometry estimates and by using a simple yet effective delay compensation method to …
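The abstract names but does not define the delay compensation method; one generic stand-in is to extrapolate the delayed LiDAR pose forward by the measured latency under a constant-velocity assumption, as sketched below. The 2D pose/twist form is an assumption for brevity, not the paper's formulation.

```python
# Hedged sketch of constant-velocity delay compensation for a delayed pose.
import numpy as np

def compensate_delay(pose_xyyaw, twist_vxvyw, delay_s):
    """Advance a delayed 2D pose (x, y, yaw) by a body-frame twist (vx, vy, w)."""
    x, y, yaw = pose_xyyaw
    vx, vy, w = twist_vxvyw
    # Rotate the body-frame velocity into the map frame, then integrate.
    x += (vx * np.cos(yaw) - vy * np.sin(yaw)) * delay_s
    y += (vx * np.sin(yaw) + vy * np.cos(yaw)) * delay_s
    return np.array([x, y, yaw + w * delay_s])

# Usage: a pose computed 80 ms ago, vehicle moving straight ahead at 10 m/s.
print(compensate_delay((5.0, 2.0, 0.0), (10.0, 0.0, 0.0), 0.08))  # ~[5.8, 2.0, 0.0]
```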
Robots and autonomous vehicles can use 3D point clouds from LIDAR sensors and camera images to perform 3D object detection. However, current techniques that combine both types of data struggle to accurately detect small objects. Now, researchers from Japan have developed DPPFA–Net, an innovative network that overcomes challenges related to occlusion and noise introduced by adverse weather
The global market for automotive LIDAR is expected to grow from $332 million in 2022 to more than $4.5 billion by 2028. That’s solid market growth, particularly given the decades-old challenges of commercializing LIDAR that would be affordable for automotive designs. We interviewed Eric Aguilar, co-founder and CEO of Omnitron Sensors, Los Angeles, CA, to learn about a new MEMS scanning mirror that could accelerate the market adoption of LIDAR
Cellular Vehicle-to-Everything (C-V2X) is considered an enabler for fully automated driving. It can provide the needed information about traffic situations and road users ahead of time, compared to onboard sensors, which are limited to line-of-sight detections. This work presents an investigation of the effectiveness of utilizing C-V2X technology for a valet parking collision mitigation feature. For this study, a LiDAR was mounted at the FEV North America parking lot at a hidden intersection with a C-V2X roadside unit. This unit was used to process the LiDAR point cloud and transmit the information of the detected objects to an onboard C-V2X unit. The received data was provided as input to the path planning and controls algorithms so that the onboard controller can make the right decision while approaching the hidden intersection. FEV’s Smart Vehicle Demonstrator was utilized to test the C-V2X setup and the developed algorithms. Test results show that the vehicle was able to …
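As a rough mock-up of the data path described (roadside detections forwarded to an onboard unit), the sketch below packs a LiDAR object list into a UDP payload. A real deployment would use the C-V2X PC5 interface and a standardized message set rather than this ad hoc format; the field layout and port are invented for illustration.

```python
# Hedged mock of the RSU-to-vehicle object list, carried over plain UDP.
import socket
import struct

OBJ_FMT = "<Ifff"   # object id, x (m), y (m), speed (m/s) in the RSU frame; assumed

def send_detections(detections, addr=("127.0.0.1", 4950)):
    """Roadside unit: broadcast LiDAR-detected objects as one packed payload."""
    payload = b"".join(struct.pack(OBJ_FMT, *d) for d in detections)
    socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(payload, addr)

def recv_detections(sock):
    """Onboard unit: unpack the object list for the path planning algorithms."""
    data, _ = sock.recvfrom(4096)
    size = struct.calcsize(OBJ_FMT)
    return [struct.unpack(OBJ_FMT, data[i:i + size])
            for i in range(0, len(data), size)]
```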
Advances in perception hardware and software deliver new performance possibilities - and a refreshed vision for passenger-vehicle driving automation. The streets of Munich look different when seen through a Nodar point cloud created by a set of stereo cameras. Nodar's Hammerhead technology uses two standard, automotive-grade CMOS cameras connected like human eyes, but the output is much more than a high-tech Viewmaster. During IAA 2023, Nodar provided test rides through the city's crowded streets to showcase a prototype Hammerhead system displaying live images of the world in front of the vehicle measured by distance. Being able to build a live, 3D point cloud like this is not new, but doing it with two off-the-shelf cameras that can be positioned anywhere on the vehicle and algorithms that accurately measure distance is - particularly without a lidar sensor on board - unusual
Light detection and ranging (LiDAR) provides the velocity data about objects and vehicles that navigation systems in autonomous vehicles need for decision-making. However, most LiDAR sensors used in automotive and other mobility applications to date have been fragile, expensive, and unreliable.