Browse Topic: Lidar
Light Detection and Ranging (LiDAR) is a promising sensor type for autonomous driving that uses laser technology to provide perception and accurate distance measurements of obstacles in the vehicle's path. In recent years, LiDARs have increasingly been fitted to modern and autonomous vehicles to support self-driving features. However, navigating adverse weather remains one of the biggest challenges in achieving Level 5 full autonomy: sensor soiling leads to performance degradation that can pose safety hazards. When driving in rain, raindrops impact the LiDAR sensor assembly and attenuate the signal as the light beams undergo reflections and refractions. Consequently, signal detectability, accuracy, and intensity are significantly affected. To date, few studies have performed objective evaluations of LiDAR performance, and most faced limitations that hindered realistic, controllable, and repeatable testing. Therefore, this
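The rain-attenuation effect described in this abstract is commonly approximated with a Beer-Lambert extinction term in the LiDAR range equation. The sketch below is a simplified illustration only; the mapping from rain rate to extinction coefficient is an assumed toy model, not data from the study.

```python
import math

def received_power(p_tx, target_range_m, rain_rate_mm_h, reflectivity=0.1):
    """Simplified LiDAR range equation with rain attenuation.

    Combines 1/r^2 geometric spreading with a two-way Beer-Lambert
    loss exp(-2*alpha*r). The rain-rate-to-alpha mapping below is an
    assumed illustrative model, not measured data.
    """
    # Assumed empirical extinction model: alpha (1/km) grows with rain rate.
    alpha_per_km = 0.01 + 0.05 * rain_rate_mm_h ** 0.6
    alpha_per_m = alpha_per_km / 1000.0
    r = target_range_m
    return p_tx * reflectivity * math.exp(-2.0 * alpha_per_m * r) / (r ** 2)

clear = received_power(1.0, 100.0, rain_rate_mm_h=0.0)
heavy = received_power(1.0, 100.0, rain_rate_mm_h=25.0)
print(heavy < clear)  # heavier rain attenuates the return
```

Even this toy model reproduces the qualitative behavior the abstract describes: returned intensity drops as rain rate and range increase, which degrades detectability.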
To meet the requirements of high-precision, stable positioning for autonomous driving vehicles in complex urban environments, this paper presents the design and development of a multi-sensor fusion intelligent-driving hardware and software system based on BDS, IMU, and LiDAR. The system aims to fill the current gap in hardware platform construction and practical verification within multi-sensor fusion technology. Although multi-sensor fusion positioning algorithms have made significant progress in recent years, their application and validation on real hardware platforms remain limited. To address this, the system integrates BDS dual antennas, an IMU, and LiDAR sensors, enhancing signal-reception stability through an optimized layout and improving the hardware structure to accommodate real-time data acquisition and processing in complex environments. The system's software design is based on factor graph optimization algorithms, which use the global positioning data provided by BDS to constrain
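The idea of using global BDS fixes to constrain a factor graph can be illustrated with a toy 1D pose graph: odometry factors link consecutive poses, while sparse absolute (GNSS) factors anchor them globally. This is a minimal linear least-squares sketch with assumed weights and measurements, not the paper's implementation.

```python
import numpy as np

# Toy 1D pose graph: 4 poses, odometry (relative) + GNSS (absolute) factors.
odometry = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]   # (i, j, measured x_j - x_i)
gnss = [(0, 0.0), (3, 3.2)]                           # (i, measured x_i)
w_odom, w_gnss = 1.0, 0.5                             # assumed information weights

n = 4
A, b = [], []
for i, j, d in odometry:                # residual: w * (x_j - x_i - d)
    row = np.zeros(n); row[i], row[j] = -w_odom, w_odom
    A.append(row); b.append(w_odom * d)
for i, z in gnss:                       # residual: w * (x_i - z)
    row = np.zeros(n); row[i] = w_gnss
    A.append(row); b.append(w_gnss * z)

x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
print(x)  # odometry drift is pulled toward the global GNSS fixes
```

The GNSS factors play the role the abstract assigns to BDS: they bound the drift that pure relative (IMU/LiDAR odometry) constraints would otherwise accumulate.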
This project presents the development of an advanced Autonomous Mobile Robot (AMR) designed to autonomously lift and maneuver four-wheel drive vehicles into parking spaces without human intervention. By leveraging cutting-edge camera and sensor technologies, the AMR integrates LIDAR for precise distance measurements and obstacle detection, high-resolution cameras for capturing detailed images of the parking environment, and object recognition algorithms for accurately identifying and selecting available parking spaces. These integrated technologies enable the AMR to navigate complex parking lots, optimize space utilization, and provide seamless automated parking. The AMR autonomously detects free parking spaces, lifts the vehicle, and parks it with high precision, making the entire parking process autonomous and highly efficient. This project pushes the boundaries of autonomous vehicle technology, aiming to contribute significantly to smarter and more efficient urban mobility systems.
LIDAR-based autonomous mobile robots (AMRs) are gradually being used for gas detection in industry. They detect tiny changes in the composition of indoor environments that are too risky for humans, making them ideal for gas detection. This work focuses on the basic aspects of gas detection and accident prevention in industrial settings using an AMR equipped with a LIDAR sensor for autonomous navigation and an MQ2 gas sensor for identifying leaks, including toxic and explosive gases; the system can alert the necessary personnel in real time using a simultaneous localization and mapping (SLAM) algorithm and gas distribution mapping (GDM). GDM, working in concert with the SLAM algorithm, directs the robot to the leakage point immediately, helping to avert accidents. A Raspberry Pi 4 handles data processing, while the hardware uses PGM45775 DC motors for movement and a 2D LIDAR providing 360° mapping. The adoption of LIDAR-based AMRs
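The coupling of SLAM poses and gas readings into a distribution map can be sketched very simply: each MQ2 reading is binned into a 2D grid cell at the robot's current SLAM pose, and the hottest cell becomes the next navigation goal. Grid size, cell resolution, and the goal-selection rule below are assumptions for illustration, not the project's code.

```python
import numpy as np

class GasDistributionMap:
    """Minimal gas distribution map: MQ2 readings tagged with SLAM
    poses are accumulated on a 2D grid; the cell with the highest
    mean concentration becomes the next navigation goal."""

    def __init__(self, size=(20, 20), cell_m=0.5):
        self.sum = np.zeros(size)
        self.count = np.zeros(size)
        self.cell_m = cell_m

    def add_reading(self, pose_xy, ppm):
        # Bin the reading into the grid cell under the robot's pose.
        i = int(pose_xy[0] / self.cell_m)
        j = int(pose_xy[1] / self.cell_m)
        self.sum[i, j] += ppm
        self.count[i, j] += 1

    def leak_goal(self):
        # Mean concentration per visited cell; unvisited cells stay 0.
        mean = np.where(self.count > 0, self.sum / np.maximum(self.count, 1), 0.0)
        i, j = np.unravel_index(np.argmax(mean), mean.shape)
        # Return the centre of the hottest cell as the target waypoint.
        return ((i + 0.5) * self.cell_m, (j + 0.5) * self.cell_m)

gdm = GasDistributionMap()
gdm.add_reading((1.0, 1.0), ppm=120)   # background level
gdm.add_reading((4.0, 3.0), ppm=900)   # elevated reading near the leak
print(gdm.leak_goal())  # waypoint at the centre of the hottest cell
```

This captures the behavior the abstract describes: GDM built on top of SLAM poses steers the robot toward the likely leakage point.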
Exactly when sensor fusion occurs in ADAS operations, late or early, impacts the entire system. Governments have been studying Advanced Driver Assistance Systems (ADAS) since at least the late 1980s. Europe's Generic Intelligent Driver Support initiative ran from 1989 to 1992 and aimed “to determine the requirements and design standards for a class of intelligent driver support systems which will conform with the information requirements and performance capabilities of the individual drivers.” Automakers have spent the past 30 years rolling out such systems to the buying public. Toyota and Mitsubishi started offering radar-based cruise control to Japanese drivers in the mid-1990s. Mercedes-Benz took the technology global with its Distronic adaptive cruise control in the 1998 S-Class. Cadillac followed two years later with FLIR-based night vision on the 2000 DeVille DTS. And in 2003, Toyota launched an automated parallel parking technology called Intelligent Parking Assist on the
In non-cooperative environments, unmanned aerial vehicles (UAVs) have to land without artificial markers, a key step towards achieving full autonomy. However, existing vision-based schemes share the problems of poor robustness and generalization, while LiDAR-based schemes suffer from low resolution, high power consumption, and high weight. In this paper, we propose a UAV landing system equipped with a binocular camera to perform 3D reconstruction and select a safe landing zone. The whole system consists only of a stereo camera, and the innovation of the solution is fusing a stereo matching algorithm with a monocular depth estimation (MDE) model to obtain a robust prediction of metric depth. The landing system consists of a stereo matching module, an MDE module, a depth fusion module, and a safe-landing-zone selection module. The stereo matching module uses the Semi-Global Matching (SGM) algorithm to calculate the
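A depth-fusion step of the kind this abstract outlines can be illustrated as a per-pixel weighted blend: SGM stereo depth is trusted where its confidence is high, and the MDE prediction, which is only up to scale, is first scale-aligned to the stereo result and then fills in low-confidence pixels. The confidence threshold and blending rule below are assumptions for illustration, not the paper's method.

```python
import numpy as np

def fuse_depth(stereo_depth, stereo_conf, mde_depth):
    """Fuse SGM stereo depth with a monocular (MDE) prediction.

    stereo_conf is in [0, 1]. The MDE map is relative, so it is first
    scale-aligned to metric stereo depth over confident pixels (median
    ratio), then blended in where stereo confidence is low.
    Assumed fusion rule, for illustration only.
    """
    confident = stereo_conf > 0.5
    # Align the up-to-scale MDE output to metric stereo depth.
    scale = np.median(stereo_depth[confident] / mde_depth[confident])
    mde_metric = mde_depth * scale
    # Confidence-weighted blend: trust stereo where it is reliable.
    return stereo_conf * stereo_depth + (1.0 - stereo_conf) * mde_metric

# Tiny 2x2 example; stereo failed (conf 0) at pixel [0, 1].
stereo = np.array([[10.0, 0.0], [12.0, 11.0]])
conf = np.array([[0.9, 0.0], [0.8, 0.7]])
mde = np.array([[5.0, 5.5], [6.0, 5.6]])
fused = fuse_depth(stereo, conf, mde)
```

The key point, matching the abstract's claim of robustness, is that neither source is used alone: stereo anchors the metric scale, while the MDE model covers regions where matching fails (textureless or occluded areas).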
In September, after several months of evaluating the market, “Honda Xcelerator Ventures” — the automotive manufacturer’s startup investment subsidiary — made a major investment award to California-based silicon photonics startup SiLC Technologies, Inc., to develop next generation Frequency-Modulated Continuous Wave (FMCW) LiDAR for “all types of mobility.”
Southwest Research Institute has developed off-road autonomous driving tools with a focus on stealth for the military and agility for space and agriculture clients. The vision-based system pairs stereo cameras with novel algorithms, eliminating the need for LiDAR and active sensors.
Lasers developed at the University of Rochester offer a new path for on-chip frequency comb generators.
University of Rochester, Rochester, NY
Light measurement devices called optical frequency combs have revolutionized metrology, spectroscopy, atomic clocks, and other applications. Yet challenges with developing frequency comb generators at a microchip scale have limited their use in everyday technologies such as handheld electronics. In a study published in Nature Communications, researchers at the University of Rochester describe new microcomb lasers they have developed that overcome previous limitations and feature a simple design that could open the door to a broad range of uses.
You've got regulations, cost and personal preferences all getting in the way of the next generation of automated vehicles. Oh, and those pesky legal issues about who's at fault should something happen. Under all these big issues lie the many small sensors that today's AVs and ADAS packages require. This big/small world is one topic we're investigating in this issue. I won't pretend I know exactly which combination of cameras and radar and lidar sensors works best for a given AV, or whether thermal cameras and new point cloud technologies should be part of the mix. But the world is clearly ready to spend a lot of money figuring these problems out.
Simulation company rFpro has already mapped over 180 digital locations around the world, including public roads, proving grounds and race circuits. But the company's latest is by far its biggest and most complicated. Matt Daley, technical director at rFpro, announced at AutoSens USA 2024 that its new Los Angeles route is an “absolutely massive, complicated model” of a 36-km (22-mile) loop that can be virtually driven in both directions. Along these digital roads - which were built from survey-grade LIDAR data with a 1 cm by 1 cm (0.4-in by 0.4-in) X-Y grid - rFpro has added over 12,000 buildings, 13,000 pieces of street infrastructure (such as signs and lamps), and 40,000 pieces of vegetation. “It's a fantastic location,” Daley said. “It's a huge array of different types of challenging infrastructure for AVs. You can drive this loop with full vehicle dynamic inputs, ready to excite the suspension and, especially with AVs, shake the sensors in the correct way as you would be getting if you
To round out this issue's cover story, we spoke with Clement Nouvel, Valeo's chief technical officer for lidar, about Valeo's background in ADAS and what's coming next. Nouvel leads over 300 lidar engineers, and the company's third-generation Scala 3 lidar is used on production vehicles from European and Asian automakers. The Scala 3 sensor system scans the area around a vehicle 25 times per second, can detect objects more than 200 meters (656 ft) away with a wide field of vision, and operates at speeds of up to 130 km/h (81 mph) on the highway. In 2023, Valeo secured two contracts for Scala 3, one with an Asian manufacturer and the other with a “leading American robotaxi company,” Valeo said in its most recent annual report. Valeo has now received over 1 billion euros (just under $1.1 billion) in Scala 3 orders. Also in 2023, Valeo and Qualcomm agreed to jointly supply connected displays, clusters, driving assistance technologies and, importantly, sensor technology for two- and three
In the evolving landscape of automated driving systems, the critical role of vehicle localization within the autonomous driving stack is increasingly evident. Traditional reliance on Global Navigation Satellite Systems (GNSS) proves inadequate, especially in urban areas where signal obstruction and multipath effects degrade accuracy. Addressing this challenge, this paper details the enhancement of a localization system for autonomous public transport vehicles, focusing on mitigating GNSS errors through the integration of a LiDAR sensor. The approach involves creating a 3D map using the factor graph-based LIO-SAM algorithm, which is further enhanced through the integration of wheel encoder and altitude data. Based on the generated map, a LiDAR localization algorithm is used to determine the pose of the vehicle. The FAST-LIO based localization algorithm is enhanced by integrating relative LiDAR odometry estimates and by using a simple yet effective delay compensation method to
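A delay compensation step of the kind mentioned can be sketched as motion-model extrapolation: a LiDAR-based pose arrives with a known processing latency, so it is pushed forward along the most recent velocity estimate before being fused. The constant-velocity, constant-yaw-rate model below is an assumed illustration; the paper's actual method may differ.

```python
import math

def compensate_delay(pose, velocity, yaw_rate, delay_s):
    """Extrapolate a delayed (x, y, yaw) pose forward by delay_s
    using a constant-velocity, constant-yaw-rate motion model."""
    x, y, yaw = pose
    vx, vy = velocity                           # body-frame velocities, m/s
    yaw_mid = yaw + 0.5 * yaw_rate * delay_s    # midpoint heading for the arc
    x += (vx * math.cos(yaw_mid) - vy * math.sin(yaw_mid)) * delay_s
    y += (vx * math.sin(yaw_mid) + vy * math.cos(yaw_mid)) * delay_s
    yaw += yaw_rate * delay_s
    return (x, y, yaw)

# A pose measured 100 ms ago, vehicle moving straight ahead at 10 m/s:
print(compensate_delay((0.0, 0.0, 0.0), (10.0, 0.0), 0.0, 0.1))
# -> (1.0, 0.0, 0.0): the stale pose is advanced 1 m along the heading
```

At 10 m/s, even 100 ms of uncompensated latency corresponds to a full metre of position error, which is why such a simple correction is effective.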
The global market for automotive LIDAR is expected to grow from $332 million in 2022 to more than $4.5 billion by 2028. That’s solid market growth, particularly given the decades-old challenge of commercializing LIDAR affordable enough for automotive designs. We interviewed Eric Aguilar, co-founder and CEO of Omnitron Sensors, Los Angeles, CA, to learn about a new MEMS scanning mirror that could accelerate market adoption of LIDAR.
Robots and autonomous vehicles can use 3D point clouds from LIDAR sensors and camera images to perform 3D object detection. However, current techniques that combine both types of data struggle to accurately detect small objects. Now, researchers from Japan have developed DPPFA–Net, an innovative network that overcomes challenges related to occlusion and noise introduced by adverse weather.
Cellular Vehicle-to-Everything (C-V2X) is considered an enabler for fully automated driving. It can provide the needed information about traffic situations and road users ahead of time, in contrast to onboard sensors, which are limited to line-of-sight detections. This work investigates the effectiveness of utilizing C-V2X technology for a valet parking collision mitigation feature. For this study, a LiDAR was mounted together with a C-V2X roadside unit at a hidden intersection in the FEV North America parking lot. This unit was used to process the LiDAR point cloud and transmit information about the detected objects to an onboard C-V2X unit. The received data was provided as input to the path planning and controls algorithms so that the onboard controller could make the right decision while approaching the hidden intersection. FEV’s Smart Vehicle Demonstrator was utilized to test the C-V2X setup and the developed algorithms. Test results show that the vehicle was able to
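The roadside-to-vehicle data flow described above can be sketched as a simple object-list message plus a speed decision on the onboard side. The message fields, thresholds, and function names below are hypothetical, for illustration only; the actual C-V2X payloads follow standardized message sets.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    """Object extracted from the roadside LiDAR point cloud and
    broadcast over C-V2X (hypothetical message layout)."""
    obj_id: int
    x_m: float          # position relative to the hidden intersection
    y_m: float
    speed_mps: float

def approach_speed_limit(objects, ego_distance_m, base_limit_mps=8.0):
    """Onboard decision: slow down near the hidden intersection when
    the roadside unit reports moving cross-traffic (assumed thresholds)."""
    crossing = [o for o in objects if abs(o.y_m) < 15.0 and o.speed_mps > 0.5]
    if crossing and ego_distance_m < 30.0:
        return 2.0          # creep toward the intersection
    return base_limit_mps

# Roadside unit reports one moving object near the intersection:
msgs = [DetectedObject(1, x_m=-5.0, y_m=8.0, speed_mps=4.0)]
print(approach_speed_limit(msgs, ego_distance_m=20.0))  # -> 2.0
```

The point of the sketch is the one the abstract makes: the onboard controller acts on objects it could never see directly, because the roadside LiDAR covers the occluded approach.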