
Autonomous Vehicle Multi-Sensors Localization in Unstructured Environment

FEV North America Inc.-Qusay Alrousan, Hamzeh Alzu'bi, Andrew Pfeil, Tom Tasky
  • Technical Paper
  • 2020-01-1029
To be published on 2020-04-14 by SAE International in United States
Autonomous driving in unstructured environments is a significant challenge because key localization cues, such as lane markings, are inconsistent or absent. To reduce the uncertainty of vehicle localization in such environments, sensor fusion of LiDAR, radar, camera, GPS/IMU, and odometry sensors is utilized. This paper discusses a hybrid localization technique developed using LiDAR-based Simultaneous Localization and Mapping (SLAM), GPS/IMU and odometry data, and object lists from radar and camera sensors. An Extended Kalman Filter (EKF) fuses data from all sensors in two phases. In the preliminary stage, the SLAM-based vehicle coordinates are fused with the GPS-based positioning. The output of this stage is then fused with the object-based localization. This approach was successfully tested on FEV’s Smart Vehicle Demonstrator at FEV’s headquarters, a complex test environment with dynamic and static objects. The test results show that multi-sensor fusion improves the vehicle’s localization compared to GPS or LiDAR alone.
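
A minimal sketch of the two-phase fusion structure the abstract describes, assuming a planar constant-velocity state model and position-only measurements. The paper's actual models, states, and noise tuning are not public, so the class name, gains, and values below are illustrative assumptions; with linear models the filter reduces to a standard Kalman filter, and the paper's nonlinear measurement models would be linearized at the update step.

    import numpy as np

    class PlanarEKF:
        """Toy planar filter: state = [x, y, vx, vy], constant velocity."""
        def __init__(self, q=0.1, r=1.0):
            self.x = np.zeros(4)                 # position and velocity
            self.P = np.eye(4)                   # state covariance
            self.Q = q * np.eye(4)               # process noise (assumed)
            self.R = r * np.eye(2)               # measurement noise (assumed)
            self.H = np.array([[1, 0, 0, 0],
                               [0, 1, 0, 0]], float)  # observe position only

        def predict(self, dt):
            F = np.eye(4)
            F[0, 2] = F[1, 3] = dt               # x += vx*dt, y += vy*dt
            self.x = F @ self.x
            self.P = F @ self.P @ F.T + self.Q

        def update(self, z):
            y = z - self.H @ self.x              # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P

    ekf = PlanarEKF()
    ekf.predict(dt=0.1)
    ekf.update(np.array([1.02, 0.98]))   # phase 1: SLAM/GPS position fix
    ekf.update(np.array([1.05, 1.01]))   # phase 2: object-list-based fix
    print(ekf.x)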

Real-time Motion Classification of LiDAR Point Detection for Automated Vehicles

Hanyang University-Chansoo Kim, Sungjin Cho, Myoungho Sunwoo
Konkuk University-Kichun Jo
  • Technical Paper
  • 2020-01-0703
To be published on 2020-04-14 by SAE International in United States
Light Detection and Ranging (LiDAR) is becoming an essential sensor for autonomous vehicles. A LiDAR describes the vehicle's surrounding environment in the form of a point cloud. The decision-making system of an autonomous car can plan a safe and comfortable maneuver by utilizing the detected LiDAR point cloud. If the movement class (dynamic or static) of each detected point is available, the decision-making system can plan the appropriate motion of the autonomous vehicle according to the movement of surrounding objects. This paper proposes a real-time process to segment the motion states of LiDAR points. The basic principle of the classification algorithm is to classify the point-wise movement of a target point cloud against other point clouds and the corresponding sensor poses. First, a fixed-size buffer stores the LiDAR point clouds and sensor poses over a constant time window. Second, motion beliefs of the target point cloud are estimated against each point cloud and sensor pose in the buffer. Each motion belief of the…
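
A sketch of the buffered, point-wise motion-labeling idea: keep the last K scans (already transformed into a common world frame using the sensor poses) in a fixed-size buffer, and call a point "dynamic" if it lacks nearby neighbors in enough of the buffered scans. The buffer length, radius, and voting rule below are illustrative assumptions, not the paper's belief formulation.

    import numpy as np
    from collections import deque
    from scipy.spatial import cKDTree

    K = 10                     # buffer length (time window), assumed
    NEAR = 0.3                 # neighbor radius in meters, assumed
    buffer = deque(maxlen=K)   # fixed-size buffer of past world-frame scans

    def motion_labels(target_scan, vote_ratio=0.5):
        """Return a boolean array per target point: True = dynamic."""
        if not buffer:
            return np.zeros(len(target_scan), dtype=bool)
        votes = np.zeros(len(target_scan))
        for past_scan in buffer:
            tree = cKDTree(past_scan)
            dist, _ = tree.query(target_scan)   # nearest past point per target point
            votes += (dist > NEAR)              # no static evidence -> dynamic vote
        return votes / len(buffer) > vote_ratio # fuse per-scan beliefs by voting

    # Usage: push world-frame scans as they arrive, then label the newest one.
    static_wall = np.random.rand(100, 3)
    for _ in range(K):
        buffer.append(static_wall + np.random.randn(100, 3) * 0.01)
    scan = static_wall.copy()
    scan[0] += 5.0                              # simulate one moving point
    print(motion_labels(scan)[:3])              # first point flagged dynamic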

Joint Calibration of Dual LiDARs and Camera using a Circular Chessboard

Tongji University-Zhenwen Deng, Lu Xiong, Dong Yin, Fengwu Shan
  • Technical Paper
  • 2020-01-0098
To be published on 2020-04-14 by SAE International in United States
Environment perception is a crucial subsystem in autonomous vehicles. In order to make road transportation safe and efficient, many approaches have been proposed to build accurate, robust, and real-time perception systems. Cameras and LiDARs are widely mounted on self-driving cars and have been paired with many algorithms in recent years. Fusing camera and LiDAR data provides state-of-the-art methods for environmental perception, compensating for the limitations of any single vehicular sensor. Extrinsic parameter calibration aligns the coordinate systems of the sensors and has been drawing enormous attention. However, unlike the spatial alignment of data from two sensors, joint calibration of multiple sensors (three or more devices) must balance the degree of alignment among all of them. In this paper, we assemble a test platform made up of dual LiDARs and a monocular camera, matching the sensing hardware architecture of the intelligent sweeper designed by our laboratory. We then propose a joint calibration method using a circular chessboard. The center of the circular chessboard is detected in the camera image to obtain its pixel coordinates…
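
A sketch of one step the abstract names: locating the circular target's center in the camera image in pixel coordinates. The paper's detector is not public; a standard Hough circle transform stands in for it here, run on a synthetic image so the snippet is self-contained, and the Hough parameters are assumptions that would need tuning on real imagery. The corresponding LiDAR-side fit and the extrinsic solve are omitted.

    import cv2
    import numpy as np

    # Synthetic stand-in for a camera frame containing the circular target.
    img = np.full((480, 640), 255, np.uint8)
    cv2.circle(img, (320, 240), 60, 0, thickness=8)

    blur = cv2.GaussianBlur(img, (9, 9), 2)
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=100, param2=30,
                               minRadius=30, maxRadius=120)
    if circles is not None:
        cx, cy, r = circles[0][0]
        print(f"circle center (pixels): ({cx:.1f}, {cy:.1f}), radius {r:.1f}")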

LiDAR and Camera-based Convolutional Neural Network Detection for Autonomous Driving

National Research Council Canada-Ismail Hamieh, Ryan Myers, Hisham Nimri, Taufiq Rahman
University of Windsor-Aarron Younan, Brad Sato, Abdul El-Kadri, Selwan Nissan, Kemal Tepe
  • Technical Paper
  • 2020-01-0136
To be published on 2020-04-14 by SAE International in United States
Autonomous vehicles are currently a subject of great interest, and there is heavy research on creating and improving algorithms for detecting objects in their vicinity. Object classification and detection are crucial tasks that need to be solved accurately and robustly in order to achieve higher automation levels. Current approaches for classification and detection use either cameras or light detection and ranging (LiDAR) sensors. Cameras can work at high frame rates and provide dense information over a long range under good illumination and fair weather. LiDARs scan the environment with their own emitted pulses of laser light, so they are only marginally affected by external lighting conditions, and they provide accurate distance measurements. However, they have a limited range, typically between 10 and 100 m, and provide sparse data. A ROS-based deep learning approach has been developed to detect objects using point cloud data. From the encoded raw camera and LiDAR data, several basic statistics such as elevation and density are generated. The system leverages a simple and fast convolutional neural network (CNN) solution for object classification and…
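
A sketch of the "basic statistics" encoding the abstract mentions: project the LiDAR point cloud into a 2D bird's-eye-view grid and compute per-cell elevation and density channels, which a CNN can then consume like an image. The grid extents and resolution are illustrative assumptions, not the paper's configuration.

    import numpy as np

    RES, XMAX, YMAX = 0.2, 40.0, 20.0          # meters per cell, grid extents
    H, W = int(XMAX / RES), int(2 * YMAX / RES)

    def encode_bev(points):
        """points: (N, 3) array of x (forward), y (left), z (up)."""
        elev = np.zeros((H, W), np.float32)     # max height per cell
        dens = np.zeros((H, W), np.float32)     # point count per cell
        mask = ((points[:, 0] >= 0) & (points[:, 0] < XMAX)
                & (np.abs(points[:, 1]) < YMAX))
        pts = points[mask]
        rows = (pts[:, 0] / RES).astype(int)
        cols = ((pts[:, 1] + YMAX) / RES).astype(int)
        for r, c, z in zip(rows, cols, pts[:, 2]):
            elev[r, c] = max(elev[r, c], z)
            dens[r, c] += 1.0
        return np.stack([elev, np.log1p(dens)], axis=0)  # 2-channel CNN input

    cloud = np.random.rand(5000, 3) * [40, 40, 2] - [0, 20, 0]
    print(encode_bev(cloud).shape)              # (2, 200, 200)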

LiDAR-Based Classification Optimization of Localization Policies of Autonomous Vehicles

National Research Council Canada-Ismail Hamieh, Ryan Myers, Taufiq Rahman
  • Technical Paper
  • 2020-01-1028
To be published on 2020-04-14 by SAE International in United States
People, through many years of experience, have developed a great intuitive sense for navigation and spatial awareness, and with this intuition they are able to apply a nearly rules-based approach to their driving. With the transition to autonomous driving, these intuitive skills need to be taught to the system, which makes perception the most fundamental and critical task. One of the major challenges for autonomous vehicles is accurately knowing the position of the vehicle relative to the world frame. Currently, this is achieved either by utilizing expensive sensors, such as differential GPS, which provides centimeter accuracy, or by using computationally taxing algorithms that attempt to match live input data from LiDARs or cameras to previously recorded data or maps. Within this paper, an algorithm and accompanying hardware stack are proposed to reduce the computational load of localizing the robot relative to a prior map. The principle of the software stack is to leverage deep learning and powerful filters to classify landmark objects within a LiDAR scan. These landmarks…
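
A sketch of why landmarks cut the computational load: instead of matching full scans to a map, match a handful of classified landmark centroids (in the vehicle frame) to their known map positions and solve the 2D rigid transform in closed form (Kabsch/Umeyama). Known data association is assumed for brevity; the paper's classifier and filters are not shown.

    import numpy as np

    def pose_from_landmarks(obs, map_pts):
        """obs, map_pts: (N, 2) corresponding landmark positions."""
        oc, mc = obs.mean(axis=0), map_pts.mean(axis=0)
        Hm = (obs - oc).T @ (map_pts - mc)         # cross-covariance
        U, _, Vt = np.linalg.svd(Hm)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                   # guard against reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mc - R @ oc                            # map = R @ obs + t
        return R, t

    # Usage with simulated observations of three known landmarks.
    map_pts = np.array([[10.0, 2.0], [14.0, -3.0], [20.0, 5.0]])
    theta, t_true = 0.1, np.array([1.0, -0.5])
    Rt = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
    obs = (map_pts - t_true) @ Rt                  # landmarks in vehicle frame
    R, t = pose_from_landmarks(obs, map_pts)
    print(np.degrees(np.arctan2(R[1, 0], R[0, 0])), t)  # recovered pose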

LiDAR Data Segmentation in Off-Road Environment Using Convolutional Neural Networks (CNN)

Mississippi State University-Lalitha Dabbiru, Chris Goodin, Nicklaus Scherrer, Daniel Carruth
  • Technical Paper
  • 2020-01-0696
To be published on 2020-04-14 by SAE International in United States
Recent developments in the area of autonomous vehicle navigation have emphasized algorithm development for the characterization of LiDAR 3D point-cloud data. The LiDAR sensor data provide a detailed understanding of the environment surrounding the vehicle for safe navigation. However, LiDAR point cloud datasets need point-level labels, which require a significant amount of annotation effort. We present a framework which generates simulated, labeled point cloud data. The simulated LiDAR data was generated by a physics-based platform, the Mississippi State University Autonomous Vehicle Simulator (MAVS). In this work, we have developed and tested algorithms for autonomous ground vehicle off-road navigation. The MAVS framework generates 3D point clouds of off-road environments that include trails and trees. The important first step in off-road autonomous navigation is the accurate segmentation of 3D point cloud data to identify potential obstacles in the vehicle path. We have used the simulated LiDAR data to segment and detect obstacles using Convolutional Neural Networks (CNN). Our analysis is based on SqueezeSeg, a CNN-based model for point cloud segmentation. The CNN has been trained with the…
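
A sketch of the spherical ("range image") projection that SqueezeSeg-style models consume: each 3D point maps to a (row, col) pixel by its elevation and azimuth angles, giving a dense 2D tensor a CNN can segment. The 64x512 resolution and vertical field of view below are illustrative assumptions typical of a 64-beam LiDAR, not necessarily MAVS's configuration.

    import numpy as np

    H, W = 64, 512
    FOV_UP, FOV_DOWN = np.radians(2.0), np.radians(-24.8)

    def spherical_projection(points):
        """points: (N, 3). Returns (5, H, W): x, y, z, range, occupancy."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        rng = np.linalg.norm(points, axis=1) + 1e-8
        yaw = np.arctan2(y, x)                          # azimuth angle
        pitch = np.arcsin(z / rng)                      # elevation angle
        col = ((0.5 * (1 - yaw / np.pi)) * W).astype(int) % W
        row = ((FOV_UP - pitch) / (FOV_UP - FOV_DOWN) * H).astype(int)
        keep = (row >= 0) & (row < H)                   # inside vertical FOV
        img = np.zeros((5, H, W), np.float32)
        img[:, row[keep], col[keep]] = np.stack(
            [x[keep], y[keep], z[keep], rng[keep], np.ones(keep.sum())])
        return img

    cloud = np.random.randn(10000, 3) * [10, 10, 1]
    print(spherical_projection(cloud).shape)            # (5, 64, 512)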

Human-Driving Highway Overtake and Its Perceived Comfort: Correlational Study Using Data Fusion

Fiat Research Center-Giovanni Gabiati, Isabella Camuffo, Massimo Grillo
Politecnico di Torino-Massimiliana Carello, Alessandro Ferraris, Henrique de Carvalho Pinheiro, Diego Cruz Stanke
  • Technical Paper
  • 2020-01-1036
To be published on 2020-04-14 by SAE International in United States
As an era of autonomous driving approaches, it is necessary to translate handling comfort, currently a responsibility of human drivers, into a vehicle-embedded algorithm. It is therefore imperative to understand the relationship between perceived driving comfort and human driving behaviour. This paper develops a methodology able to generate the information necessary to study how this relationship is expressed in highway overtakes. To achieve this goal, the approach revolved around the implementation of sensor data fusion, processing data from CAN, camera, and LiDAR gathered in experimental tests. A myriad of variables was available, requiring the identification of the key information and parameters for recognition, classification, and understanding of the manoeuvres. The paper presents the methodology and the role each sensor plays, expanding on three main steps: data segregation and parameter selection; manoeuvre detection and processing; and manoeuvre classification and database generation. It also describes the testing setup and the subsequent statistical analysis. MATLAB was chosen to perform all the steps, serving as an all-in-one environment equipped with the necessary toolboxes and libraries to perform filtering, camera perception,…
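
A sketch of the manoeuvre-detection step: flag candidate lane changes by thresholding the smoothed rate of change of a lateral-offset signal. The paper's pipeline is in MATLAB with fused CAN/camera/LiDAR inputs; this Python stand-in uses a synthetic signal purely to illustrate the segmentation idea, and the sample rate and thresholds are assumptions.

    import numpy as np

    FS = 20.0                                  # sample rate, Hz (assumed)
    t = np.arange(0, 30, 1 / FS)
    # Synthetic lateral offset: a 3.5 m lane change between t = 10 s and 14 s.
    offset = np.where((t > 10) & (t < 14), (t - 10) / 4 * 3.5, 0.0)
    offset = np.where(t >= 14, 3.5, offset) + np.random.randn(t.size) * 0.05

    kernel = np.ones(10) / 10                  # 0.5 s moving-average smoother
    smooth = np.convolve(offset, kernel, mode="same")
    rate = np.gradient(smooth, 1 / FS)         # lateral velocity, m/s

    active = np.abs(rate) > 0.4                # lateral-velocity gate (assumed)
    edges = np.flatnonzero(np.diff(active.astype(int)))
    for start, end in zip(edges[::2], edges[1::2]):
        print(f"lane-change candidate: {t[start]:.1f}s - {t[end]:.1f}s")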

Analysis of LiDAR and Camera Data in Real-World Weather Conditions for Autonomous Vehicle Operations

Western Michigan University-Nick Goberville, Mohammad El-Yabroudi, Mark Omwanas, Johan Rojas, Rick Meyer, Zachary Asher, Ikhlas Abdel-Qader
  • Technical Paper
  • 2020-01-0093
To be published on 2020-04-14 by SAE International in United States
Autonomous vehicle technology has the potential to improve the safety, efficiency, and cost of our current transportation system by removing human error. The sensors available today make the development of these vehicles possible; however, autonomous vehicle operation in adverse weather conditions (e.g., snow-covered roads, heavy rain, fog) remains an issue due to the degradation of sensor data quality. Since autonomous vehicles rely entirely on sensor data to perceive their surrounding environment, this degradation significantly affects the performance of the autonomous system. The purpose of this study is to collect sensor data under various weather conditions to understand the effects of weather on sensor data. The sensors used in this study were one camera and one LiDAR, connected to an NVIDIA Drive PX 2 installed in a 2019 Kia Niro. Two custom scenarios (static and dynamic) were chosen to collect sensor data in four real-world weather conditions: fair, cloudy, rainy, and snowy. This data was then analyzed with custom detection algorithms written in Python…
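
A sketch of one analysis this kind of study implies: quantify weather-induced degradation by comparing simple per-frame statistics (LiDAR return count, camera contrast) across conditions. The data below are synthetic placeholders; the study's actual frames, metrics, and detection code are not public.

    import numpy as np

    def lidar_return_rate(cloud, expected=100_000):
        """Fraction of expected LiDAR returns actually received in one sweep."""
        return len(cloud) / expected

    def image_contrast(gray):
        """RMS contrast of a grayscale frame, a rough visibility proxy."""
        return float(gray.std() / 255.0)

    # Synthetic stand-ins: snow drops returns and flattens image contrast.
    conditions = {
        "fair":  (np.random.rand(98_000, 3), np.random.randint(0, 256, (480, 640))),
        "snowy": (np.random.rand(61_000, 3), np.random.randint(90, 160, (480, 640))),
    }
    for name, (cloud, frame) in conditions.items():
        print(f"{name:>5}: returns={lidar_return_rate(cloud):.2f}, "
              f"contrast={image_contrast(frame.astype(np.uint8)):.2f}")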

Lidar Inertial Odometry and Mapping for Autonomous Vehicle in GPS-denied Parking Lot

Jilin University-Xuesong Chen, Sumin Zhang, Jian Wu, Rui He, Shiping Song, Bing Zhu, Jian Zhao
  • Technical Paper
  • 2020-01-0103
To be published on 2020-04-14 by SAE International in United States
High-precision, real-time ego-motion estimation is vital for autonomous vehicles. Urban areas contain many GPS-denied environments, such as underground parking lots, so a localization system relying solely on GPS cannot meet the requirements. Recently, lidar odometry and visual odometry have been introduced into localization systems to overcome the problem of missing GPS signals. Compared with visual odometry, lidar odometry is not susceptible to lighting changes and is therefore widely applied in weak-light environments. Moreover, autonomous parking is highly dependent on the geometric information around the vehicle, which makes building a map of the surroundings essential for an autonomous vehicle. We propose a lidar-inertial odometry and mapping method. Through sensor fusion, we compensate for the drawbacks of any single sensor, allowing the system to provide a more accurate estimate. Compared with other odometry methods using an IMU and lidar, we apply a tightly coupled lidar-IMU method to achieve lower drift, which effectively overcomes the degradation problem of pure-lidar methods and ensures precise pose estimation during fast motion. In addition, we propose a map…
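
A sketch of the general structure behind lidar-inertial odometry: integrate IMU measurements for a high-rate pose prediction between scans, then correct the drifting prediction with a scan-matching result. The paper uses a tightly coupled formulation; this loosely coupled blend is only a minimal illustration, and the rates, gain, and matcher output below are assumptions.

    import numpy as np

    DT_IMU = 0.005                      # 200 Hz IMU (assumed)
    GAIN = 0.8                          # trust placed in the scan match (assumed)

    pos = np.zeros(3)
    vel = np.zeros(3)

    def imu_predict(accel_world):
        """Dead-reckon position/velocity from one gravity-compensated sample."""
        global pos, vel
        vel = vel + accel_world * DT_IMU
        pos = pos + vel * DT_IMU

    def lidar_correct(scan_match_pos):
        """Blend the drifting IMU prediction toward the scan-match estimate."""
        global pos
        pos = (1 - GAIN) * pos + GAIN * scan_match_pos

    for _ in range(20):                 # 0.1 s of IMU samples between scans
        imu_predict(np.array([1.0, 0.0, 0.0]) + np.random.randn(3) * 0.05)
    lidar_correct(np.array([0.0052, 0.0, 0.0]))   # hypothetical matcher output
    print(pos)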

Using Polyglot Persistence with NoSQL Databases for Streaming Multimedia, Sensor, and Messaging Services in Autonomous Vehicles

Wayne State University-Kyle W. Brown
  • Technical Paper
  • 2020-01-0942
To be published on 2020-04-14 by SAE International in United States
The explosion of data has created challenges for both cloud-based systems and autonomous vehicles in data collection and management. The same challenges are now being realized in developing autonomous databases for the implementation of on-demand services in autonomous vehicles. With just one autonomous vehicle expected to generate over 30 terabytes of data a day, modern databases provide opportunities to scale autonomous-vehicle data horizontally and seamlessly. An autonomous vehicle database will be required to handle several types of sensor data (radar, lidar, ultrasonic, GPS, odometry, and inertial measurement units) while providing streaming services. Multimedia, social media, GPS data, audio, and messaging services will be instrumental in incorporating Platform as a Service (PaaS) into autonomous vehicles. Modern NoSQL databases provide solutions designed to accommodate a wide variety of data models, including key-value, document, columnar, and graph databases. NoSQL can store and utilize the structured, semi-structured, and unstructured data necessary for multimedia storage. NoSQL databases such as graph databases support the big data needs of modern software development of streaming services for applications with integration and scalability…
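
A sketch of the polyglot-persistence idea the abstract describes: route each workload to the data model that suits it (key-value for latest sensor samples, document for media metadata, graph for entity relationships). In-memory Python structures stand in for real NoSQL engines so the snippet is self-contained; the store names and record shapes are illustrative assumptions.

    kv_store = {}          # key-value: latest reading per sensor, fast lookups
    doc_store = []         # document: schemaless media/message records
    graph_store = {}       # graph: adjacency sets for entity relationships

    def write_sensor(sensor_id, value, ts):
        kv_store[sensor_id] = {"value": value, "ts": ts}

    def write_document(doc):
        doc_store.append(doc)                   # arbitrary nested structure

    def link(src, dst):
        graph_store.setdefault(src, set()).add(dst)

    # Usage: each service writes to the store suited to its access pattern.
    write_sensor("lidar_front", {"returns": 98_431}, ts=1_700_000_000.0)
    write_document({"type": "clip", "camera": "front", "codec": "h264",
                    "duration_s": 12.4, "tags": ["merge", "rain"]})
    link("vehicle_42", "trip_0001")
    link("trip_0001", "clip_0001")

    print(kv_store["lidar_front"]["value"])
    print([d for d in doc_store if "rain" in d.get("tags", [])])
    print(graph_store["trip_0001"])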