LiDAR and Camera-Based Convolutional Neural Network Detection for Autonomous Driving

2020-01-0136

04/14/2020

Event
WCX SAE World Congress Experience
Abstract
Autonomous vehicles are currently a subject of great interest, and there is heavy research on creating and improving algorithms for detecting objects in their vicinity. A ROS-based deep learning approach has been developed to detect objects using point cloud data. From encoded raw light detection and ranging (LiDAR) and camera data, several basic statistics such as elevation and density are generated. A simple and fast convolutional neural network (CNN) solution was developed for object identification, localization, classification, and bounding-box generation to detect vehicles, pedestrians, and cyclists. The system is implemented on an Nvidia Jetson TX2 embedded computing platform, where the classification and location of the objects are determined by the neural network. Coordinates and other properties of each detected object are published to various ROS topics, which are then serviced by visualization and data-handling routines. Performance of the system is scrutinized with regard to hardware capability, software reliability, and real-time performance. The final product is a mobile platform capable of identifying pedestrians, cars, trucks, and cyclists.
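The abstract notes that raw LiDAR data are encoded into simple per-cell statistics such as elevation and density before being passed to the CNN. As a rough illustration only (not the authors' implementation: the bird's-eye-view grid extents, cell size, and function name below are assumptions), such an encoding might look like the following NumPy sketch.

```python
import numpy as np

def encode_bev_features(points, x_range=(0.0, 60.0), y_range=(-30.0, 30.0),
                        cell_size=0.1):
    """Encode a LiDAR point cloud (N x 3 array of x, y, z in meters) into a
    bird's-eye-view grid with elevation (max height) and density channels.
    Ranges and resolution are illustrative assumptions, not the paper's values."""
    # Keep only points inside the region of interest.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]

    # Map metric coordinates to integer grid indices.
    h = int((x_range[1] - x_range[0]) / cell_size)
    w = int((y_range[1] - y_range[0]) / cell_size)
    xi = ((pts[:, 0] - x_range[0]) / cell_size).astype(np.int32)
    yi = ((pts[:, 1] - y_range[0]) / cell_size).astype(np.int32)

    elevation = np.full((h, w), -np.inf, dtype=np.float32)  # max z per cell
    density = np.zeros((h, w), dtype=np.float32)            # points per cell

    np.maximum.at(elevation, (xi, yi), pts[:, 2].astype(np.float32))
    np.add.at(density, (xi, yi), 1.0)

    elevation[density == 0] = 0.0   # empty cells get a neutral height
    density = np.log1p(density)     # log-scale the counts

    # Stack channels into a (2, H, W) tensor for a CNN.
    return np.stack([elevation, density], axis=0)

if __name__ == "__main__":
    # Toy point cloud: 1000 random points in front of the sensor.
    cloud = np.random.uniform([0, -30, -2], [60, 30, 2], size=(1000, 3))
    print(encode_bev_features(cloud).shape)  # (2, 600, 600)
```

In the paper's system, feature maps of this kind would be fed to the CNN detector and the resulting object classes and bounding boxes published to ROS topics; that plumbing is omitted here.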
Details
DOI
https://doi.org/10.4271/2020-01-0136
Pages
6
Citation
Hamieh, I., Myers, R., Nimri, H., Rahman, T. et al., "LiDAR and Camera-Based Convolutional Neural Network Detection for Autonomous Driving," SAE Technical Paper 2020-01-0136, 2020, https://doi.org/10.4271/2020-01-0136.
Additional Details
Publisher
SAE International
Published
Apr 14, 2020
Product Code
2020-01-0136
Content Type
Technical Paper
Language
English