Training of Neural Networks with Automated Labeling of Simulated Sensor Data
Published April 2, 2019 by SAE International in the United States
While convolutional neural networks (CNNs) have revolutionized ground-vehicle autonomy in the last decade, this class of algorithms requires large, truth-labeled data sets for training. The process of collecting and labeling training data is tedious, time-consuming, expensive, and error-prone. To address this, a method for training CNNs with automatically labeled simulated data was developed. This method uses physics-based simulation of sensors, with automated truth labeling, to improve the speed and accuracy of training-data acquisition for both camera and LIDAR sensors. The framework is enabled by the MSU Autonomous Vehicle Simulator (MAVS), a physics-based sensor simulator for ground-vehicle robotics that includes high-fidelity simulations of LIDAR, cameras, and other sensors.
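The core idea of the abstract can be sketched in code: because a physics-based simulator places every object in the scene itself, it already knows the semantic class behind each rendered pixel, so truth labels come for free with each frame. The sketch below is a minimal, hypothetical stand-in for a simulator such as MAVS (the real MAVS API is not shown in this paper page); the class names, function names, and synthetic "rendering" are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical semantic classes; a real simulator like MAVS would define
# these from the objects actually placed in the virtual scene.
CLASSES = {0: "sky", 1: "ground", 2: "vegetation", 3: "obstacle"}

def render_frame(rng, h=64, w=64):
    """Return a simulated camera image and its per-pixel truth labels.

    In a real physics-based simulator the label map is a byproduct of
    rendering (the engine knows which object each ray hit); here we fake
    it with random scene semantics plus a per-class color and noise.
    """
    labels = rng.integers(0, len(CLASSES), size=(h, w))      # known scene semantics
    palette = rng.uniform(0.0, 255.0, size=(len(CLASSES), 3))
    image = palette[labels] + rng.normal(0.0, 5.0, size=(h, w, 3))  # sensor noise
    return image.astype(np.float32), labels.astype(np.int64)

def build_dataset(n_frames, seed=0):
    """Automated labeling: every rendered frame arrives already truth-labeled,
    so no human annotation step is needed before CNN training."""
    rng = np.random.default_rng(seed)
    frames = [render_frame(rng) for _ in range(n_frames)]
    images = np.stack([f[0] for f in frames])
    labels = np.stack([f[1] for f in frames])
    return images, labels

images, labels = build_dataset(8)
print(images.shape, labels.shape)  # (8, 64, 64, 3) (8, 64, 64)
```

The resulting `(images, labels)` pairs could feed any standard semantic-segmentation training loop; the same pattern extends to LIDAR by emitting a per-point class instead of a per-pixel one.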
- Chris Goodin - Center for Advanced Vehicular Systems
- Suvash Sharma - Center for Advanced Vehicular Systems
- Matthew Doude - Center for Advanced Vehicular Systems
- Daniel Carruth - Center for Advanced Vehicular Systems
- Lalitha Dabbiru - Center for Advanced Vehicular Systems
- Christopher Hudson - Center for Advanced Vehicular Systems
Citation: Goodin, C., Sharma, S., Doude, M., Carruth, D. et al., "Training of Neural Networks with Automated Labeling of Simulated Sensor Data," SAE Technical Paper 2019-01-0120, 2019, https://doi.org/10.4271/2019-01-0120.