A Personalized Lane-Changing Model for Advanced Driver Assistance System Based on Deep Learning and Spatial-Temporal Modeling

SAE International Journal of Transportation Safety

Yi Lu Murphey, University of Michigan-Dearborn, USA
Jun Gao, Jiangang Yi, Jianghan University, China
  • Journal Article
  • 09-07-02-0009
Published 2019-11-14 by SAE International in United States
Lane changes are stressful maneuvers for drivers, particularly in high-speed traffic flows. However, modeling a driver’s lane-changing decision and implementation process is challenging due to the complexity and uncertainty of driving behaviors. To address this issue, this article presents a personalized Lane-Changing Model (LCM) for Advanced Driver Assistance Systems (ADAS) based on a deep learning method. The LCM contains three major computational components. Firstly, with the abundant inputs of a Root Residual Network (Root-ResNet), the LCM is able to exploit more local information from the front-view video data. Secondly, the LCM is able to learn global spatial-temporal information via Temporal Modeling Blocks (TMBs). Finally, a two-layer Long Short-Term Memory (LSTM) network is used to learn video contextual features combined with lane-boundary-based distance features in lane change events. The experimental results on a real-world driving dataset show that the LCM is capable of learning the latent features of lane-changing behaviors and achieves significantly better performance than other prevalent models.
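The final stage of the pipeline the abstract describes can be sketched in miniature: per-frame video features (standing in for the Root-ResNet/TMB outputs) are concatenated with lane-boundary distance features and fed through a two-layer LSTM, whose last hidden state drives a logistic lane-change score. Everything below is illustrative, not the paper's implementation: the visual stages are abstracted as plain feature vectors, the weights are random toy values, and all names (`classify_sequence`, `lstm_step`) are hypothetical.

```python
import math
import random

def make_lstm(in_dim, hid_dim, rng):
    """Toy random weights for one LSTM layer: 4 gates, each hid_dim rows
    over [input ++ hidden ++ bias]."""
    return [[[rng.uniform(-0.1, 0.1) for _ in range(in_dim + hid_dim + 1)]
             for _ in range(hid_dim)] for _ in range(4)]

def lstm_step(W, x, h, c):
    """One LSTM time step on input vector x with state (h, c)."""
    z = x + h + [1.0]                               # concat input, hidden, bias term
    pre = [[sum(wk * zk for wk, zk in zip(row, z)) for row in gate] for gate in W]
    i = [1 / (1 + math.exp(-v)) for v in pre[0]]    # input gate
    f = [1 / (1 + math.exp(-v)) for v in pre[1]]    # forget gate
    o = [1 / (1 + math.exp(-v)) for v in pre[2]]    # output gate
    g = [math.tanh(v) for v in pre[3]]              # candidate cell state
    c_new = [fj * cj + ij * gj for fj, cj, ij, gj in zip(f, c, i, g)]
    h_new = [oj * math.tanh(cj) for oj, cj in zip(o, c_new)]
    return h_new, c_new

def classify_sequence(frame_feats, lane_dists, hid=8, rng=None):
    """Two-layer LSTM over per-frame [video features ++ lane-boundary distances];
    final hidden state -> logistic lane-change score in (0, 1)."""
    rng = rng or random.Random(0)                   # fixed seed: deterministic sketch
    in_dim = len(frame_feats[0]) + len(lane_dists[0])
    W1 = make_lstm(in_dim, hid, rng)
    W2 = make_lstm(hid, hid, rng)
    h1, c1 = [0.0] * hid, [0.0] * hid
    h2, c2 = [0.0] * hid, [0.0] * hid
    for feat, dist in zip(frame_feats, lane_dists):
        h1, c1 = lstm_step(W1, feat + dist, h1, c1)  # layer 1 consumes fused features
        h2, c2 = lstm_step(W2, h1, h2, c2)           # layer 2 consumes layer-1 output
    w_out = [rng.uniform(-0.1, 0.1) for _ in range(hid)]
    return 1 / (1 + math.exp(-sum(w * hj for w, hj in zip(w_out, h2))))
```

In the actual model, the per-frame vectors would come from the trained visual backbone and the weights from supervised training on labeled lane-change events; the sketch only shows how the two feature streams meet inside the recurrent classifier.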
This content contains downloadable datasets

Detection of Lane-Changing Behavior Using Collaborative Representation Classifier-Based Sensor Fusion

SAE International Journal of Transportation Safety

Jun Gao, Yi Lu Murphey, University of Michigan-Dearborn, USA
Honghui Zhu, Wuhan University of Technology, China
  • Journal Article
  • 09-06-02-0010
Published 2018-10-29 by SAE International in United States
Sideswipe accidents occur primarily when drivers attempt an improper lane change, drift out of lane, or when the vehicle loses lateral traction. In this article, a fusion approach is introduced that utilizes data from two differing-modality sensors (a front-view camera and an onboard diagnostics (OBD) sensor) to detect a driver’s lane-changing behavior. For lane change detection, both feature-level fusion and decision-level fusion are examined using a collaborative representation classifier (CRC). Computationally efficient detection features are extracted from distances to the detected lane boundaries and from vehicle dynamics signals. In feature-level fusion, features generated from the two differing-modality sensors are merged before classification, while in decision-level fusion, Dempster-Shafer (D-S) theory is used to combine the classification outcomes from two classifiers, each corresponding to one sensor. The results indicated that feature-level fusion outperformed decision-level fusion, and that the introduced fusion approach using a CRC performed significantly better in terms of detection accuracy than other state-of-the-art classifiers.
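The decision-level fusion step merges the two per-sensor classifiers with Dempster's rule of combination. A minimal sketch over the two-class frame {LC, NC} (lane change / no change), assuming each classifier emits a basic mass assignment over subsets of that frame; `ds_combine` and the mass values are illustrative, not taken from the paper.

```python
def ds_combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments.
    Masses are dicts mapping frozensets of hypotheses to belief mass."""
    fused, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    # Normalize by 1 - K to redistribute the conflicting mass
    return {s: v / (1.0 - conflict) for s, v in fused.items()}

# One hypothetical classifier per sensor: camera-based and OBD-based masses
LC, NC = frozenset({"LC"}), frozenset({"NC"})
THETA = LC | NC                              # ignorance: the whole frame {LC, NC}
camera = {LC: 0.7, NC: 0.2, THETA: 0.1}
obd    = {LC: 0.6, NC: 0.3, THETA: 0.1}
fused = ds_combine(camera, obd)
```

With both sensors leaning toward a lane change, the combined mass on LC exceeds either source's alone, which is the behavior that makes D-S combination attractive for corroborating evidence from differing modalities.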
This content contains downloadable datasets