Prediction of Human Actions in Assembly Process by a Spatial-Temporal End-to-End Learning Model

2019-01-0509

April 2, 2019

Event
WCX SAE World Congress Experience
Abstract
Predicting human actions in the industrial assembly process is important: foreseeing future actions before they happen is essential to flexible human-robot collaboration and crucial to safety. Vision-based human action prediction from videos provides intuitive and adequate knowledge for many complex applications. The problem can be interpreted as deducing a person's next action from a short video clip. Historical information must be taken into account to learn the relations among past time steps and predict future ones. With traditional methods, however, it is difficult to extract this historical information and use it to infer the future situation. In this scenario, a model is needed that handles the spatial and temporal details stored in past human motions and constructs the future action from a limited set of accessible human demonstrations. In this paper, we apply an autoencoder-based deep learning framework for human action construction and merge it into an RNN pipeline for human action prediction. This contrasts with traditional approaches, which rely on hand-crafted features and outputs from different domains. We implement the proposed framework on a model vehicle seat assembly task. Our experimental results indicate that the proposed model is effective in capturing the historical details necessary for future human action prediction. In addition, the model successfully synthesizes prior information from human demonstrations and generates the corresponding future action from those spatial-temporal features.
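
As a rough illustration of the kind of architecture the abstract describes, the sketch below pairs a convolutional autoencoder (spatial features per frame) with an LSTM (temporal relations across frames) to predict the next frame of an assembly clip. This is a minimal, hypothetical PyTorch sketch, not the paper's implementation: the layer sizes, the 64x64 grayscale input, and the class name SpatialTemporalPredictor are all illustrative assumptions.

    import torch
    import torch.nn as nn

    class SpatialTemporalPredictor(nn.Module):
        """Hypothetical sketch: CNN encoder -> LSTM -> CNN decoder.

        Predicts the next frame of an assembly video from a short clip
        of past frames. All dimensions are illustrative assumptions,
        not taken from the paper.
        """
        def __init__(self, latent_dim=256):
            super().__init__()
            # Spatial encoder: compress each 64x64 grayscale frame to a latent vector
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
                nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
                nn.ReLU(),
                nn.Flatten(),
                nn.Linear(64 * 16 * 16, latent_dim),
            )
            # Temporal model: LSTM over the sequence of per-frame latents
            self.rnn = nn.LSTM(latent_dim, latent_dim, batch_first=True)
            # Spatial decoder: reconstruct the predicted next frame from the
            # final hidden state summarizing the motion history
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 64 * 16 * 16),
                nn.Unflatten(1, (64, 16, 16)),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
                nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),   # 32x32 -> 64x64
                nn.Sigmoid(),
            )

        def forward(self, clip):
            # clip: (batch, time, 1, 64, 64), a short clip of past human motion
            b, t = clip.shape[:2]
            z = self.encoder(clip.reshape(b * t, 1, 64, 64)).reshape(b, t, -1)
            _, (h, _) = self.rnn(z)      # final hidden state encodes the history
            return self.decoder(h[-1])   # predicted next frame: (batch, 1, 64, 64)

    # Usage: predict frame t+1 from an 8-frame history
    model = SpatialTemporalPredictor()
    past = torch.rand(2, 8, 1, 64, 64)
    next_frame = model(past)             # shape: (2, 1, 64, 64)

Training such a model end-to-end would compare next_frame against the ground-truth subsequent frame with a reconstruction loss (e.g., MSE), so the encoder, RNN, and decoder jointly learn the spatial-temporal features used for prediction.
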
Details
DOI
https://doi.org/10.4271/2019-01-0509
Pages
8
Citation
Zhang, Z., Zhang, Z., Wang, W., Chen, Y. et al., "Prediction of Human Actions in Assembly Process by a Spatial-Temporal End-to-End Learning Model," SAE Technical Paper 2019-01-0509, 2019, https://doi.org/10.4271/2019-01-0509.
Additional Details
Publisher
SAE International
Published
Apr 2, 2019
Product Code
2019-01-0509
Content Type
Technical Paper
Language
English