Human Body Orientation from 2D Images

2021-01-0082

April 6, 2021

Event
SAE WCX Digital Summit
Authors
K. Abughalieh, S. Alawneh
Abstract
This work presents a method to estimate human body orientation from 2D images of a person; the challenge comes from the variety of human body poses and appearances. The method uses the OpenPose neural network as a human pose detection module together with a depth sensing module; the two modules work together to extract the body orientation from 2D stereo images. OpenPose has proven efficient at detecting the human body joints defined by the COCO dataset, and it can detect the visible joints without being affected by background or other challenging factors. Adding depth data for each joint provides rich information for reconstructing the detected humans in 3D, and this set of 3D points can reveal, for example, the body orientation and walking direction. The depth module used in this work is the ZED stereo camera system, which uses CUDA for high-performance depth computation. One possible application of this method is social robots that must navigate through crowds, where the human body orientation can be an important input to the path planner. Another application might require the robot to face the human user for interaction; this method provides the robot with the information required to do so. The method is aimed at indoor activity to ensure higher accuracy.
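
As a rough illustration of the pipeline described above (a sketch, not the authors' implementation), the Python example below takes 2D keypoints in the COCO layout, as produced by a detector such as OpenPose, plus a per-pixel depth map from a stereo camera, back-projects the shoulder joints into 3D, and computes a torso yaw angle. The camera intrinsics, keypoint indices, and helper names are assumptions made for this example.

# Illustrative sketch only (not the authors' implementation): back-project
# two shoulder keypoints to 3D using a depth map and derive the torso yaw.
# Camera intrinsics (fx, fy, cx, cy) and the COCO shoulder indices are
# assumptions for this example.
import numpy as np

R_SHOULDER, L_SHOULDER = 2, 5  # assumed COCO keypoint indices

def backproject(u, v, depth_m, fx, fy, cx, cy):
    """Pixel (u, v) with metric depth -> 3D point in the camera frame."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

def body_yaw_deg(keypoints_2d, depth_map, fx, fy, cx, cy):
    """Estimate body yaw (degrees) from the 3D shoulder-to-shoulder vector."""
    pts = []
    for idx in (R_SHOULDER, L_SHOULDER):
        u, v = keypoints_2d[idx]
        d = depth_map[int(v), int(u)]
        if not np.isfinite(d) or d <= 0:
            return None  # joint occluded or depth unavailable
        pts.append(backproject(u, v, d, fx, fy, cx, cy))
    # The torso faces perpendicular to the shoulder line; measure the
    # shoulder line's angle in the horizontal (x-z) plane.
    dx, _, dz = pts[1] - pts[0]
    return float(np.degrees(np.arctan2(dz, dx)))

if __name__ == "__main__":
    # Synthetic example values, purely for illustration.
    kp = np.zeros((18, 2))
    kp[R_SHOULDER] = [300, 240]
    kp[L_SHOULDER] = [380, 242]
    depth = np.full((480, 640), 2.0)  # flat 2 m scene
    depth[242, 380] = 2.3             # left shoulder slightly farther away
    print(body_yaw_deg(kp, depth, fx=700.0, fy=700.0, cx=320.0, cy=240.0))

In the paper's setup the depth comes from the ZED stereo camera, and all detected joints, not just the shoulders, can be lifted to 3D to reason about orientation and walking direction.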
Details
DOI
https://doi.org/10.4271/2021-01-0082
Pages
6
Citation
Abughalieh, K., and Alawneh, S., "Human Body Orientation from 2D Images," SAE Technical Paper 2021-01-0082, 2021, https://doi.org/10.4271/2021-01-0082.
Additional Details
Publisher
SAE International
Published
Apr 6, 2021
Product Code
2021-01-0082
Content Type
Technical Paper
Language
English