Tracking of Deformable Human Avatars through Fusion of Low-Dimensional 2D and 3D Kinematic Models
Ningjian Zhou and S. Shankar Sastry
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2019-87
May 19, 2019
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-87.pdf
We propose a method to estimate and track the 3D posture as well as the 3D shape of the human body from a single RGB-D image. We estimate the full 3D mesh of the body and show that 2D joint positions greatly improve 3D estimation and tracking accuracy. The problem is inherently challenging due to the complexity of the human body, lighting, clothing, and occlusion. To solve the problem, we leverage a custom MobileNet implementation of the OpenPose CNN to construct a 2D skeletal model of the human body. We then fit a low-dimensional deformable body model called SMPL to the observed point cloud, using an initialization from the 2D skeletal model. We do so by minimizing a cost function that penalizes the error between the estimated SMPL model points and the observed real-world point cloud. We further impose a pose prior, defined by a pre-trained Gaussian mixture model, to penalize unlikely poses. We evaluated our method on the Cambridge-Imperial APE (Action Pose Estimation) dataset, showing results comparable to non-real-time solutions.
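To make the fitting step concrete, a minimal sketch of the kind of objective described in the abstract is given below. The notation (theta for SMPL pose, beta for shape, lambda for the prior weight, and the specific squared-error and Gaussian-mixture terms) is an assumed form for illustration, not necessarily the exact cost function used in the report.

% Sketch of a plausible fitting objective (assumed form, not verbatim from the report):
% M_i(\theta, \beta): i-th SMPL model point under pose \theta and shape \beta
% P_i: corresponding observed point-cloud point; \lambda: prior weight
% The second term is the negative log-likelihood of a K-component Gaussian mixture pose prior.
E(\theta, \beta) \;=\; \sum_{i} \bigl\lVert M_i(\theta, \beta) - P_i \bigr\rVert_2^{2} \;-\; \lambda \log \sum_{k=1}^{K} w_k \, \mathcal{N}(\theta;\, \mu_k, \Sigma_k)

Minimizing the first term pulls the deformable SMPL surface onto the observed depth data, while the second term penalizes pose configurations that are unlikely under the pre-trained mixture of Gaussians.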
Advisors: S. Shankar Sastry
BibTeX citation:
@mastersthesis{Zhou:EECS-2019-87,
    Author = {Zhou, Ningjian and Sastry, S. Shankar},
    Title = {Tracking of Deformable Human Avatars through Fusion of Low-Dimensional 2D and 3D Kinematic Models},
    School = {EECS Department, University of California, Berkeley},
    Year = {2019},
    Month = {May},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-87.html},
    Number = {UCB/EECS-2019-87},
    Abstract = {We propose a method to estimate and track the 3D posture as well as the 3D shape of the human body from a single RGB-D image. We estimate the full 3D mesh of the body and show that 2D joint positions greatly improve 3D estimation and tracking accuracy. The problem is inherently challenging due to the complexity of the human body, lighting, clothing, and occlusion. To solve the problem, we leverage a custom MobileNet implementation of the OpenPose CNN to construct a 2D skeletal model of the human body. We then fit a low-dimensional deformable body model called SMPL to the observed point cloud, using an initialization from the 2D skeletal model. We do so by minimizing a cost function that penalizes the error between the estimated SMPL model points and the observed real-world point cloud. We further impose a pose prior, defined by a pre-trained Gaussian mixture model, to penalize unlikely poses. We evaluated our method on the Cambridge-Imperial APE (Action Pose Estimation) dataset, showing results comparable to non-real-time solutions.}
}
EndNote citation:
%0 Thesis
%A Zhou, Ningjian
%A Sastry, S. Shankar
%T Tracking of Deformable Human Avatars through Fusion of Low-Dimensional 2D and 3D Kinematic Models
%I EECS Department, University of California, Berkeley
%D 2019
%8 May 19
%@ UCB/EECS-2019-87
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-87.html
%F Zhou:EECS-2019-87