Abstract
We propose a scalable neural network framework to reconstruct the 3D mesh of a human body from multi-view images, in the subspace of the SMPL model. The use of multi-view images significantly reduces the projection ambiguity of the problem, increasing the reconstruction accuracy of the 3D human body under clothing. Our experiments show that this method benefits from the synthetic dataset generated by our pipeline, which offers flexible control over variables and provides ground truth for validation. Our method outperforms existing methods on real-world images, especially in shape estimation.
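For illustration only, below is a minimal PyTorch sketch of the general idea described above: a shared per-view image encoder whose features are fused before regressing the SMPL shape and pose parameters. The class name, layer sizes, and fusion scheme are assumptions made for this sketch, not the architecture used in the paper or the released code.

    import torch
    import torch.nn as nn

    class MultiViewSMPLRegressor(nn.Module):
        """Illustrative multi-view regressor (hypothetical): a shared per-view
        encoder, feature fusion across views, and an MLP predicting SMPL
        shape (10-D betas) and pose (72-D axis-angle thetas) parameters."""

        def __init__(self, feat_dim=512):
            super().__init__()
            # Shared per-view image encoder (placeholder CNN, sizes assumed).
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim), nn.ReLU(),
            )
            # Regressor over the fused multi-view feature.
            self.head = nn.Sequential(
                nn.Linear(feat_dim, 256), nn.ReLU(),
                nn.Linear(256, 10 + 72),  # SMPL betas + thetas
            )

        def forward(self, views):
            # views: (batch, num_views, 3, H, W)
            b, v = views.shape[:2]
            feats = self.encoder(views.flatten(0, 1)).view(b, v, -1)
            fused = feats.mean(dim=1)  # simple average over views (assumption)
            params = self.head(fused)
            betas, thetas = params[:, :10], params[:, 10:]
            return betas, thetas

    # Example usage with 4 views per subject.
    model = MultiViewSMPLRegressor()
    betas, thetas = model(torch.randn(2, 4, 3, 224, 224))

Averaging the per-view features is only one possible fusion choice; the key point is that every view contributes evidence to a single set of SMPL parameters, which is what reduces the projection ambiguity of single-view reconstruction.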
Paper
Shape-Aware Human Pose and Shape Reconstruction Using Multi-View Images, ICCV 2019.
Junbang Liang, Ming C. Lin.
Video
The demo video can be found here.
Code
The GitHub repository is located here.
Dataset
The dataset can be found here (4.3GB).
Dataset Generation Code
The dataset generation code can be found here (285MB).
Pre-trained Model
The pre-trained model can be found here (690MB).