We propose a scalable neural network framework that reconstructs the 3D mesh of a human body from multi-view images, operating in the subspace of the SMPL model. Using multiple views significantly reduces the projection ambiguity of the problem and increases the reconstruction accuracy of the 3D body under clothing. Our experiments show that the method benefits from the synthetic dataset generated by our pipeline, which offers flexible control over individual variables and provides ground truth for validation. Our method outperforms existing approaches on real-world images, especially in shape estimation.
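To make the high-level idea concrete, the following is a minimal sketch of multi-view SMPL-parameter regression: per-view features are fused by averaging and then mapped to the 72 pose and 10 shape parameters of SMPL. The feature extractor and linear regressor here are illustrative stand-ins for the paper's network; all function names, weights, and dimensions are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image, W_feat):
    # Stand-in for a CNN backbone: flatten the image and apply a
    # random linear map with a tanh nonlinearity.
    return np.tanh(W_feat @ image.ravel())

def predict_smpl_params(views, W_feat, W_reg):
    # Fuse multi-view features by averaging, then regress the
    # SMPL parameters (72 pose + 10 shape) with a linear layer.
    feats = np.stack([extract_features(v, W_feat) for v in views])
    fused = feats.mean(axis=0)
    params = W_reg @ fused
    return params[:72], params[72:]

# Toy example: 4 views represented as 8x8 "images" (illustrative only).
views = [rng.standard_normal((8, 8)) for _ in range(4)]
W_feat = rng.standard_normal((32, 64)) * 0.1   # hypothetical feature weights
W_reg = rng.standard_normal((82, 32)) * 0.1    # hypothetical regressor weights
pose, shape = predict_smpl_params(views, W_feat, W_reg)
print(pose.shape, shape.shape)  # → (72,) (10,)
```

Averaging is one simple, view-order-invariant fusion choice; any permutation-invariant pooling over views would fit the same sketch.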
Shape-Aware Human Pose and Shape Reconstruction Using Multi-View Images, ICCV 2019.
Junbang Liang, Ming C. Lin.
The demo video can be found here.
The GitHub repository is located here.
The dataset can be found here (4.3GB).
The dataset generation code can be found here (285MB).
The pre-trained model can be found here (690MB).