Deep 3D Pose

Synthesizing Training Images for Boosting Human 3D Pose Estimation

Created by Wenzheng Chen, Huan Wang, Yangyan Li, Hao Su, Zhenhua Wang, Changhe Tu, Dani Lischinski, Daniel Cohen-Or, Baoquan Chen.

Introduction

Our work was initially described in an arXiv tech report and will appear as a 3D Vision 2016 paper. Deep3DPose is a scalable human image synthesis pipeline for generating millions of human images with corresponding 2D and 3D pose annotations. These training images can be used to train high-capacity models such as deep CNNs.

License

Deep 3D Pose is released under the MIT License (refer to the LICENSE file for details).

Citing Deep3DPose

@InProceedings{Deep3DPose,
    Title={Synthesizing Training Images for Boosting Human 3D Pose Estimation},
    Author={Wenzheng Chen and Huan Wang and Yangyan Li and Hao Su and Zhenhua Wang and Changhe Tu and Dani Lischinski and Daniel Cohen-Or and Baoquan Chen},
    Booktitle={3D Vision (3DV)},
    Year={2016}
}

Contents

  1. Prerequisites
  2. Human Poses
  3. Human Models
  4. Human Clothes
  5. Render
  6. Caffe

Prerequisites

  1. Blender (tested with Blender 2.76 on 64-bit Windows). You can download it for free from the Blender website.

  2. MATLAB (tested with R2015a on 64-bit Windows). You need to install a C++ compiler so that mex is available in your MATLAB; a quick sanity check is sketched below.
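
Before running the demo scripts, you can confirm that MATLAB has a C++ compiler registered for mex. This check is our suggestion, not part of the repository:

```matlab
% Run from the MATLAB prompt: list/select a registered C++ compiler.
% If this errors out, install a supported compiler (e.g. Visual C++)
% and re-run it before using the demo scripts.
mex -setup C++
```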

Human Poses

To generate human models, you first need to define their poses. We use the CMU Mocap Database as our pose source; it contains about 4 million poses. To better cover the pose space, we also learn a Bayesian network from these poses.

To generate poses, enter the 1-skel directory and run demo_generateskel.m. It will generate cmu_skeletons.mat, which contains a subset of the poses from the CMU Mocap Database.

To acquire more poses, you can download the ASF/AMC zip files from the CMU Mocap Database and unzip them into the data/asfamc directory.
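
Unpacking can be scripted from MATLAB, as in the sketch below. The archive name is a placeholder; substitute whatever you downloaded from the CMU Mocap Database page:

```matlab
% Placeholder archive name -- substitute the ASF/AMC zip you downloaded
% from the CMU Mocap Database.
zipFile = 'allasfamc.zip';
unzip(zipFile, fullfile('data', 'asfamc'));   % extract into data/asfamc
```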

Note that this repository does not include the Bayesian network code. You can download it from the original website and use the generated poses as input to learn the model.

We convert the poses from the CMU format to our own format; see the images in the sources directory. These poses are then used to generate the human models.
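
A minimal sketch for inspecting the generated poses follows. The variable layout (one row per pose, with joints flattened to x/y/z triples) is an assumption; check the .mat file written by demo_generateskel.m for the actual names and shapes:

```matlab
% Load the generated poses and peek at the contents.
S = load('cmu_skeletons.mat');
fn = fieldnames(S);
disp(fn);                              % see what the file actually stores

skels = S.(fn{1});                     % grab the first stored variable
fprintf('Loaded %d poses.\n', size(skels, 1));

% Visualize one pose as a 3D scatter of joints (assumed N x 3J layout).
p = reshape(skels(1, :), 3, []);       % 3 x J joint coordinates
scatter3(p(1, :), p(2, :), p(3, :), 36, 'filled');
axis equal; title('Sample CMU pose');
```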

Human Models

To generate human models, we adopt SCAPE. This model decomposes a human mesh into a set of pose parameters and shape parameters, so you can generate an unlimited number of meshes by varying the poses and shapes.

To generate models, first copy cmu_skeletons.mat into the 2-model directory. Then run demo_skel2RR.m and demo_RR2obj.m. The first script generates cmu_RR.mat, which converts the poses into rotation matrices. The second script calls SCAPE to generate the human models, which are written to the data/models directory.
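
Put together, the model-generation step looks roughly like this when run from the 2-model directory (the output count check at the end is ours, not part of the scripts):

```matlab
% Run the two-stage pipeline: poses -> rotation matrices -> SCAPE meshes.
demo_skel2RR;                                  % writes cmu_RR.mat
demo_RR2obj;                                   % writes .obj meshes

% The generated meshes land in data/models.
objs = dir(fullfile('data', 'models', '*.obj'));
fprintf('Generated %d models.\n', numel(objs));
```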

Acknowledgement: SCAPE was implemented by Jie Mao. We are grateful to him for providing us with this code.

Render

We use Blender to render the models in batch. To render the generated models, enter the 4-render directory. First, run demo.m to generate some auxiliary files, then run demo2.m to call Blender and render them. You need to set your Blender path in demo2.m.
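
For reference, demo2.m drives Blender in background mode roughly as sketched below. The .blend and .py file names are placeholders, not the repository's actual file names, and blenderPath must point at your own installation:

```matlab
% Invoke Blender headlessly from MATLAB. '--background' suppresses the
% GUI and '--python' runs the rendering script inside Blender.
blenderPath = 'C:\Program Files\Blender Foundation\Blender\blender.exe';
cmd = sprintf('"%s" --background scene.blend --python render_models.py', ...
              blenderPath);
status = system(cmd);
if status ~= 0
    error('Blender rendering failed; check the path and script arguments.');
end
```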

Our rendering parameters render the human figure only. To generate complete images, we need to add backgrounds. You can run demo3.m, which composites each rendered human over a background image.
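
The compositing performed by demo3.m amounts to standard alpha blending; here is a hedged sketch (file names are placeholders, and imresize requires the Image Processing Toolbox):

```matlab
% Alpha-blend a rendered human (PNG with transparency) over a background.
[fg, ~, alpha] = imread('human_0001.png');      % RGBA render from Blender
bg = imread(fullfile('data', 'backgrounds', 'bg_0001.jpg'));
bg = imresize(bg, [size(fg, 1), size(fg, 2)]);  % match render resolution

a   = repmat(im2double(alpha), [1 1 3]);        % per-channel alpha mask
out = im2uint8(a .* im2double(fg) + (1 - a) .* im2double(bg));
imwrite(out, 'composite_0001.png');
```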

Note that this repository contains 3 backgrounds and 3 clothing textures. In the paper we used 796 backgrounds and 10,000 clothing textures; the clothes can be downloaded from the project website. You can also use your own images.

Caffe

We modify Caffe to adapt it to our task, and we use domain adaptation to make full use of the synthetic data. See the 5-caffe directory for more details.