AvatarJLM uses tracking signals of the head and hands to estimate accurate, smooth, and plausible full-body motions.
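Conceptually, the model maps a short window of three 6-DoF tracking signals (head and two hands) to full-body joint rotations. The snippet below is only an illustrative sketch of that setup; the tensor names, shapes, and feature layout are assumptions for clarity, not the repository's actual interface.

```python
# Illustrative sketch of the task setup; all names and shapes are assumptions.
import torch

num_frames = 40      # length of a tracking window (assumed)
num_trackers = 3     # head + left hand + right hand
feat_dim = 18        # e.g. rotation + position features per tracker (assumed)

sparse_tracking = torch.randn(num_frames, num_trackers, feat_dim)

# A trained model would map the sparse tracking signals to full-body joint
# rotations, e.g. the 22 SMPL body joints (output shape assumed):
# full_body_pose = model(sparse_tracking)   # -> (num_frames, 22, 6)
```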
📖 For more visual results, please check out our project page.
- [09/2023] Testing samples are available.
- [09/2023] Training and testing code is released.
- [07/2023] AvatarJLM is accepted to ICCV 2023 🥳!
- Please download the datasets from AMASS.
- Download the required body models and place them in the `./support_data/body_models` directory of this repository. For the SMPL+H body model, download it from http://mano.is.tue.mpg.de/. Please download the AMASS version of the model with DMPL blendshapes. You can obtain dynamic shape blendshapes, e.g., DMPLs, from http://smpl.is.tue.mpg.de.
- Run `./data/prepare_data.py` to preprocess the input data for faster training. The data split for training and testing under Protocol 1 in our paper is stored in the `./data/data_split` folder (from AvatarPoser).
`python ./data/prepare_data.py --protocol [1, 2, 3] --root [path to AMASS]`
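For example, to preprocess AMASS under Protocol 1 (the AMASS root path below is a placeholder for your local copy):

`python ./data/prepare_data.py --protocol 1 --root /path/to/AMASS`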
- Please download our real-captured testing data from Google Drive. The data is preprocessed to the same format as our preprocessed AMASS data.
- Unzip the data and place it in the `./data` directory of this repository.
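As a quick sanity check after unzipping, you can inspect one of the files. This is only a sketch: it assumes the files are PyTorch-serialized like the preprocessed AMASS data, and the file path below is hypothetical.

```python
# Minimal sanity check (sketch): load one preprocessed sequence and inspect its contents.
# The path is a hypothetical example; point it at any file from the unzipped archive.
import torch

sample = torch.load("./data/real_test_sequence.pt")
print(type(sample))
if isinstance(sample, dict):
    print(list(sample.keys()))
```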
- Python >= 3.9
- PyTorch >= 1.11.0
- pyrender
- trimesh
- human_body_prior
- body_visualizer
- mesh
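A minimal environment check is sketched below. It assumes the `mesh` entry refers to the MPI-IS mesh library (imported as `psbody.mesh`); adjust the module names if your installation differs.

```python
# Sketch: verify interpreter/PyTorch versions and that the listed packages import.
import importlib
import sys

assert sys.version_info >= (3, 9), "Python >= 3.9 is required"

import torch
print("PyTorch", torch.__version__)  # should be >= 1.11.0

# "psbody.mesh" assumes the MPI-IS mesh library; the other names match the list above.
for module in ["pyrender", "trimesh", "human_body_prior", "body_visualizer", "psbody.mesh"]:
    try:
        importlib.import_module(module)
        print(f"{module}: OK")
    except ImportError:
        print(f"{module}: missing")
```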
`python train.py --protocol [1, 2, 3] --task [name of the experiment]`
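For example, to train under Protocol 1 (the task name is an arbitrary experiment label):

`python train.py --protocol 1 --task avatarjlm_p1`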
`python test.py --protocol [1, 2, 3, real] --task [name of the experiment] --checkpoint [path to trained checkpoint] [--vis]`
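For example (the task name and checkpoint path are placeholders for your own run and downloaded model):

`python test.py --protocol 1 --task avatarjlm_p1 --checkpoint ./checkpoints/avatarjlm_p1.pth --vis`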
| Protocol | MPJRE [deg] | MPJPE [cm] | MPJVE [cm/s] | Trained Model |
| --- | --- | --- | --- | --- |
| 1 | 3.01 | 3.35 | 21.01 | Google Drive |
| 2-CMU-Test | 5.36 | 7.28 | 26.46 | Google Drive |
| 2-BML-Test | 4.65 | 6.22 | 34.45 | Google Drive |
| 2-MPI-Test | 5.85 | 6.47 | 24.13 | Google Drive |
| 3 | 4.25 | 4.92 | 27.04 | Google Drive |
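For reference, MPJPE and MPJVE are commonly computed as below (MPJRE is the corresponding mean per-joint rotation error in degrees). This is only a sketch, assuming joint positions in meters and a fixed frame rate; it is not necessarily the exact evaluation code of this repository.

```python
# Sketch of the position/velocity metrics reported above (units and fps assumed).
import torch

def mpjpe_cm(pred, gt):
    """Mean per-joint position error in cm; pred/gt are (T, J, 3) joint positions in meters."""
    return (pred - gt).norm(dim=-1).mean().item() * 100.0

def mpjve_cm_per_s(pred, gt, fps=60.0):
    """Mean per-joint velocity error in cm/s; velocities from finite differences (fps assumed)."""
    pred_vel = (pred[1:] - pred[:-1]) * fps
    gt_vel = (gt[1:] - gt[:-1]) * fps
    return (pred_vel - gt_vel).norm(dim=-1).mean().item() * 100.0
```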
If you find our work useful for your research, please consider citing the paper:
@inproceedings{zheng2023realistic,
  title     = {Realistic Full-Body Tracking from Sparse Observations via Joint-Level Modeling},
  author    = {Zheng, Xiaozheng and Su, Zhuo and Wen, Chao and Xue, Zhou and Jin, Xiaojie},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year      = {2023}
}
Distributed under the MIT License. See LICENSE for more information.
This project is built on source code shared by AvatarPoser. We thank the authors for their great work!