| Source Video | Reconstruction Video |
| ------------ | -------------------- |
| rio.mp4      | download.1.mp4       |
You can find more outputs here: More Outputs
First, clone the repo. Then we recommend creating a clean conda environment, activating it, and installing all dependencies, as follows:
```bash
git clone https://github.com/saba99/4D_HumanBodyReconstructing.git
cd 4D_HumanBodyReconstructing
conda create --name 4D-humans python=3.10  # example environment name
conda activate 4D-humans
pip install numpy==1.23.1 torch
pip install -e .[all]
```
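Before running the demo, you can optionally sanity-check the install. A minimal sketch (it only assumes the pinned NumPy and PyTorch installed above; a `False` on the last value means no usable GPU was detected):

```bash
# Confirm NumPy and PyTorch import cleanly and report whether CUDA is available
python -c "import numpy, torch; print(numpy.__version__, torch.__version__, torch.cuda.is_available())"
```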
```bash
# Run on a video file
python track.py video.source="example_data/videos/gymnasts.mp4"

# Run on extracted frames
python track.py video.source="/path/to/frames_folder/"

# Run on a YouTube link (depends on pytube working properly)
python track.py video.source=\'"https://www.youtube.com/watch?v=xEH_5T9jMVU"\'
```
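If you want to use the frames-folder mode, a common way to produce the frames is ffmpeg. A minimal sketch, assuming the tracker reads standard image files from the folder (the output directory and filename pattern here are just examples):

```bash
# Extract frames from a video into a folder, then point track.py at the folder
mkdir -p example_data/frames/gymnasts
ffmpeg -i example_data/videos/gymnasts.mp4 example_data/frames/gymnasts/%06d.jpg
python track.py video.source="example_data/frames/gymnasts/"
```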
Download the training data to `./hmr2_training_data/`, then start training using the following command:
```bash
bash fetch_training_data.sh
python train.py exp_name=hmr2 data=mix_all experiment=hmr_vit_transformer trainer=gpu launcher=local
```
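If you need to restrict training to specific GPUs, the standard CUDA environment variable works with the command above. A sketch; the device ids are examples and should be adjusted to your machine:

```bash
# Train on GPUs 0 and 1 only
CUDA_VISIBLE_DEVICES=0,1 python train.py exp_name=hmr2 data=mix_all experiment=hmr_vit_transformer trainer=gpu launcher=local
```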
Download the evaluation metadata to `./hmr2_evaluation_data/`. Additionally, download the Human3.6M, 3DPW, LSP-Extended, COCO, and PoseTrack dataset images and update the corresponding paths in `hmr2/configs/datasets_eval.yaml`.

Run the evaluation on multiple datasets as follows; the results are stored in `results/eval_regression.csv`:
```bash
python eval.py --dataset 'H36M-VAL-P2,3DPW-TEST,LSP-EXTENDED,POSETRACK-VAL,COCO-VAL'
```
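To skim the resulting CSV in the terminal, the standard `column` utility gives an aligned view. A minimal sketch, assuming the file is plain comma-separated with a header row:

```bash
# Pretty-print the regression metrics as an aligned table
column -s, -t < results/eval_regression.csv
```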