CVPR 2022
Lixin Yang* · Kailin Li* · Xinyu Zhan · Jun Lv · Wenqiang Xu · Jiefeng Li · Cewu Lu
(* = equal contribution)
This repo contains the models, training, and testing code.
- installation guidelines
- testing code and pretrained models
- generating CCV-space
- training pipeline
Follow the Installation Instructions to set up the environment, assets, and datasets.
Download the model checkpoint: 🔗 artiboost_ho3dv2_clasbased_100e.pth.tar to ./checkpoints.
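If you fetched the checkpoint through a browser, a minimal shell sketch for placing it (the ~/Downloads source path is an assumption; adjust it to wherever you saved the file):

# create the checkpoint directory and move the downloaded file into it
$ mkdir -p ./checkpoints
$ mv ~/Downloads/artiboost_ho3dv2_clasbased_100e.pth.tar ./checkpoints/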
Then run:
$ python train/submit_reload.py --cfg config_eval/eval_ho3dv2_clasbased_artiboost.yaml \
--gpu_id 0 --submit_dump --filter_unseen_obj_idxs 11 --batch_size 100
This script yields the (Ours Clas + Arti) result in Table 2 of the main paper.
- The object's MPCPE score is stored in exp/submit_{cfg}_{time}/evaluations/.
- The HO3Dv2 Codalab submission file will be dumped at ./exp/submit_{cfg}_{time}/{cfg}_SUBMIT.zip.
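Optionally, you can list the archive contents before uploading as a quick sanity check (a suggestion using the standard unzip tool; substitute the actual {cfg} and {time} values):

$ unzip -l ./exp/submit_{cfg}_{time}/{cfg}_SUBMIT.zip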
Upload it to the HO3Dv2 Codalab server and wait for the evaluation to finish.
You can also visualize the predictions as shown in the images below.
First, you need to install extra packages for rendering. Use pip to install them sequentially:

$ pip install vtk==9.0.1
$ pip install PyQt5==5.15.4
$ pip install PyQt5-Qt5==5.15.2
$ pip install PyQt5-sip==12.8.1
$ pip install mayavi==4.7.2
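As a quick optional smoke test, you can check that the rendering stack imports cleanly; if this fails, revisit the package versions above:

$ python -c "from mayavi import mlab; print('mayavi OK')"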
Second, you need to connect a display (a physical monitor, TeamViewer, or a VNC server) that supports the Qt platform plugin "xcb".
Inside the display session, start a new terminal and append --postprocess_fit_mesh and --postprocess_draw to the end of the shell command, e.g.
# HO3Dv2, Heatmap-based model, ArtiBoost
$ python train/submit_reload.py --cfg config_eval/eval_ho3dv2_clasbased_artiboost.yaml \
--gpu_id 0 --submit_dump --filter_unseen_obj_idxs 11 --batch_size 100 \
--postprocess_fit_mesh --postprocess_draw
The rendered qualitative results are stored in exp/submit_{cfg}_{time}/rendered_image/.
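If no physical or remote display is available, running the same command under a virtual X server may also satisfy the "xcb" plugin requirement; this is an untested assumption (xvfb-run ships with the standard xvfb package):

# same evaluation, but under a virtual framebuffer (untested assumption)
$ xvfb-run -a python train/submit_reload.py --cfg config_eval/eval_ho3dv2_clasbased_artiboost.yaml \
    --gpu_id 0 --submit_dump --filter_unseen_obj_idxs 11 --batch_size 100 \
    --postprocess_fit_mesh --postprocess_draw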
Download the model checkpoint: 🔗 artiboost_ho3dv2_regbased_100e.pth.tar to ./checkpoints, then run:
$ python train/submit_reload.py --cfg config_eval/eval_ho3dv2_regbased_artiboost.yaml \
--gpu_id 0 --submit_dump --filter_unseen_obj_idxs 11
This script yields the (Ours Reg + Arti) result in Table 2 of the main paper.
Download the model checkpoint: 🔗 artiboost_ho3dv3_clasbased_200e.pth.tar to ./checkpoints, then run:
$ python train/submit_reload.py --cfg config_eval/eval_ho3dv3_clasbased_artiboost.yaml \
--gpu_id 0 --submit_dump --filter_unseen_obj_idxs 11
This script yields the (Ours Clas + Arti) result in Table 5 of the main paper.
Upload the HO3Dv3 Codalab submission file to the HO3Dv3 Codalab server and wait for the evaluation to finish.
Download the model checkpoint: 🔗 artiboost_ho3dv3_clasbased_sym_200e.pth.tar to ./checkpoints, then run:
$ python train/submit_reload.py --cfg config_eval/eval_ho3dv3_clasbased_sym_artiboost.yaml \
--gpu_id 0 --submit_dump --filter_unseen_obj_idxs 11
This script yields the (Ours Clas sym + Arti) result in Table 5 of the main paper.
Download the model checkpoint: 🔗 artiboost_dexycb_clasbased_sym_100e.pth.tar to ./checkpoints, then run:
$ python train/submit_reload.py --cfg config_eval/eval_dexycb_clasbased_sym_artiboost.yaml --gpu_id 0
This script yields the (Ours Clas sym + Arti) result in Table 4 of the main paper.
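If you find ArtiBoost useful in your research, please cite: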
@inproceedings{yang2021ArtiBoost,
title={{ArtiBoost}: Boosting Articulated 3D Hand-Object Pose Estimation via Online Exploration and Synthesis},
author={Yang, Lixin and Li, Kailin and Zhan, Xinyu and Lv, Jun and Xu, Wenqiang and Li, Jiefeng and Lu, Cewu},
booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2022}
}