This code base accompanies the repository for generating Co-Speech Gestures in Social Robots, which can be found here. The code in this repository is forked from here and adapted to better learn 3D representations of 2D frontal poses, including head pose.
The code is implemented in Python 3.7 and PyTorch 1.0.
- Create these folders in the repository root: 'data', 'models', and 'panoptic_dataset'.
- Download one or more samples from the CMU Panoptic Dataset.
- Place the 'hdPose3d' folders from the downloaded samples in the folder 'panoptic_dataset'.
- Run 'generate_dataset.py' to create a pickle file containing the preprocessed dataset.
- Run 'train.py'; hyperparameters can be set in the file.
- Visualize the results using 'test.py'.
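Since 'generate_dataset.py' writes the preprocessed dataset as a pickle file, it can be inspected before training. The snippet below is a minimal sketch of reading one back; the file path and the dictionary layout shown here are hypothetical stand-ins, so check the actual output of 'generate_dataset.py' for its real structure.

```python
import pickle

# Write a small stand-in pickle so this sketch is self-contained;
# in practice, generate_dataset.py produces the real file.
sample = {"poses": [[0.0] * 57], "frames": 1}  # hypothetical layout
with open("dataset.pickle", "wb") as f:
    pickle.dump(sample, f)

# Load the preprocessed dataset back for inspection.
with open("dataset.pickle", "rb") as f:
    data = pickle.load(f)

print(type(data), sorted(data.keys()))
```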
A model pre-trained on 44k samples can be found here. Place this model in the folder 'models', and make sure to change the checkpoint filename in 'test.py' accordingly.
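Loading the pre-trained model follows the usual PyTorch checkpoint pattern. The sketch below uses a stand-in `nn.Linear` network and a hypothetical filename 'models/pretrained.pt'; the real architecture and filename come from the repository's model code and the downloaded checkpoint.

```python
import os
import torch
import torch.nn as nn

# Stand-in network; the real architecture lives in the repo's model code.
net = nn.Linear(10, 3)

# Save a checkpoint so this sketch is self-contained; in practice you
# place the downloaded model in 'models/' instead.
os.makedirs("models", exist_ok=True)
torch.save(net.state_dict(), "models/pretrained.pt")  # hypothetical filename

# Load the checkpoint; in 'test.py', change the filename to match
# the downloaded model before running.
state = torch.load("models/pretrained.pt")
net.load_state_dict(state)
net.eval()  # inference mode for visualization
```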
Many thanks to Youngwoo Yoon for providing his code.