By Songyao Jiang, Bin Sun, Lichen Wang, Yue Bai, Kunpeng Li and Yun Fu.
Smile Lab @ Northeastern University
This repo contains the official code of Skeleton Aware Multi-modal Sign Language Recognition (SAM-SLR), which ranked 1st in the CVPR 2021 Challenge: Looking at People Large Scale Signer Independent Isolated Sign Language Recognition.
CVPR21 workshop paper / arXiv preprint / YouTube
Please cite our paper if you find this repo useful in your research.
[2021/10/14] The extended SAM-SLR-v2 paper is available on arXiv.
[2021/09/22] Processed skeleton data for AUTSL dataset is released here.
[2021/08/26] Results on SLR500 and WLASL2000 datasets are reported.
[2021/06/25] Workshop presentation will be available on YouTube.
[2021/04/10] Our workshop paper has been accepted. Citation info updated.
[2021/03/24] A preprint version of our paper is released here.
[2021/03/20] Our work has been verified and announced by the organizers as the 1st place winner of the challenge!
[2021/03/15] The code is released to the public on GitHub.
[2021/03/11] Our team (smilelab2021) ranked 1st in both tracks and here are the links to the leaderboards:
Download AUTSL Dataset.
We processed the dataset into six modalities in total: skeleton, skeleton features, RGB frames, flow color, HHA, and flow depth.
- Please put the original train, val and test videos in the data folder as follows:
data
├── train
│ ├── signer0_sample1_color.mp4
│ ├── signer0_sample1_depth.mp4
│ ├── signer0_sample2_color.mp4
│ ├── signer0_sample2_depth.mp4
│ └── ...
├── val
│ └── ...
└── test
└── ...
- Follow data-prepare/readme.md to process the data (a rough sketch of the RGB frame extraction is given after this list).
- Use TPose/data_process to extract whole-body pose features.
- Turkish and English meanings of the class IDs can be found here.
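The exact preprocessing steps are described in data-prepare/readme.md. Purely as an illustration of what the RGB frames modality looks like, the sketch below dumps frames from one AUTSL color video with OpenCV; the output layout, file naming and frame size are assumptions for this example, not the settings used by our scripts.

```python
import os
import cv2  # OpenCV is used throughout our preprocessing

def extract_frames(video_path, out_dir, size=(256, 256)):
    """Dump every frame of a *_color.mp4 video as JPEG images (illustration only)."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, size)  # assumed target size; see data-prepare/readme.md
        cv2.imwrite(os.path.join(out_dir, f"{idx:04d}.jpg"), frame)
        idx += 1
    cap.release()
    return idx

# Example for one training sample (paths are hypothetical):
# extract_frames("data/train/signer0_sample1_color.mp4", "data/train_frames/signer0_sample1")
```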
The code is written with Anaconda Python >= 3.6 and PyTorch 1.7, using OpenCV.
Detailed environment requirements can be found in requirement.txt in each code folder.
For convenience, we provide an Nvidia Docker image to run our code.
We provide pretrained models for all modalities to reproduce our submitted results. Please download them and put them into the corresponding folders.
To test our pretrained models, put them under the corresponding code folders and run the test code as instructed below. To ensemble the test results and reproduce our final submission, copy all the result .pkl files to ensemble/ and follow the instructions there to ensemble our final outputs.
For a step-by-step instruction, please see reproduce.md.
The skeleton modality can be trained, finetuned and tested using the code in the SL-GCN/ folder. Please follow the instructions in SL-GCN/readme.md to prepare the skeleton data into four streams (joint, bone, joint_motion, bone_motion).
Basic usage:
python main.py --config /path/to/config/file
To train, finetune and test our models, change the config path to the corresponding config file. Detailed instructions can be found in SL-GCN/readme.md.
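For intuition on the four streams: the bone stream holds vectors between connected keypoints, and the motion streams hold frame-to-frame differences. The sketch below illustrates this with NumPy; the edge list and array shapes are placeholders for this example, and the actual graph and preprocessing are defined in SL-GCN/readme.md and its data scripts.

```python
import numpy as np

def make_streams(joints, edges):
    """joints: (T, V, C) array of T frames, V keypoints, C coordinates.
    edges: list of (child, parent) keypoint index pairs (placeholder graph).
    Returns the four streams used by SL-GCN (illustrative only)."""
    bones = np.zeros_like(joints)
    for child, parent in edges:
        bones[:, child] = joints[:, child] - joints[:, parent]  # bone vector
    joint_motion = np.zeros_like(joints)
    joint_motion[:-1] = joints[1:] - joints[:-1]                # temporal difference
    bone_motion = np.zeros_like(bones)
    bone_motion[:-1] = bones[1:] - bones[:-1]
    return {"joint": joints, "bone": bones,
            "joint_motion": joint_motion, "bone_motion": bone_motion}

# Tiny example: 2 frames, 3 keypoints, 2D coordinates, toy skeleton
streams = make_streams(np.random.rand(2, 3, 2), edges=[(1, 0), (2, 1)])
```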
For the skeleton features, we propose a Separable Spatial-Temporal Convolution Network (SSTCN) to capture spatio-temporal information from them.
Please follow the instructions in SSTCN/readme.txt to prepare the data, then train and test the model.
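For readers unfamiliar with the idea, a separable spatial-temporal convolution factorizes a full 3D convolution into a spatial convolution followed by a temporal one. The PyTorch block below is a minimal sketch of that factorization only; it is not the actual SSTCN architecture (see SSTCN/ and the linked SSTCN repository for the real model).

```python
import torch
import torch.nn as nn

class SeparableSTConv(nn.Module):
    """Minimal sketch: a (1,k,k) spatial conv followed by a (k,1,1) temporal conv,
    instead of a single full (k,k,k) 3D convolution."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        pad = k // 2
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, k, k), padding=(0, pad, pad))
        self.temporal = nn.Conv3d(out_ch, out_ch, kernel_size=(k, 1, 1), padding=(pad, 0, 0))
        self.bn = nn.BatchNorm3d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (N, C, T, H, W)
        x = self.relu(self.spatial(x))
        return self.relu(self.bn(self.temporal(x)))

# e.g. a feature map of 8 frames at 28x28 resolution
y = SeparableSTConv(16, 32)(torch.randn(2, 16, 8, 28, 28))
```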
The RGB frames modality can be trained, finetuned and tested using the following commands in the Conv3D/ folder.
python Sign_Isolated_Conv3D_clip.py
python Sign_Isolated_Conv3D_clip_finetune.py
python Sign_Isolated_Conv3D_clip_test.py
Detailed instructions can be found in Conv3D/readme.md.
The RGB optical flow modality can be trained, finetuned and tested using the following commands in the Conv3D/ folder.
python Sign_Isolated_Conv3D_flow_clip.py
python Sign_Isolated_Conv3D_flow_clip_finetune.py
python Sign_Isolated_Conv3D_flow_clip_test.py
Detailed instructions can be found in Conv3D/readme.md.
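For readers unfamiliar with the "flow color" inputs: they are dense optical flow fields rendered as color images (direction encoded as hue, magnitude as brightness). The sketch below illustrates the idea with OpenCV's Farneback flow; it is an illustration only and not the exact flow method used to generate our submitted data (see data-prepare/readme.md).

```python
import cv2
import numpy as np

def flow_to_color(prev_bgr, next_bgr):
    """Compute dense optical flow between two frames and encode it as a color image.
    Illustration only, not our exact data-preparation pipeline."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros_like(prev_bgr)
    hsv[..., 0] = ang * 180 / np.pi / 2                      # hue: flow direction
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

# prev_bgr and next_bgr would be consecutive frames read with cv2.VideoCapture
```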
The Depth HHA modality can be trained, finetuned and tested using the following commands in the Conv3D/ folder.
python Sign_Isolated_Conv3D_hha_clip_mask.py
python Sign_Isolated_Conv3D_hha_clip_mask_finetune.py
python Sign_Isolated_Conv3D_hha_clip_mask_test.py
Detailed instructions can be found in Conv3D/readme.md.
The Depth Flow modality can be trained, finetuned and tested using the following commands in the Conv3D/ folder.
python Sign_Isolated_Conv3D_depth_flow_clip.py
python Sign_Isolated_Conv3D_depth_flow_clip_finetune.py
python Sign_Isolated_Conv3D_depth_flow_clip_test.py
Detailed instructions can be found in Conv3D/readme.md.
For both the RGB and RGBD tracks, the test results of all modalities need to be ensembled together to generate the final results.
- For the RGB track, we use the results from the skeleton, skeleton feature, RGB and flow color modalities to ensemble the final results.
a. Test the models using newly trained weights or the provided pretrained weights.
b. Copy all the test results to the ensemble/ folder and rename them with their modality names.
c. Ensemble the SL-GCN results from the joint, bone, joint_motion and bone_motion streams in gcn/:
python ensemble_wo_val.py; python ensemble_finetune.py
d. Copy test_gcn_w_val_finetune.pkl to ensemble/. Copy the RGB, TPose and optical flow results to ensemble/. Ensemble the final prediction:
python ensemble_multimodal_rgb.py
Final predictions are saved in predictions.csv
- For the RGBD track, we use the results from the skeleton, skeleton feature, RGB, flow color, HHA and flow depth modalities to ensemble the final results.
a. Copy the HHA and flow depth results to the ensemble/ folder, then run
python ensemble_multimodal_rgb.py
To reproduce our results in the CVPR21 Challenge, we provide the .pkl files needed to ensemble and obtain our final submitted predictions. Detailed instructions can be found in ensemble/readme.md.
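Conceptually, the ensemble scripts fuse the per-modality class scores saved in the .pkl files with a (possibly weighted) sum and take the argmax per sample. The snippet below is a simplified sketch under the assumption that each .pkl stores a {sample_name: class-score vector} mapping; the file names, weights and output format shown here are placeholders, and the real ones are given in ensemble/readme.md.

```python
import csv
import pickle
import numpy as np

def ensemble(pkl_paths, weights, out_csv="predictions.csv"):
    """Weighted score fusion over modalities (illustrative; real weights/files differ)."""
    per_modality = []
    for path in pkl_paths:
        with open(path, "rb") as f:
            per_modality.append(pickle.load(f))  # assumed: {sample_name: score vector}
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        for name in sorted(per_modality[0]):
            fused = sum(w * np.asarray(m[name]) for w, m in zip(weights, per_modality))
            writer.writerow([name, int(fused.argmax())])

# e.g. ensemble(["gcn.pkl", "sstcn.pkl", "rgb.pkl", "flow_color.pkl"], [1.0, 1.0, 1.0, 0.9])
```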
Licensed under the Creative Commons Zero v1.0 Universal license with the following exceptions:
- The code is released for academic research use only. Commercial use is prohibited.
- Published versions (changed or unchanged) must include a reference to the origin of the code.
If you find this project useful in your research, please cite our papers:
% SAM-SLR
@inproceedings{jiang2021skeleton,
title={Skeleton Aware Multi-modal Sign Language Recognition},
author={Jiang, Songyao and Sun, Bin and Wang, Lichen and Bai, Yue and Li, Kunpeng and Fu, Yun},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
year={2021}
}
% SAM-SLR-v2
@article{jiang2021sign,
title={Sign Language Recognition via Skeleton-Aware Multi-Model Ensemble},
author={Jiang, Songyao and Sun, Bin and Wang, Lichen and Bai, Yue and Li, Kunpeng and Fu, Yun},
journal={arXiv preprint arXiv:2110.06161},
year={2021}
}
https://github.com/Sun1992/SSTCN-for-SLR
https://github.com/jin-s13/COCO-WholeBody
https://github.com/open-mmlab/mmpose
https://github.com/kchengiva/DecoupleGCN-DropGraph