This is the official PyTorch implementation of *Cross Modality Knowledge Distillation Between A-Mode Ultrasound and Surface Electromyography*.
- Reproduced neural networks for comparison: Multi-stream CNN, EUNet, MKCNN, XceptionTime, and MINDS (ours).
The code is developed with Python 3.7 on Ubuntu 18.04. An NVIDIA GPU is required.
The complete hybrid sEMG/AUS dataset has not been released yet. We provide the collected sEMG/AUS data of one subject for code testing, which can be downloaded from Baidu Disk (code: h99k).
Your directory tree should look like this:
${ROOT}/data
├── EMG
│   ├── s1_***_EMG.txt
│   ├── s2_***_EMG.txt
│   ├── ...
│   └── s8_***_EMG.txt
└── US
    ├── s1_***_US.txt
    ├── s2_***_US.txt
    ├── ...
    └── s8_***_US.txt
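The per-file text format is not documented in this README, so as a minimal loading sketch only (assuming whitespace-delimited numeric matrices, one recording per file; `load_subject` is a hypothetical helper, not part of the repo):

```python
import glob
import os

import numpy as np

def load_subject(root, subject_id):
    """Hypothetical helper: load all EMG/US recordings of one subject.

    Assumes each *_EMG.txt / *_US.txt file is a whitespace-delimited
    numeric matrix that np.loadtxt can parse directly.
    """
    emg_files = sorted(glob.glob(os.path.join(root, "data", "EMG", f"s{subject_id}_*_EMG.txt")))
    us_files = sorted(glob.glob(os.path.join(root, "data", "US", f"s{subject_id}_*_US.txt")))
    return [np.loadtxt(f) for f in emg_files], [np.loadtxt(f) for f in us_files]

emg_trials, us_trials = load_subject("/path/to/ROOT", subject_id=1)
```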
- Clone this repo
- Install dependencies:
pip install -r requirements.txt
To train a network on a single sEMG or AUS modality, run the script tools/train.py, specifying the model and modality. For instance:
# train network MINDS on sEMG modality
python ./tools/train.py --config "./configs/USEMG_single.yaml" --modelName "MINDS" --modality "EMG"
# train network EUNet on US modality
python ./tools/train.py --config "./configs/USEMG_single.yaml" --modelName "EUNet" --modality "US"
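The flags shown above imply a command-line interface along these lines (a sketch only; the actual tools/train.py may expose further options via the YAML config):

```python
import argparse

# Sketch of the CLI implied by the examples above; hypothetical, not
# copied from tools/train.py.
parser = argparse.ArgumentParser(description="Single-modality training")
parser.add_argument("--config", required=True, help="path to a YAML config")
parser.add_argument("--modelName", required=True,
                    help="network to train, e.g. MINDS, EUNet, MKCNN, XceptionTime")
parser.add_argument("--modality", choices=["EMG", "US"], required=True,
                    help="input modality")
args = parser.parse_args()
```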
To validate a network on a single sEMG or AUS modality, run the script tools/test.py, specifying the model and modality. For instance:
# test network MINDS on sEMG modality
python ./tools/test.py --config "./configs/USEMG_single.yaml" --modelName "MINDS" --modality "EMG"
# test network EUNet on US modality
python ./tools/test.py --config "./configs/USEMG_single.yaml" --modelName "EUNet" --modality "US"
For cross-modality knowledge distillation, we take MKCNN(US)-distill-MKCNN(EMG) as an example.
# First, train the MKCNN network on the US modality to obtain the teacher network weights
python ./tools/train.py --config "./configs/USEMG_single.yaml" --modelName "MKCNN" --modality "US"
# Then use MKCNN(US) as the teacher to guide the training of the student network MKCNN(EMG)
python ./tools/train_cmkd.py --config "./configs/USEMG_cmkd.yaml" --model_us "MKCNN" --model_emg "MKCNN" --alpha 0.8 --T 20
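The `--alpha` and `--T` flags suggest the standard temperature-scaled distillation objective of Hinton et al. (2015); below is a minimal PyTorch sketch as an illustration, not necessarily the exact loss implemented in tools/train_cmkd.py:

```python
import torch
import torch.nn.functional as F

def cmkd_loss(student_logits, teacher_logits, labels, alpha=0.8, T=20.0):
    """Temperature-scaled knowledge-distillation loss (sketch).

    `alpha` weighs the soft teacher term against the hard cross-entropy
    term, and `T` softens both logit distributions before comparing them.
    """
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # T^2 keeps soft-target gradients on a comparable scale
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

With alpha = 0.8 and T = 20 as in the command above, the softened teacher term dominates the objective.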
| Model | sEMG (w.o. KD) | sEMG (w. KD) | $H_0$ ($p$-value) |
|---|---|---|---|
| Multi-stream CNN | 74.62 ± 6.68 | 75.48 ± 6.90 | 0 (0.0156) |
| EUNet | 79.59 ± 6.08 | 81.16 ± 6.11 | 0 (0.0078) |
| MKCNN | 82.69 ± 4.94 | 84.59 ± 5.36 | 0 (0.0078) |
| XceptionTime | 88.30 ± 4.60 | 89.06 ± 4.82 | 0 (0.0234) |
| MINDS (ours) | 89.05 ± 4.71 | 90.06 ± 4.52 | 0 (0.0078) |
Accuracy comparison (%) of the sEMG modality with knowledge distillation ("sEMG (w. KD)") and without it ("sEMG (w.o. KD)"). The Wilcoxon signed-rank test is applied to verify the significance of the improvement obtained by knowledge distillation; the null hypothesis is rejected ($H_0 = 0$) when $p < 0.05$.
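For reference, the test can be run with `scipy.stats.wilcoxon` on the per-subject paired accuracies. The numbers below are illustrative placeholders, not the actual results:

```python
from scipy.stats import wilcoxon

# Per-subject accuracies without / with KD -- illustrative placeholders,
# NOT the values behind the table above.
acc_without_kd = [0.830, 0.790, 0.850, 0.810, 0.880, 0.800, 0.840, 0.820]
acc_with_kd    = [0.840, 0.802, 0.865, 0.828, 0.900, 0.822, 0.865, 0.850]

stat, p = wilcoxon(acc_with_kd, acc_without_kd)
print(f"statistic = {stat}, p = {p:.4f}")  # H0 rejected when p < 0.05
```

With eight subjects (s1–s8) and an improvement for every subject, the exact two-sided p-value is 2/2^8 ≈ 0.0078, consistent with the value that recurs in the table.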
If you find this repository useful for your research, please cite:
@article{zeng2022cross,
title={Cross Modality Knowledge Distillation Between A-Mode Ultrasound and Surface Electromyography},
author={Zeng, Jia and Sheng, Yixuan and Yang, Yicheng and Zhou, Ziliang and Liu, Honghai},
journal={IEEE Transactions on Instrumentation and Measurement},
volume={71},
pages={1--9},
year={2022},
publisher={IEEE}
}
@inproceedings{zeng2020feature,
title={Feature fusion of sEMG and ultrasound signals in hand gesture recognition},
author={Zeng, Jia and Zhou, Yu and Yang, Yicheng and Wang, Jiaole and Liu, Honghai},
booktitle={2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC)},
pages={3911--3916},
year={2020},
organization={IEEE}
}
If you have any questions, feel free to contact me at jia.zeng@sjtu.edu.cn or via GitHub issues.