Official implementation of the paper "MSAF: Multimodal Split Attention Fusion"
Updated Jun 16, 2021 - Python
Modality-Transferable-MER, a multimodal emotion recognition model with zero-shot and few-shot abilities.
This repo contains source code for the MultiModal Masking (M^3) Interspeech 2021 paper.