In this project we make use of Word Sense Disambiguation (Navigli, 2009) to tackle the Word-in-Context (WiC) disambiguation task, proposing two BERT-based models: the first follows a feature-based approach, while the second follows a fine-tuning approach in which we re-implement GlossBERT (Huang et al., 2019).
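Below is a minimal sketch of the GlossBERT-style fine-tuning setup, assuming the Hugging Face `transformers` library; the example sentence, gloss, and label are illustrative placeholders, not data from this project.

```python
# GlossBERT-style context-gloss pair classification (Huang et al., 2019):
# the target sentence and a candidate sense gloss are encoded as a sentence
# pair, and BERT is fine-tuned to predict whether the gloss matches the
# target word's meaning in context (1) or not (0).
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

context = "He sat on the bank of the river."               # hypothetical example
gloss = "bank: sloping land beside a body of water"        # hypothetical gloss
label = torch.tensor([1])                                  # 1 = gloss matches

# Builds [CLS] context [SEP] gloss [SEP] with the proper token type ids
inputs = tokenizer(context, gloss, return_tensors="pt", truncation=True)
outputs = model(**inputs, labels=label)
outputs.loss.backward()  # one fine-tuning step (optimizer omitted for brevity)
```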
For further information, you can read the detailed report or take a look at the presentation slides (pages 19-24).
This project was developed during the A.Y. 2020-2021 for the Natural Language Processing course @ Sapienza University of Rome.
- GlossBERT 15% SemCor (WSD: 60.10 | WiC: 68.04)
- Word-in-Context disambiguation as a binary classification task, experimenting with a word-level approach (MLP + ReLU) and a sequence-encoding one (LSTMs) on top of GloVe embeddings (see the first sketch after this list)
- Aspect-Based Sentiment Analysis (ABSA) using different setups based on two stacked BiLSTMs and attention layers, leveraging PoS, GloVe, and frozen BERT embeddings (see the second sketch after this list)
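A minimal PyTorch sketch of the word-level WiC baseline mentioned above: the GloVe vectors of the target word in the two contexts are concatenated and passed to an MLP with ReLU. The class name, layer sizes, and random stand-in vectors are assumptions, not the project's actual configuration.

```python
import torch
import torch.nn as nn

class WordLevelWiC(nn.Module):
    """Hypothetical word-level WiC classifier over GloVe embeddings."""

    def __init__(self, embedding_dim: int = 300, hidden_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * embedding_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # single logit: same sense or not
        )

    def forward(self, target_emb_1: torch.Tensor, target_emb_2: torch.Tensor) -> torch.Tensor:
        # target_emb_*: GloVe embedding of the target word in each sentence,
        # shape (batch, embedding_dim)
        return self.mlp(torch.cat([target_emb_1, target_emb_2], dim=-1))

# Usage with random stand-ins for GloVe vectors:
model = WordLevelWiC()
logits = model(torch.randn(4, 300), torch.randn(4, 300))
probs = torch.sigmoid(logits)  # probability that the two occurrences share a sense
```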
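And a minimal sketch in the spirit of the ABSA setup: two stacked BiLSTMs followed by additive self-attention over the hidden states. The input is assumed to be pre-computed token embeddings (e.g. GloVe or frozen BERT), and all names and sizes are illustrative rather than the project's actual architecture.

```python
import torch
import torch.nn as nn

class BiLSTMAttentionABSA(nn.Module):
    """Hypothetical ABSA encoder: 2 stacked BiLSTMs + attention pooling."""

    def __init__(self, input_dim: int = 300, hidden_dim: int = 128, num_classes: int = 4):
        super().__init__()
        # num_layers=2 gives the two stacked BiLSTMs
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.attention = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, seq_len, input_dim) pre-computed token embeddings
        hidden, _ = self.lstm(embeddings)                       # (batch, seq_len, 2*hidden_dim)
        weights = torch.softmax(self.attention(hidden), dim=1)  # attention over tokens
        pooled = (weights * hidden).sum(dim=1)                  # weighted sum of states
        return self.classifier(pooled)                          # sentiment logits

model = BiLSTMAttentionABSA()
logits = model(torch.randn(2, 10, 300))  # batch of 2 sentences, 10 tokens each
```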