# Rank-Emotion-Cause

This repo contains the code for the following paper:

Effective Inter-Clause Modeling for End-to-End Emotion-Cause Pair Extraction. In Proc. of ACL 2020: The 58th Annual Meeting of the Association for Computational Linguistics, pages 3171–3181. [link]


## Results

Experimental results with two different data splits:

- The first split is 10-fold cross-validation (located in `data/split10/`), following NUSTM/ECPE.
- The second split randomly samples train/validation/test sets in an 8:1:1 proportion, repeated 20 times (located in `data/split20/`), following HLT-HITSZ/TransECPE.
| Split | Emotion-Cause Pair Extraction | Emotion Clause Extraction | Cause Clause Extraction |
| --- | --- | --- | --- |
| 10-fold CV | F=0.7360, P=0.7119, R=0.7630 | F=0.9057, P=0.9123, R=0.8999 | F=0.7615, P=0.7461, R=0.7788 |
| 8:1:1 (20 times) | F=0.6915, P=0.6575, R=0.7305 | F=0.8942, P=0.8936, R=0.8948 | F=0.7191, P=0.6940, R=0.7471 |
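The random 8:1:1 sampling described above can be sketched as follows. This is a minimal illustration, not the repo's actual split script; the function name and seed scheme are our assumptions:

```python
import random

def random_split(docs, seed, ratios=(8, 1, 1)):
    """Shuffle documents and cut them into train/val/test by the given ratios."""
    rng = random.Random(seed)          # fixed seed so each split is reproducible
    docs = list(docs)
    rng.shuffle(docs)
    total = sum(ratios)
    n_train = len(docs) * ratios[0] // total
    n_val = len(docs) * ratios[1] // total
    train = docs[:n_train]
    val = docs[n_train:n_train + n_val]
    test = docs[n_train + n_val:]
    return train, val, test

# Repeat with 20 different seeds to obtain the 20 splits.
splits = [random_split(range(100), seed=s) for s in range(20)]
```

Reported numbers for this setting would then be averaged over the 20 runs.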

## Requirements

With Anaconda, the environment can be created from the provided `environment.yml`:

```
conda env create --file environment.yml
conda activate EmoCau
```

The code has been tested on Ubuntu 16.04 with a single GPU. Multi-GPU training requires a little extra work; please refer to these examples: Hugging Face and CLUEbenchmark.
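One common starting point for multi-GPU training in PyTorch is `torch.nn.DataParallel`. The sketch below uses a placeholder module; it is not the RankCP model and is only meant to show the wrapping step:

```python
import torch

# Placeholder module standing in for the network built in src/main.py.
model = torch.nn.Linear(768, 2)

# Wrap in DataParallel only when more than one GPU is visible;
# on a single GPU (or CPU) the model is left unchanged.
if torch.cuda.device_count() > 1:
    model = torch.nn.DataParallel(model).cuda()
```

The linked Hugging Face and CLUEbenchmark examples cover the remaining details, such as handling batch splitting and unwrapping the model when saving checkpoints.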


## Quick Start

1. Clone or download this repo.

2. Download the pretrained "BERT-Base, Chinese" model from this link, then put the model file `pytorch_model.bin` into the folder `src/bert-base-chinese`.

3. Run our model RankCP:

```
python src/main.py
```