From aa3fe25bda341eee04ac5d505a84a6bf8f271fa2 Mon Sep 17 00:00:00 2001
From: acl-pwc-bot <94475230+acl-pwc-bot@users.noreply.github.com>
Date: Thu, 13 Jul 2023 03:46:39 +0200
Subject: [PATCH 1/2] Update metadata from Papers with Code
---
data/xml/2020.aacl.xml | 2 +-
data/xml/2020.emnlp.xml | 3 ++-
data/xml/2020.findings.xml | 1 +
data/xml/2021.acl.xml | 1 +
data/xml/2021.findings.xml | 1 +
data/xml/2021.naacl.xml | 1 +
data/xml/2022.findings.xml | 1 +
data/xml/2022.lrec.xml | 2 +-
data/xml/2022.naacl.xml | 1 +
data/xml/2022.sdp.xml | 1 +
data/xml/2023.acl.xml | 3 +++
data/xml/D18.xml | 1 +
data/xml/D19.xml | 1 +
data/xml/N19.xml | 2 ++
data/xml/P19.xml | 1 +
data/xml/W19.xml | 3 ++-
16 files changed, 21 insertions(+), 4 deletions(-)
diff --git a/data/xml/2020.aacl.xml b/data/xml/2020.aacl.xml
index 395dc1fc60..e1d3800ffe 100644
--- a/data/xml/2020.aacl.xml
+++ b/data/xml/2020.aacl.xml
@@ -1549,7 +1549,7 @@
We introduce fairseq S2T, a fairseq extension for speech-to-text (S2T) modeling tasks such as end-to-end speech recognition and speech-to-text translation. It follows fairseq’s careful design for scalability and extensibility. We provide end-to-end workflows from data pre-processing, model training to offline (online) inference. We implement state-of-the-art RNN-based as well as Transformer-based models and open-source detailed training recipes. Fairseq’s machine translation models and language models can be seamlessly integrated into S2T workflows for multi-task learning or transfer learning. Fairseq S2T is available at https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text.
2020.aacl-demo.6
wang-etal-2020-fairseq
- pytorch/fairseq
+ pytorch/fairseq
LibriSpeech
MuST-C
diff --git a/data/xml/2020.emnlp.xml b/data/xml/2020.emnlp.xml
index cdffea717b..8f1296d816 100644
--- a/data/xml/2020.emnlp.xml
+++ b/data/xml/2020.emnlp.xml
@@ -5911,6 +5911,7 @@
10.18653/v1/2020.emnlp-main.392
drozdov-etal-2020-unsupervised
+ PTB Diagnostic ECG Database
Penn Treebank
@@ -7513,7 +7514,7 @@
10.18653/v1/2020.emnlp-main.498
garg-ramakrishnan-2020-bae
- QData/TextAttack
+ QData/TextAttack
IMDB-BINARY
MPQA Opinion Corpus
MR
diff --git a/data/xml/2020.findings.xml b/data/xml/2020.findings.xml
index eec3d161d5..44b0f0e556 100644
--- a/data/xml/2020.findings.xml
+++ b/data/xml/2020.findings.xml
@@ -4223,6 +4223,7 @@
2020.findings-emnlp.285
10.18653/v1/2020.findings-emnlp.285
wang-etal-2020-integrating-task
+ raywangwr/bert_label_embedding
Efficient Transformer-based Large Scale Language Representations using Hardware-friendly Block Structured Pruning
diff --git a/data/xml/2021.acl.xml b/data/xml/2021.acl.xml
index f9a3100b3f..45571b378a 100644
--- a/data/xml/2021.acl.xml
+++ b/data/xml/2021.acl.xml
@@ -3401,6 +3401,7 @@
yang-etal-2021-neural
sustcsonglin/TN-PCFG
+ PTB Diagnostic ECG Database
Penn Treebank
diff --git a/data/xml/2021.findings.xml b/data/xml/2021.findings.xml
index 2c42f8f71d..5b75c0daa0 100644
--- a/data/xml/2021.findings.xml
+++ b/data/xml/2021.findings.xml
@@ -4833,6 +4833,7 @@
10.18653/v1/2021.findings-acl.342
merkhofer-etal-2021-perceptual
+ mitre/hpmet
Scaling Within Document Coreference to Long Texts
diff --git a/data/xml/2021.naacl.xml b/data/xml/2021.naacl.xml
index 0efbeabef0..9d754fb9ea 100644
--- a/data/xml/2021.naacl.xml
+++ b/data/xml/2021.naacl.xml
@@ -1830,6 +1830,7 @@
yang-etal-2021-pcfgs
sustcsonglin/TN-PCFG
+ PTB Diagnostic ECG Database
Penn Treebank
diff --git a/data/xml/2022.findings.xml b/data/xml/2022.findings.xml
index b6cdd66ce2..19b16a859b 100644
--- a/data/xml/2022.findings.xml
+++ b/data/xml/2022.findings.xml
@@ -1590,6 +1590,7 @@
Nickil21/weakly-supervised-parsing
Chinese Treebank
+ PTB Diagnostic ECG Database
Penn Treebank
diff --git a/data/xml/2022.lrec.xml b/data/xml/2022.lrec.xml
index 513b27c29d..f31fab36a4 100644
--- a/data/xml/2022.lrec.xml
+++ b/data/xml/2022.lrec.xml
@@ -1066,6 +1066,7 @@
In this paper, we describe ParCorFull2.0, a parallel corpus annotated with full coreference chains for multiple languages, which is an extension of the existing corpus ParCorFull (Lapshinova-Koltunski et al., 2018). Similar to the previous version, this corpus has been created to address translation of coreference across languages, a phenomenon still challenging for machine translation (MT) and other multilingual natural language processing (NLP) applications. The current version of the corpus that we present here contains not only parallel texts for the language pair English-German, but also for English-French and English-Portuguese, which are all major European languages. The new language pairs belong to the Romance languages. The addition of a new language group creates a need of extension not only in terms of texts added, but also in terms of the annotation guidelines. Both French and Portuguese contain structures not found in English and German. Moreover, Portuguese is a pro-drop language bringing even more systemic differences in the realisation of coreference into our cross-lingual resources. These differences cause problems for multilingual coreference resolution and machine translation. Our parallel corpus with full annotation of coreference will be a valuable resource with a variety of uses not only for NLP applications, but also for contrastive linguists and researchers in translation studies.
2022.lrec-1.85
lapshinova-koltunski-etal-2022-parcorfull2
+ chardmeier/parcor-full
A Multi-Party Dialogue Ressource in French
@@ -8464,7 +8465,6 @@
The social NLP researchers and mental health practitioners have witnessed exponential growth in the field of mental health detection and analysis on social media. It has become important to identify the reason behind mental illness. In this context, we introduce a new dataset for Causal Analysis of Mental health in Social media posts (CAMS). We first introduce the annotation schema for this task of causal analysis. The causal analysis comprises of two types of annotations, viz, causal interpretation and causal categorization. We show the efficacy of our scheme in two ways: (i) crawling and annotating 3155 Reddit data and (ii) re-annotate the publicly available SDCNL dataset of 1896 instances for interpretable causal analysis. We further combine them as CAMS dataset and make it available along with the other source codes https://anonymous.4open.science/r/CAMS1/. Our experimental results show that the hybrid CNN-LSTM model gives the best performance over CAMS dataset.
2022.lrec-1.686
garg-etal-2022-cams
- drmuskangarg/cams
How Does the Experimental Setting Affect the Conclusions of Neural Encoding Models?
diff --git a/data/xml/2022.naacl.xml b/data/xml/2022.naacl.xml
index b331fd4055..22b5c628e3 100644
--- a/data/xml/2022.naacl.xml
+++ b/data/xml/2022.naacl.xml
@@ -5769,6 +5769,7 @@
10.18653/v1/2022.naacl-main.353
sustcsonglin/TN-PCFG
+ PTB Diagnostic ECG Database
Penn Treebank
diff --git a/data/xml/2022.sdp.xml b/data/xml/2022.sdp.xml
index 09a49fd79c..282da1eacf 100644
--- a/data/xml/2022.sdp.xml
+++ b/data/xml/2022.sdp.xml
@@ -224,6 +224,7 @@
We address the named entity omission - the drawback of many current abstractive text summarizers. We suggest a custom pretraining objective to enhance the model’s attention on the named entities in a text. At first, the named entity recognition model RoBERTa is trained to determine named entities in the text. After that this model is used to mask named entities in the text and the BART model is trained to reconstruct them. Next, BART model is fine-tuned on the summarization task. Our experiments showed that this pretraining approach drastically improves named entity inclusion precision and recall metrics.
2022.sdp-1.17
berezin-batura-2022-named
+ SciERC
Named Entity Recognition Based Automatic Generation of Research Highlights
diff --git a/data/xml/2023.acl.xml b/data/xml/2023.acl.xml
index 9ce0668169..840c9e275a 100644
--- a/data/xml/2023.acl.xml
+++ b/data/xml/2023.acl.xml
@@ -5104,6 +5104,9 @@
We introduce a dataset for evidence/rationale extraction on an extreme multi-label classification task over long medical documents. One such task is Computer-Assisted Coding (CAC) which has improved significantly in recent years, thanks to advances in machine learning technologies. Yet simply predicting a set of final codes for a patient encounter is insufficient as CAC systems are required to provide supporting textual evidence to justify the billing codes. A model able to produce accurate and reliable supporting evidence for each code would be a tremendous benefit. However, a human annotated code evidence corpus is extremely difficult to create because it requires specialized knowledge. In this paper, we introduce MDACE, the first publicly available code evidence dataset, which is built on a subset of the MIMIC-III clinical records. The dataset – annotated by professional medical coders – consists of 302 Inpatient charts with 3,934 evidence spans and 52 Profee charts with 5,563 evidence spans. We implemented several evidence extraction methods based on the EffectiveCAN model (Liu et al., 2021) to establish baseline performance on this dataset. MDACE can be used to evaluate code evidence extraction methods for CAC systems, as well as the accuracy and interpretability of deep learning models for multi-label classification. We believe that the release of MDACE will greatly improve the understanding and application of deep learning technologies for medical coding and document classification.
2023.acl-long.416
cheng-etal-2023-mdace
+ 3mcloud/MDACE
+ Evidence Inference
+ MIMIC-III
Towards Zero-Shot Multilingual Transfer for Code-Switched Responses
diff --git a/data/xml/D18.xml b/data/xml/D18.xml
index 221602f8f1..435d2fad35 100644
--- a/data/xml/D18.xml
+++ b/data/xml/D18.xml
@@ -2179,6 +2179,7 @@
10.18653/v1/D18-1160
he-etal-2018-unsupervised
jxhe/struct-learning-with-flow
+ PTB Diagnostic ECG Database
Penn Treebank
diff --git a/data/xml/D19.xml b/data/xml/D19.xml
index ebbe5639b5..da5316f79a 100644
--- a/data/xml/D19.xml
+++ b/data/xml/D19.xml
@@ -5092,6 +5092,7 @@
jiang-etal-2019-improved
jiangyingjunn/i-darts
CoNLL-2003
+ PTB Diagnostic ECG Database
Penn Treebank
diff --git a/data/xml/N19.xml b/data/xml/N19.xml
index 50bf847b51..6c7e9d4d3e 100644
--- a/data/xml/N19.xml
+++ b/data/xml/N19.xml
@@ -1580,6 +1580,7 @@
kim-etal-2019-unsupervised
harvardnlp/urnng
Billion Word Benchmark
+ PTB Diagnostic ECG Database
Penn Treebank
@@ -1614,6 +1615,7 @@
drozdov-etal-2019-unsupervised-latent
MultiNLI
+ PTB Diagnostic ECG Database
Penn Treebank
diff --git a/data/xml/P19.xml b/data/xml/P19.xml
index 130a39712e..f5f3970dbe 100644
--- a/data/xml/P19.xml
+++ b/data/xml/P19.xml
@@ -3276,6 +3276,7 @@
10.18653/v1/P19-1228
kim-etal-2019-compound
harvardnlp/compound-pcfg
+ PTB Diagnostic ECG Database
Penn Treebank
diff --git a/data/xml/W19.xml b/data/xml/W19.xml
index c5e337bfd7..4f68816532 100644
--- a/data/xml/W19.xml
+++ b/data/xml/W19.xml
@@ -2061,6 +2061,8 @@
10.18653/v1/W19-1803
pavlopoulos-etal-2019-survey
nlpaueb/bio_image_caption
+ IU X-Ray
+ Peir Gross
Revisiting Visual Grounding
@@ -14121,7 +14123,6 @@ One of the references was wrong therefore it is corrected to cite the appropriat
W19-5945
10.18653/v1/W19-5945
keizer-etal-2019-user
- skeizer/madrigal
Dialogue Act Classification in Team Communication for Robot Assisted Disaster Response
From d70a26a506149b44f376b479f48ce7454f114dd6 Mon Sep 17 00:00:00 2001
From: Matt Post
Date: Thu, 13 Jul 2023 07:04:34 -0400
Subject: [PATCH 2/2] ACL 2023 misc corrections and workshops (#2621)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* Added ACL 2023 short front matter
* Changes to ingest_aclpub2.py:
* more robust path handling
* restored attachments
* more robust handling of attachments and YAML files
* Fixed IWSLT affiliations; added missing attachment
* Added many (but not all) ACL 2023 workshops
---------
Co-authored-by: Arne Köhn
---
bin/ingest_aclpub2.py | 230 +++++++----
data/xml/2023.acl.xml | 12 +
data/xml/2023.americasnlp.xml | 276 +++++++++++++
data/xml/2023.cawl.xml | 136 +++++++
data/xml/2023.clinicalnlp.xml | 684 +++++++++++++++++++++++++++++++++
data/xml/2023.dialdoc.xml | 170 ++++++++
data/xml/2023.iwslt.xml | 119 +++---
data/xml/2023.nlrse.xml | 149 +++++++
data/xml/2023.repl4nlp.xml | 307 +++++++++++++++
data/xml/2023.semeval.xml | 4 +
data/xml/2023.sicon.xml | 101 +++++
data/xml/2023.sigmorphon.xml | 306 +++++++++++++++
data/xml/2023.sustainlp.xml | 248 ++++++++++++
data/xml/2023.ws.xml | 8 +
data/yaml/sigs/sigmorphon.yaml | 2 +
data/yaml/venues/cawl.yaml | 2 +
data/yaml/venues/nlrse.yaml | 3 +
data/yaml/venues/sicon.yaml | 2 +
18 files changed, 2626 insertions(+), 133 deletions(-)
create mode 100644 data/xml/2023.americasnlp.xml
create mode 100644 data/xml/2023.cawl.xml
create mode 100644 data/xml/2023.clinicalnlp.xml
create mode 100644 data/xml/2023.dialdoc.xml
create mode 100644 data/xml/2023.nlrse.xml
create mode 100644 data/xml/2023.repl4nlp.xml
create mode 100644 data/xml/2023.sicon.xml
create mode 100644 data/xml/2023.sigmorphon.xml
create mode 100644 data/xml/2023.sustainlp.xml
create mode 100644 data/yaml/venues/cawl.yaml
create mode 100644 data/yaml/venues/nlrse.yaml
create mode 100644 data/yaml/venues/sicon.yaml
diff --git a/bin/ingest_aclpub2.py b/bin/ingest_aclpub2.py
index c31776668b..d192c34808 100755
--- a/bin/ingest_aclpub2.py
+++ b/bin/ingest_aclpub2.py
@@ -46,6 +46,10 @@
#
# Check things over, then commit and push the changes and synchronize the files.
+# TODO:
+# - check for venue YAML, create/complain if non-existent
+# - add verification model to ensure format is correct
+
import click
import yaml
import re
@@ -142,12 +146,19 @@ def parse_conf_yaml(ingestion_dir: str) -> Dict[str, Any]:
cover_subtitle == shortbooktitle
'''
ingestion_dir = Path(ingestion_dir)
- if (ingestion_dir / 'conference_details.yml').exists():
- meta = yaml.safe_load((ingestion_dir / 'conference_details.yml').read_text())
+
+ paths_to_check = [
+ ingestion_dir / 'conference_details.yml',
+ ingestion_dir / 'inputs' / 'conference_details.yml',
+ ]
+ meta = None
+ for path in paths_to_check:
+ if path.exists():
+ meta = yaml.safe_load(path.read_text())
+ break
else:
- meta = yaml.safe_load(
- (ingestion_dir / 'inputs/conference_details.yml').read_text()
- )
+ raise Exception(f"Can't find conference_details.yml (looked in {paths_to_check})")
+
meta['month'] = meta['start_date'].strftime('%B')
meta['year'] = str(meta['start_date'].year)
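The `paths_to_check` / `for`/`else` fallback introduced in this hunk recurs throughout the patch; the `else` branch fires only when the loop finishes without `break`. A self-contained sketch of the pattern (hypothetical `find_first` helper, throwaway temp dir):

```python
from pathlib import Path
import tempfile

def find_first(candidates):
    """Return the first existing path; raise listing everything tried.

    Mirrors the patch's for/else idiom: the else clause runs only if
    the loop completes without a break (i.e. nothing existed).
    """
    hit = None
    for path in candidates:
        if path.exists():
            hit = path
            break
    else:
        raise FileNotFoundError(f"none of {[str(p) for p in candidates]} exist")
    return hit

# demo: only the aclpub2-style inputs/ layout is present
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "inputs").mkdir()
    (root / "inputs" / "conference_details.yml").write_text("year: 2023\n")
    meta_path = find_first([
        root / "conference_details.yml",             # flat layout
        root / "inputs" / "conference_details.yml",  # inputs/ layout
    ])
    print(meta_path.name)  # conference_details.yml
```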
@@ -175,12 +186,26 @@ def parse_conf_yaml(ingestion_dir: str) -> Dict[str, Any]:
def parse_paper_yaml(ingestion_dir: str) -> List[Dict[str, str]]:
+ """
+ Reads papers.yml to get metadata. Skips non-archival papers.
+ """
ingestion_dir = Path(ingestion_dir)
- if (ingestion_dir / 'conference_details.yml').exists():
- papers = yaml.safe_load((ingestion_dir / 'papers.yml').read_text())
+ paths_to_check = [
+ ingestion_dir / 'papers.yml',
+ ingestion_dir / 'inputs' / 'papers.yml',
+ ]
+ papers = None
+ for path in paths_to_check:
+ if path.exists():
+ papers = yaml.safe_load(path.read_text())
+ break
else:
- papers = yaml.safe_load((ingestion_dir / 'input/papers.yml').read_text())
+ raise Exception("Can't find papers.yml (looked in root dir and under inputs/)")
+
+ # remove non-archival papers
+ papers = [p for p in papers if p.get('archival', True) in (True, 1, '1')]
+
return papers
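The archival filter has to cope with the mixed encodings the old inline check accepted (`1`, `True`, `'1'`), and a bare truthiness test would wrongly keep the string `'0'`. A minimal stand-alone sketch (hypothetical `archival_only` helper):

```python
def archival_only(papers):
    """Keep papers marked archival; a missing key means archival.

    Matches the value set the pre-patch code accepted explicitly:
    1, True, or '1'. Anything else (0, False, '0') is dropped.
    """
    return [p for p in papers if p.get('archival', True) in (True, 1, '1')]

papers = [
    {'id': 1},                     # no key -> treated as archival
    {'id': 2, 'archival': '1'},
    {'id': 3, 'archival': '0'},    # string '0' is truthy, so a plain
    {'id': 4, 'archival': False},  # `if p.get('archival')` would keep it
]
kept = [p['id'] for p in archival_only(papers)]
print(kept)  # [1, 2]
```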
@@ -194,42 +219,42 @@ def add_paper_nums_in_paper_yaml(
start, end = 1, 0
for paper in papers:
- if 'archival' not in paper.keys():
- paper.update({'archival': '1'})
- assert 'archival' in paper.keys(), f'{paper["id"]} is missing key archival'
assert 'file' in paper.keys(), f'{paper["id"]} is missing key file'
- if (
- paper['archival'] == 1
- or paper['archival'] is True
- or paper['archival'] == '1'
- ):
- paper_id = str(paper['id'])
- # if 'file' not in paper.keys():
- # print(f'{paper_id} does not have file key but archive is {paper["archival"]}')
- # paper_name = paper['title']
- # else:
- paper_path = paper['file']
- paper_need_read_path = None
- # TODO: we should just be able to read paper_path directly, and throw an
- # error if it doesn't exist
- if (path := ingestion_dir / "watermarked_pdfs" / paper_path).exists():
- paper_need_read_path = str(path)
- elif (
- path := ingestion_dir / "watermarked_pdfs" / f"{paper_id}.pdf"
- ).exists():
+ paper_id = str(paper['id'])
+ # if 'file' not in paper.keys():
+ # print(f'{paper_id} does not have file key but archive is {paper["archival"]}')
+ # paper_name = paper['title']
+ # else:
+
+ paper_path = paper['file']
+
+ # TODO: we should just be able to read paper_path directly, and throw an
+ # error if it doesn't exist
+ paper_need_read_path = None
+ paths_to_check = [
+ ingestion_dir / "watermarked_pdfs" / paper_path,
+ ingestion_dir / "watermarked_pdfs" / f"{paper_id}.pdf",
+ ingestion_dir / "build" / "watermarked_pdfs" / paper_path,
+ ingestion_dir / "build" / "watermarked_pdfs" / f"{paper_id}.pdf",
+ ]
+ for path in paths_to_check:
+ if path.exists():
paper_need_read_path = str(path)
+ break
+ else:
+ raise Exception(
+ f"* Fatal: could not find paper ID {paper_id} ({paths_to_check})"
+ )
- assert (
- paper_need_read_path is not None
- ), f"* Fatal: could not find {paper_id} (path was {paper_path}, {path})"
+ pdf = open(paper_need_read_path, 'rb')
+ pdf_reader = PyPDF2.PdfReader(pdf)
+ num_of_pages = len(pdf_reader.pages)
+ start = end + 1
+ end = start + num_of_pages - 1
+ paper['pages'] = f'{start}-{end}'
- pdf = open(paper_need_read_path, 'rb')
- pdf_reader = PyPDF2.PdfReader(pdf)
- num_of_pages = len(pdf_reader.pages)
- start = end + 1
- end = start + num_of_pages - 1
- paper['pages'] = f'{start}-{end}'
return papers
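The page-numbering logic above carries `start`/`end` across papers so each paper's range begins where the previous one ended. A sketch of just that arithmetic, with the PyPDF2 page counts stubbed out as an input dict (hypothetical `assign_pages` helper):

```python
def assign_pages(papers, page_counts):
    """Assign running 'start-end' proceedings page ranges.

    page_counts maps paper id -> number of pages; in the real script
    this comes from len(PyPDF2.PdfReader(fh).pages).
    """
    start, end = 1, 0
    for paper in papers:
        n = page_counts[paper['id']]
        start = end + 1
        end = start + n - 1
        paper['pages'] = f'{start}-{end}'
    return papers

papers = assign_pages(
    [{'id': 'a'}, {'id': 'b'}, {'id': 'c'}],
    {'a': 5, 'b': 1, 'c': 12},
)
print([p['pages'] for p in papers])  # ['1-5', '6-6', '7-18']
```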
@@ -342,6 +367,7 @@ def paper2xml(
'semantic_scholar_id',
'username']
'''
+
fields = [
'title',
'author',
@@ -351,7 +377,7 @@ def paper2xml(
'doi',
'language',
]
- paper = make_simple_element('paper', attrib={'id': str(paper_num)})
+ paper = make_simple_element('paper', attrib={"id": str(paper_num)})
for field in fields:
if field == 'author':
authors = paper_item['authors']
@@ -372,15 +398,19 @@ def paper2xml(
if field == 'url':
value = f'{anthology_id}'
elif field == 'abstract':
- value = paper_item['abstract'].replace('\n', '')
+ value = None
+ if "abstract" in paper_item:
+ value = paper_item["abstract"].replace('\n', '')
elif field == 'title':
value = paper_item[field]
elif field == 'pages':
value = paper_item[field]
else:
continue
+
try:
- make_simple_element(field, text=value, parent=paper)
+ if value is not None:
+ make_simple_element(field, text=value, parent=paper)
except Exception:
print(
f"Couldn't process {paper} for {anthology_id}, please check the abstract in the papers.yaml file for this paper",
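The hunk above makes optional fields (notably `abstract`) safe by emitting an element only when a value exists. A stdlib stand-in for the anthology's `make_simple_element` helper, showing the skip-if-None behavior (hypothetical `paper_to_xml` name):

```python
import xml.etree.ElementTree as ET

def paper_to_xml(paper_num, item):
    """Build a <paper> element, omitting fields whose value is None."""
    paper = ET.Element('paper', attrib={'id': str(paper_num)})
    for field in ('title', 'abstract', 'pages'):
        value = item.get(field)
        if field == 'abstract' and value is not None:
            value = value.replace('\n', '')  # abstracts may span lines
        if value is not None:
            ET.SubElement(paper, field).text = value
    return paper

# a paper with no abstract key: no empty <abstract/> is emitted
el = paper_to_xml(3, {'title': 'A Paper', 'pages': '1-9'})
print(ET.tostring(el, encoding='unicode'))
```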
@@ -450,16 +480,39 @@ def copy_pdf_and_attachment(
venue_name = meta['anthology_venue_id'].lower()
volume_name = meta['volume_name'].lower()
- pdfs_dest_dir = create_dest_path(pdfs_dir, venue_name)
- pdfs_src_dir = os.path.join(meta['path'], 'watermarked_pdfs')
+ pdfs_src_dir = None
+ paths_to_check = [
+ Path(meta['path']) / 'watermarked_pdfs',
+ Path(meta['path']) / 'build' / 'watermarked_pdfs',
+ ]
+ for path in paths_to_check:
+ if path.exists() and path.is_dir():
+ pdfs_src_dir = path
+ break
+ else:
+ raise FileNotFoundError(f"Could not find watermarked PDFs in {paths_to_check}")
+
+ pdfs_dest_dir = Path(create_dest_path(pdfs_dir, venue_name))
# copy proceedings.pdf
- proceedings_pdf_src_path = os.path.join(meta['path'], 'proceedings.pdf')
- proceedings_pdf_dest_path = None
- if os.path.exists(proceedings_pdf_src_path):
- proceedings_pdf_dest_path = (
- os.path.join(pdfs_dest_dir, f"{collection_id}-{volume_name}") + ".pdf"
+ proceedings_pdf_src_path = None
+ paths_to_check = [
+ Path(meta['path']) / 'proceedings.pdf',
+ Path(meta['path']) / 'build' / 'proceedings.pdf',
+ ]
+ for path in paths_to_check:
+ if path.exists():
+ proceedings_pdf_src_path = str(path)
+ break
+ else:
+ print(
+ f"Warning: could not find proceedings.pdf in {paths_to_check}",
+ file=sys.stderr,
)
+
+ proceedings_pdf_dest_path = None
+ if proceedings_pdf_src_path is not None:
+ proceedings_pdf_dest_path = pdfs_dest_dir / f"{collection_id}-{volume_name}.pdf"
if dry_run:
print(
f'would\'ve moved {proceedings_pdf_src_path} to {proceedings_pdf_dest_path}'
@@ -476,11 +529,24 @@ def copy_pdf_and_attachment(
"attachments": [],
}
- frontmatter_src_path = 'front_matter.pdf'
- if os.path.exists(frontmatter_src_path):
- frontmatter_dest_path = (
- os.path.join(pdfs_dest_dir, f"{collection_id}-{volume_name}") + '.0.pdf'
+ frontmatter_src_path = None
+ paths_to_check = [
+ Path(meta['path']) / 'front_matter.pdf',
+ Path(meta['path']) / '0.pdf',
+ Path(meta['path']) / 'build' / 'front_matter.pdf',
+ Path(meta['path']) / 'build' / '0.pdf',
+ ]
+ for path in paths_to_check:
+ if path.exists():
+ frontmatter_src_path = str(path)
+ break
+ else:
+ print(
+ f"Warning: could not find front matter in {paths_to_check}", file=sys.stderr
)
+
+ if frontmatter_src_path is not None:
+ frontmatter_dest_path = pdfs_dest_dir / f"{collection_id}-{volume_name}.0.pdf"
if dry_run:
print(f'would\'ve moved {frontmatter_src_path} to {frontmatter_dest_path}')
if not dry_run:
@@ -489,6 +555,7 @@ def copy_pdf_and_attachment(
# create the PDF entry so that we'll get
volume[0]['pdf'] = frontmatter_dest_path
+ paper_num = 0
for i, paper in enumerate(papers):
# archival papers only
if 'archival' not in paper.keys():
@@ -509,23 +576,21 @@ def copy_pdf_and_attachment(
# paper_name = paper['file']
if paper_name != '' or paper_name is not None:
paper_id = str(paper['id'])
- paper_num = i + 1
+ paper_num += 1
paper_id_full = f'{collection_id}-{volume_name}.{paper_num}'
pdf_src_path = None
- if os.path.exists(os.path.join(pdfs_src_dir, paper_name)):
- pdf_src_path = os.path.join(pdfs_src_dir, paper_name)
- elif os.path.exists(os.path.join(pdfs_src_dir, f'{paper_id}.pdf')):
- pdf_src_path = os.path.join(pdfs_src_dir, f'{paper_id}.pdf')
+ if (pdfs_src_dir / paper_name).exists():
+ pdf_src_path = pdfs_src_dir / paper_name
+ elif (pdfs_src_dir / f'{paper_id}.pdf').exists():
+ pdf_src_path = pdfs_src_dir / f'{paper_id}.pdf'
assert (
pdf_src_path
- ), f"Couldn't find {paper_name}/{paper_id} in {pdfs_src_dir}"
- pdf_dest_path = os.path.join(
- pdfs_dest_dir, f"{collection_id}-{volume_name}.{paper_num}.pdf"
- )
+ ), f"Couldn't find {paper_name} or {paper_id} in {pdfs_src_dir}"
+ pdf_dest_path = pdfs_dest_dir / f"{paper_id_full}.pdf"
if dry_run:
- print(f'would\'ve moved {pdf_src_path} to {pdf_dest_path}')
+ print(f"would've moved {pdf_src_path} to {pdf_dest_path}")
if not dry_run:
maybe_copy(pdf_src_path, pdf_dest_path)
@@ -536,22 +601,35 @@ def copy_pdf_and_attachment(
}
# copy attachments
- # TODO: skipping attachments because full of non-publishable stuff
- if False and 'attachments' in paper:
+ if 'attachments' in paper:
attachs_dest_dir = create_dest_path(attachments_dir, venue_name)
- attachs_src_dir = os.path.join(meta['path'], 'attachments')
- assert os.path.exists(
- attachs_src_dir
- ), f'paper {i, paper_name} contains attachments but attachments folder was not found'
+ attachs_src_dir = Path(meta['path']) / 'attachments'
+ # assert (
+ # attachs_src_dir.exists()
+ # ), f'paper {i, paper_name} contains attachments but attachments folder was not found'
for attachment in paper['attachments']:
- print("ATTACH", paper_id_full, attachment)
- file_path = attachment.get('file', None)
+ file_path = Path(attachment['file']) if attachment.get('file') else None
if file_path is None:
continue
- attach_src_path = attachs_src_dir + '/' + file_path
- attach_src_extension = attach_src_path.split(".")[-1]
+ attach_src_path = None
+ paths_to_check = [
+ attachs_src_dir / file_path,
+ attachs_src_dir / file_path.name,
+ ]
+ for path in paths_to_check:
+ if path.exists():
+ attach_src_path = str(path)
+ break
+ else:
+ print(
+ f"Warning: paper {paper_id} attachment {file_path} not found, skipping",
+ file=sys.stderr,
+ )
+ continue
+
+ attach_src_extension = attach_src_path.split(".")[-1]
type_ = attachment['type'].replace(" ", "")
file_name = f'{collection_id}-{volume_name}.{paper_num}.{type_}.{attach_src_extension}'
@@ -567,6 +645,7 @@ def copy_pdf_and_attachment(
)
else:
maybe_copy(attach_src_path, attach_dest_path)
+ print(f"Attaching {attach_dest_path}/{type_} to {paper_num}")
volume[paper_num]['attachments'].append(
(attach_dest_path, type_)
)
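The attachment-copying hunk derives the destination filename from the collection, volume, paper number, attachment type (with spaces stripped), and source extension. A sketch of that naming scheme in isolation (hypothetical `attachment_dest_name` helper; `Path.suffix` used in place of `split(".")[-1]`):

```python
from pathlib import Path

def attachment_dest_name(collection_id, volume_name, paper_num, type_, src):
    """Destination name: <collection>-<volume>.<num>.<Type>.<ext>"""
    ext = Path(src).suffix.lstrip('.')
    type_ = type_.replace(' ', '')  # e.g. 'Dataset ' -> 'Dataset'
    return f'{collection_id}-{volume_name}.{paper_num}.{type_}.{ext}'

name = attachment_dest_name('2023.acl', 'long', 416, 'Software', 'code.zip')
print(name)  # 2023.acl-long.416.Software.zip
```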
@@ -767,10 +846,13 @@ def main(ingestion_dir, pdfs_dir, attachments_dir, dry_run, anthology_dir, inges
volume_full_id, meta = process_proceeding(
ingestion_dir, anthology_datadir, venue_index, venue_keys
)
+
+ # Load the papers.yaml file, skipping non-archival papers
papers = parse_paper_yaml(ingestion_dir)
# print(f'original paper {papers[0]}')
+
+ # add page numbering by parsing the PDFs
papers = add_paper_nums_in_paper_yaml(papers, ingestion_dir)
- # print(f'updated paper {papers[0]}')
(
volume,
diff --git a/data/xml/2023.acl.xml b/data/xml/2023.acl.xml
index 840c9e275a..4cdd00adec 100644
--- a/data/xml/2023.acl.xml
+++ b/data/xml/2023.acl.xml
@@ -11273,6 +11273,10 @@
2023
acl
+
+ 2023.acl-short.0
+ acl-2023-short-frontmatter
+
Should you marginalize over possible tokenizations?
NadezhdaChirkovaNaver Labs Europe
@@ -15383,13 +15387,21 @@
2023.findings-acl
+ 2023.americasnlp-1
2023.bea-1
2023.bionlp-1
+ 2023.cawl-1
+ 2023.clinicalnlp-1
2023.codi-1
+ 2023.dialdoc-1
2023.disrpt-1
2023.iwslt-1
2023.law-1
+ 2023.nlrse-1
+ 2023.repl4nlp-1
+ 2023.sicon-1
2023.semeval-1
+ 2023.sustainlp-1
2023.woah-1
2023.wnu-1
diff --git a/data/xml/2023.americasnlp.xml b/data/xml/2023.americasnlp.xml
new file mode 100644
index 0000000000..d6e8fa468d
--- /dev/null
+++ b/data/xml/2023.americasnlp.xml
@@ -0,0 +1,276 @@
+
+
+
+
+ Proceedings of the Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP)
+ ManuelMager
+ AbteenEbrahimi
+ ArturoOncevay
+ EnoraRice
+ ShrutiRijhwani
+ AlexisPalmer
+ KatharinaKann
+ Association for Computational Linguistics
+ Toronto, Canada
+ July
+ 2023
+ 2023.americasnlp-1
+ americasnlp
+
+
+ 2023.americasnlp-1.0
+ americasnlp-2023-natural
+
+
+ Use of NLP in the Context of Belief states of Ethnic Minorities in Latin America
+ OlgaKellertUniversity of Göttingen
+ MahmudZamanUniversity of Göttingen
+ 1-5
+ The major goal of our study is to test methods in NLP in the domain of health care education related to Covid-19 of vulnerable groups such as indigenous people from Latin America. In order to achieve this goal, we asked participants in a survey questionnaire to provide answers about health related topics. We used these answers to measure the health education status of our participants. In this paper, we summarize the results from our NLP-application on the participants’ answers. In the first experiment, we use embeddings-based tools to measure the semantic similarity between participants’ answers and “expert” or “reference” answers. In the second experiment, we use synonym-based methods to classify answers under topics. We compare the results from both experiments with human annotations. Our results show that the tested NLP-methods reach a significantly lower accuracy score than human annotations in both experiments. We explain this difference by the assumption that human annotators are much better in pragmatic inferencing necessary to classify the semantic similarity and topic classification of answers.
+ 2023.americasnlp-1.1
+ kellert-zaman-2023-use
+
+
+ Neural Machine Translation through Active Learning on low-resource languages: The case of Spanish to Mapudungun
+ BegoñaPendasPontificia Universidad Catolica de Chile
+ AndresCarvalloCENIA
+ CarlosAspillagaCENIA
+ 6-11
+ Active learning is an algorithmic approach that strategically selects a subset of examples for labeling, with the goal of reducing workload and required resources. Previous research has applied active learning to Neural Machine Translation (NMT) for high-resource or well-represented languages, achieving significant reductions in manual labor. In this study, we explore the application of active learning for NMT in the context of Mapudungun, a low-resource language spoken by the Mapuche community in South America. Mapudungun was chosen due to the limited number of fluent speakers and the pressing need to provide access to content predominantly available in widely represented languages. We assess both model-dependent and model-agnostic active learning strategies for NMT between Spanish and Mapudungun in both directions, demonstrating that we can achieve over 40% reduction in manual translation workload in both cases.
+ 2023.americasnlp-1.2
+ pendas-etal-2023-neural
+
+
+ Understanding Native Language Identification for Brazilian Indigenous Languages
+ PauloCavalinIBM Research - Brazil
+ PedroDominguesIBM Research Brazil
+ JulioNogimaIBM Research - Brazil
+ ClaudioPinhanezIBM Research
+ 12-18
+ We investigate native language identification (LangID) for Brazilian Indigenous Languages (BILs), using the Bible as training data. Our research extends from previous work, by presenting two analyses on the generalization of Bible-based LangID in non-biblical data. First, with newly collected non-biblical datasets, we show that such a LangID can still provide quite reasonable accuracy in languages for which there are more established writing standards, such as Guarani Mbya and Kaingang, but there can be a quite drastic drop in accuracy depending on the language. Then, we applied the LangID on a large set of texts, about 13M sentences from the Portuguese Wikipedia, towards understanding the difficulty factors may come out of such task in practice. The main outcome is that the lack of handling other American indigenous languages can affect considerably the precision for BILs, suggesting the need of a joint effort with related languages from the Americas.
+ 2023.americasnlp-1.3
+ cavalin-etal-2023-understanding
+
+
+ Codex to corpus: Exploring annotation and processing for an open and extensible machine-readable edition of the Florentine Codex
+ FrancisTyersIndiana University
+ RobertPughIndiana University
+ ValeryBerthoud F.Humboldt-Universität zu Berlin
+ 19-29
+ This paper describes an ongoing effort to create, from the original hand-written text, a machine-readable, linguistically-annotated, and easily-searchable corpus of the Nahuatl portion of the Florentine Codex, a 16th century Mesoamerican manuscript written in Nahuatl and Spanish. The Codex consists of 12 books and over 300,000 tokens. We describe the process of annotating 3 of these books, the steps of text preprocessing undertaken, our approach to efficient manual processing and annotation, and some of the challenges faced along the way. We also report on a set of experiments evaluating our ability to automate the text processing tasks to aid in the remaining annotation effort, and find the results promising despite the relatively low volume of training data. Finally, we briefly present a real use case from the humanities that would benefit from the searchable, linguistically annotated corpus we describe.
+ 2023.americasnlp-1.4
+ tyers-etal-2023-codex
+
+
+ Developing finite-state language technology for Maya
+ RobertPughIndiana University
+ FrancisTyersIndiana University
+ QuetzilCastañedaIndiana University
+ 30-39
+ We describe a suite of finite-state language technologies for Maya, a Mayan language spoken in Mexico. At the core is a computational model of Maya morphology and phonology using a finite-state transducer. This model results in a morphological analyzer and a morphologically-informed spell-checker. All of these technologies are designed for use as both a pedagogical reading/writing aid for L2 learners and as a general language processing tool capable of supporting much of the natural variation in written Maya. We discuss the relevant features of Maya morphosyntax and orthography, and then outline the implementation details of the analyzer. To conclude, we present a longer-term vision for these tools and their use by both native speakers and learners.
+ 2023.americasnlp-1.5
+ pugh-etal-2023-developing
+
+
+ Modelling the Reduplicating Lushootseed Morphology with an FST and LSTM
+ JackRueterUniversity of Helsinki, Digital Humanities
+ MikaHämäläinenRootroo Ltd
+ KhalidAlnajjarUniversity of Helsinki
+ 40-46
+ In this paper, we present an FST based approach for conducting morphological analysis, lemmatization and generation of Lushootseed words. Furthermore, we use the FST to generate training data for an LSTM based neural model and train this model to do morphological analysis. The neural model reaches a 71.9% accuracy on the test data. Furthermore, we discuss reduplication types in the Lushootseed language forms. The approach involves the use of both attested instances of reduplication and bare stems for applying a variety of reduplications to, as it is unclear just how much variation can be attributed to the individual speakers and authors of the source materials. That is, there may be areal factors that can be aligned with certain types of reduplication and their frequencies.
+ 2023.americasnlp-1.6
+ rueter-etal-2023-modelling
+
+
+ Fine-tuning Sentence-RoBERTa to Construct Word Embeddings for Low-resource Languages from Bilingual Dictionaries
+ DiegoBearUniversity of New Brunswick
+ PaulCookUniversity of New Brunswick
+ 47-57
+ Conventional approaches to learning word embeddings (Mikolov et al., 2013; Pennington et al., 2014) are limited to relatively few languages with sufficiently large training corpora. To address this limitation, we propose an alternative approach to deriving word embeddings for Wolastoqey and Mi’kmaq that leverages definitions from a bilingual dictionary. More specifically, following Bear and Cook (2022), we experiment with encoding English definitions of Wolastoqey and Mi’kmaq words into vector representations using English sequence representation models. For this, we consider using and finetuning sentence-RoBERTa models (Reimers and Gurevych, 2019). We evaluate our word embeddings using a similar methodology to that of Bear and Cook using evaluations based on word classification, clustering and reverse dictionary search. We additionally construct word embeddings for higher-resource languages (English, German and Spanish) using our methods and evaluate our embeddings on existing word-similarity datasets. Our findings indicate that our word embedding methods can be used to produce meaningful vector representations for low-resource languages such as Wolastoqey and Mi’kmaq and for higher-resource languages.
+ 2023.americasnlp-1.7
+ bear-cook-2023-fine
+
+
+ Identification of Dialect for Eastern and Southwestern Ojibwe Words Using a Small Corpus
+ KalvinHartwigUnaffiliated
+ EvanLucasMichigan Technological University
+ TimothyHavensMichigan Technological University
+ 58-66
+ The Ojibwe language has several dialects that vary to some degree in both spoken and written form. We present a method of using support vector machines to classify two different dialects (Eastern and Southwestern Ojibwe) using a very small corpus of text. Classification accuracy at the sentence level is 90% across a five-fold cross-validation and 72% when the sentence-trained model is applied to a data set of individual words. Our code and the word-level data set are released openly on GitHub at [link to be inserted for final version, working demonstration notebook uploaded with paper].
+ 2023.americasnlp-1.8
+ hartwig-etal-2023-identification
+
+
+ Enriching Wayúunaiki-Spanish Neural Machine Translation with Linguistic Information
+ NoraGraichenUdS
+ JosefVan GenabithDFKI
+ CristinaEspaña-bonetDFKI GmbH
+ 67-83
+ We present the first neural machine translation system for the low-resource language pair Wayúunaiki-Spanish and explore strategies to inject linguistic knowledge into the model to improve translation quality. We explore a wide range of methods and combine complementary approaches. Results indicate that incorporating linguistic information through linguistically motivated subword segmentation, factored models, and pretrained embeddings helps the system to generate improved translations, with the segmentation contributing the most. In order to evaluate translation quality in a general domain and go beyond the available religious domain data, we gather and make publicly available a new test set and supplementary material. Although translation quality as measured with automatic metrics is low, we hope these resources will facilitate and support further research on Wayúunaiki.
+ 2023.americasnlp-1.9
+ graichen-etal-2023-enriching
+
+
+ Towards the First Named Entity Recognition of Inuktitut for an Improved Machine Translation
+ Ngoc TanLeUniversité du Québec à Montréal
+ SoumiaKasdiUniversité du Québec à Montréal
+ FatihaSadatUQAM
+ 84-93
+ Named Entity Recognition is a crucial step to ensure good quality performance of several Natural Language Processing applications and tools, including machine translation and information retrieval. Moreover, it is considered a fundamental module of many Natural Language Understanding tasks such as question-answering systems. This paper presents a first study on NER for an under-represented Indigenous Inuit language of Canada, Inuktitut, which lacks linguistic resources and large labeled data. Our proposed NER model for Inuktitut is built by transferring linguistic characteristics from English to Inuktitut, based on either rules or bilingual word embeddings. We provide an empirical study based on a comparison with state-of-the-art models as well as intrinsic and extrinsic evaluations. In terms of Recall, Precision and F-score, the obtained results show the effectiveness of the proposed NER methods. Furthermore, they improved the performance of Inuktitut-English Neural Machine Translation.
+ 2023.americasnlp-1.10
+ le-etal-2023-towards
+
+
+ Parallel Corpus for Indigenous Language Translation: Spanish-Mazatec and Spanish-Mixtec
+ Atnafu LambeboTonjaInstituto Politécnico Nacional (IPN), Centro de Investigación en Computación (CIC)
+ ChristianMaldonado-sifuentesTRAI-L.com
+ David AlejandroMendoza CastilloTRAI-L.com
+ OlgaKolesnikovaInstituto Politecnico Nacional
+ NoéCastro-sánchezTecNM/Cenidet
+ GrigoriSidorovCIC-IPN
+ AlexanderGelbukhInstituto Politécnico Nacional
+ 94-102
+ In this paper, we present a parallel Spanish-Mazatec and Spanish-Mixtec corpus for machine translation (MT) tasks, where Mazatec and Mixtec are two indigenous Mexican languages. We evaluated the usability of the collected corpus using three different approaches: transformer, transfer learning, and fine-tuning pre-trained multilingual MT models. Fine-tuning the Facebook m2m100-48 model outperformed the other approaches, with BLEU scores of 12.09 and 22.25 for Mazatec-Spanish and Spanish-Mazatec translations, respectively, and 16.75 and 22.15 for Mixtec-Spanish and Spanish-Mixtec translations, respectively. The results indicate that translation performance is influenced by the dataset size (9,799 sentences in Mazatec and 13,235 sentences in Mixtec) and is more effective when indigenous languages are used as target languages. The findings emphasize the importance of creating parallel corpora for indigenous languages and fine-tuning models for low-resource translation tasks. Future research will investigate zero-shot and few-shot learning approaches to further improve translation performance in low-resource settings.
+ 2023.americasnlp-1.11
+ tonja-etal-2023-parallel
+
+
+ A finite-state morphological analyser for Highland Puebla Nahuatl
+ RobertPughIndiana University
+ FrancisTyersIndiana University
+ 103-108
+ This paper describes the development of a free/open-source finite-state morphological transducer for Highland Puebla Nahuatl, a Uto-Aztecan language spoken in and around the state of Puebla in Mexico. The finite-state toolkit used for the work is the Helsinki Finite-State Toolkit (HFST); we use the lexc formalism for modelling the morphotactics and twol formalism for modelling morphophonological alternations. An evaluation is presented which shows that the transducer has a reasonable coverage (around 90%) on freely-available corpora of the language, and high precision (over 95%) on a manually verified test set.
+ 2023.americasnlp-1.12
+ pugh-tyers-2023-finite
+
+
+ Neural Machine Translation for the Indigenous Languages of the Americas: An Introduction
+ ManuelMagerAmazon AWS
+ RajatBhatnagarUniversity of Colorado Boulder
+ GrahamNeubigCarnegie Mellon University
+ Ngoc ThangVuUniversity of Stuttgart
+ KatharinaKannUniversity of Colorado Boulder
+ 109-133
+ Neural models have drastically advanced the state of the art for machine translation (MT) between high-resource languages. These models traditionally rely on large amounts of training data, but a substantial share of the world’s languages lack such resources. Most languages of the Americas are among them, having only limited parallel and monolingual data, if any. Here, we offer the interested reader an introduction to the basic challenges, concepts, and techniques involved in creating MT systems for these languages. Finally, we discuss recent advances, findings, and open questions that are the product of the NLP community’s increased interest in these languages.
+ 2023.americasnlp-1.13
+ mager-etal-2023-neural
+
+
+ Community consultation and the development of an online Akuzipik-English dictionary
+ BenjaminHuntGeorge Mason University
+ LaneSchwartzUniversity of Alaska Fairbanks
+ SylviaSchreinerGeorge Mason University
+ EmilyChenUniversity of Illinois at Urbana-Champaign
+ 134-143
+ In this paper, we present a new online dictionary of Akuzipik, an Indigenous language of St. Lawrence Island (Alaska) and Chukotka (Russia). We discuss community desires for strengthening language use in the community and in educational settings, and present specific features of an online dictionary designed to serve these community goals.
+ 2023.americasnlp-1.14
+ hunt-etal-2023-community
+
+
+ Finding words that aren’t there: Using word embeddings to improve dictionary search for low-resource languages
+ AnttiArppeUniversity of Alberta
+ AndrewNeitschUniversity of Alberta
+ DanielDacanayUniversity of Alberta
+ JolenePoulinUniversity of Alberta
+ DanielHieberUniversity of Alberta
+ AtticusHarriganUniversity of Alberta
+ 144-155
+ Modern machine learning techniques have produced many impressive results in language technology, but these techniques generally require an amount of training data that is many orders of magnitude greater than what exists for low-resource languages in general, and endangered ones in particular. However, dictionary definitions in a comparatively much more well-resourced majority language can provide a link between low-resource languages and machine learning models trained on massive amounts of majority-language data. By leveraging a pre-trained English word embedding to compute sentence embeddings for definitions in bilingual dictionaries for four Indigenous languages spoken in North America, Plains Cree (nêhiyawêwin), Arapaho (Hinónoʼeitíít), Northern Haida (X̱aad Kíl), and Tsuut’ina (Tsúūtʼínà), we have obtained promising results for dictionary search. Not only are the search results in the majority language of the definitions more relevant, but they can be semantically relevant in ways not achievable with classic information retrieval techniques: users can perform successful searches for words that do not occur at all in the dictionary. These techniques are directly applicable to any bilingual dictionary providing translations between a high- and low-resource language.
+ 2023.americasnlp-1.15
+ arppe-etal-2023-finding
+
+
+ Enhancing Spanish-Quechua Machine Translation with Pre-Trained Models and Diverse Data Sources: LCT-EHU at AmericasNLP Shared Task
+ NoumanAhmedUniversity of the Basque Country
+ NataliaFlechas ManriqueUniversity of the Basque Country
+ AntonijePetrovićUniversity of the Basque Country
+ 156-162
+ We present the LCT-EHU submission to the AmericasNLP 2023 low-resource machine translation shared task. We focus on the Spanish-Quechua language pair and explore the usage of different approaches: (1) Obtain new parallel corpora from the literature and legal domains, (2) Compare a high-resource Spanish-English pre-trained MT model with a Spanish-Finnish pre-trained model (with Finnish being chosen as a target language due to its morphological similarity to Quechua), and (3) Explore additional techniques such as copied corpus and back-translation. Overall, we show that the Spanish-Finnish pre-trained model outperforms other setups, while low-quality synthetic data reduces the performance.
+ 2023.americasnlp-1.16
+ ahmed-etal-2023-enhancing
+
+
+ ChatGPT is not a good indigenous translator
+ DavidStapUniversity of Amsterdam
+ AliAraabiUniversity of Amsterdam
+ 163-167
+ This report investigates the continuous challenges of Machine Translation (MT) systems on indigenous and extremely low-resource language pairs. Despite the notable achievements of Large Language Models (LLMs) that excel in various tasks, their applicability to low-resource languages remains questionable. In this study, we leveraged the AmericasNLP competition to evaluate the translation performance of different systems for Spanish to 11 indigenous languages from South America. Our team, LTLAmsterdam, submitted a total of four systems including GPT-4, a bilingual model, fine-tuned M2M100, and a combination of fine-tuned M2M100 with kNN-MT. We found that even large language models like GPT-4 are not well-suited for extremely low-resource languages. Our results suggest that fine-tuning M2M100 models can offer significantly better performance for extremely low-resource translation.
+ 2023.americasnlp-1.17
+ stap-araabi-2023-chatgpt
+
+
+ Few-shot Spanish-Aymara Machine Translation Using English-Aymara Lexicon
+ LilingTanAmazon
+ 168-172
+ This paper presents the experiments to train a Spanish-Aymara machine translation model for the AmericasNLP 2023 Machine Translation shared task. We included the English-Aymara GlobalVoices corpus and an English-Aymara lexicon to train the model and limit our training resources to train the model in a few-shot manner.
+ 2023.americasnlp-1.18
+ tan-2023-shot
+
+
+ PlayGround Low Resource Machine Translation System for the 2023 AmericasNLP Shared Task
+ TianruiGuUniversity of California, Santa Barbara
+ KaieChenUniversity of California, Santa Barbara
+ SiqiOuyangUniversity of California, Santa Barbara
+ LeiLiUniversity of California Santa Barbara
+ 173-176
+ This paper presents PlayGround’s submission to the AmericasNLP 2023 shared task on machine translation (MT) into indigenous languages. We finetuned NLLB-600M, a multilingual MT model pre-trained on Flores-200, on 10 low-resource language directions and examined the effectiveness of weight averaging and back translation. Our experiments showed that weight averaging, on average, led to a 0.0169 improvement in the ChrF++ score. Additionally, we found that back translation resulted in a 0.008 improvement in the ChrF++ score.
+ 2023.americasnlp-1.19
+ gu-etal-2023-playground
+
+
+ Four Approaches to Low-Resource Multilingual NMT: The Helsinki Submission to the AmericasNLP 2023 Shared Task
+ OnaDe GibertUniversity of Helsinki
+ RaúlVázquezUniversity of Helsinki
+ MikkoAulamoUniversity of Helsinki
+ YvesScherrerUniversity of Helsinki
+ SamiVirpiojaUniversity of Helsinki
+ JörgTiedemannUniversity of Helsinki
+ 177-191
+ The Helsinki-NLP team participated in the AmericasNLP 2023 Shared Task with 6 submissions for all 11 language pairs arising from 4 different multilingual systems. We provide a detailed look at the work that went into collecting and preprocessing the data that led to our submissions. We explore various setups for multilingual Neural Machine Translation (NMT), namely knowledge distillation and transfer learning, multilingual NMT including a high-resource language (English), language-specific fine-tuning, and multilingual NMT exclusively using low-resource data. Our multilingual Model B ranks first in 4 out of the 11 language pairs.
+ 2023.americasnlp-1.20
+ de-gibert-etal-2023-four
+
+
+ Sheffield’s Submission to the AmericasNLP Shared Task on Machine Translation into Indigenous Languages
+ EdwardGow-smithUniversity of Sheffield
+ DanaeSánchez VillegasUniversity of Sheffield
+ 192-199
+ The University of Sheffield took part in the AmericasNLP 2023 shared task for all eleven language pairs. Our models consist of training different variations of the NLLB-200 model on data provided by the organizers and available data from various sources such as constitutions, handbooks and news articles. Our models outperform the baseline model on the development set on chrF, with substantial improvements particularly for Aymara, Guarani and Quechua. On the test set, our best submission achieves the highest average chrF of all the submissions; we rank first in four of the eleven languages, and at least one of our models ranks in the top 3 for all languages.
+ 2023.americasnlp-1.21
+ gow-smith-sanchez-villegas-2023-sheffields
+
+
+ Enhancing Translation for Indigenous Languages: Experiments with Multilingual Models
+ Atnafu LambeboTonjaInstituto Politécnico Nacional (IPN), Centro de Investigación en Computación (CIC)
+ Hellina HailuNigatuUC Berkeley
+ OlgaKolesnikovaInstituto Politecnico Nacional
+ GrigoriSidorovCIC-IPN
+ AlexanderGelbukhInstituto Politécnico Nacional
+ JugalKalitaUniversity of Colorado
+ 200-205
+ This paper describes CIC NLP’s submission to the AmericasNLP 2023 Shared Task on machine translation systems for indigenous languages of the Americas. We present the system descriptions for three methods. We used two multilingual models, namely M2M-100 and mBART50, and one bilingual (one-to-one) — Helsinki NLP Spanish-English translation model, and experimented with different transfer learning setups. We experimented with 11 languages from America and report the setups we used as well as the results we achieved. Overall, the mBART setup was able to improve upon the baseline for three out of the eleven languages.
+ 2023.americasnlp-1.22
+ tonja-etal-2023-enhancing
+
+
+ Findings of the AmericasNLP 2023 Shared Task on Machine Translation into Indigenous Languages
+ AbteenEbrahimiUniversity of Colorado, Boulder
+ ManuelMagerAmazon AWS
+ ShrutiRijhwaniGoogle
+ EnoraRiceUniversity of Colorado Boulder
+ ArturoOncevayThe University of Edinburgh
+ ClaudiaBaltazar
+ MaríaCortés
+ CynthiaMontañoUniversity of California, Berkeley
+ John E.OrtegaNortheastern University
+ RolandoCoto-solanoDartmouth College
+ HilariaCruzUniversity of Louisville
+ AlexisPalmerUniversity of Colorado Boulder
+ KatharinaKannUniversity of Colorado Boulder
+ 206-219
+ In this work, we present the results of the AmericasNLP 2023 Shared Task on Machine Translation into Indigenous Languages of the Americas. This edition of the shared task featured eleven language pairs, one of which – Chatino-Spanish – uses a newly collected evaluation dataset, consisting of professionally translated text from the legal domain. Seven teams participated in the shared task, with a total of 181 submissions. Additionally, we conduct a human evaluation of the best system outputs, and compare them to the best submissions from the prior shared task. We find that this analysis agrees with the quantitative measures used to rank submissions, which shows further improvements of 9.64 ChrF on average across all languages, when compared to the prior winning system.
+ 2023.americasnlp-1.23
+ ebrahimi-etal-2023-findings
+
+
+
diff --git a/data/xml/2023.cawl.xml b/data/xml/2023.cawl.xml
new file mode 100644
index 0000000000..f0c905213d
--- /dev/null
+++ b/data/xml/2023.cawl.xml
@@ -0,0 +1,136 @@
+
+
+
+
+ Proceedings of the Workshop on Computation and Written Language (CAWL 2023)
+ KyleGorman
+ RichardSproat
+ BrianRoark
+ Association for Computational Linguistics
+ Toronto, Canada
+ July
+ 2023
+ 2023.cawl-1
+ cawl
+
+
+ 2023.cawl-1.0
+ cawl-2023-computation
+
+
+ Myths about Writing Systems in Speech & Language Technology
+ KyleGormanThe Graduate Center, City University of New York
+ RichardSproatGoogle, Japan
+ 1-5
+ Natural language processing is largely focused on written text processing. However, many computational linguists tacitly endorse myths about the nature of writing. We highlight two of these myths—the conflation of language and writing, and the notion that Chinese, Japanese, and Korean writing is ideographic—and suggest how the community can dispel them.
+ 2023.cawl-1.1
+ gorman-sproat-2023-myths
+
+
+ The Hidden Folk: Linguistic Properties Encoded in Multilingual Contextual Character Representations
+ ManexAgirrezabalUniversity of Copenhagen
+ SidselBoldsenUniversity of Copenhagen
+ NoraHollensteinUniversity of Copenhagen
+ 6-13
+ To gain a better understanding of the linguistic information encoded in character-based language models, we probe the multilingual contextual CANINE model. We design a range of phonetic probing tasks in six Nordic languages, including Faroese as an additional zero-shot instance. We observe that some phonetic information is indeed encoded in the character representations, as consonants and vowels can be well distinguished using a linear classifier. Furthermore, results for the Danish and Norwegian languages seem to be worse for the consonant/vowel distinction in comparison to other languages. The information encoded in these representations can also be learned in a zero-shot scenario, as Faroese shows a reasonably good performance in the same vowel/consonant distinction task.
+ 2023.cawl-1.2
+ agirrezabal-etal-2023-hidden
+
+
+ Preserving the Authenticity of Handwritten Learner Language: Annotation Guidelines for Creating Transcripts Retaining Orthographic Features
+ ChristianGoldFernuniversitaet Hagen
+ RonjaLaarmann-quanteRuhr University Bochum
+ TorstenZeschComputational Linguistics, FernUniversität in Hagen
+ 14-21
+ Handwritten texts produced by young learners often contain orthographic features like spelling errors, capitalization errors, punctuation mistakes, and impurities such as strikethrough, inserts, and smudges that are typically normalized or ignored in existing transcriptions. For applications like handwriting recognition with the goal of automatically analyzing a learner’s language performance, however, retaining such features would be necessary. To address this, we present transcription guidelines that retain the features addressed above. Our guidelines were developed iteratively and include numerous example images to illustrate the various issues. On a subset of about 90 double-transcribed texts, we compute inter-annotator agreement and show that our guidelines can be applied with high levels of percentage agreement of about .98. Overall, we transcribed 1,350 learner texts, which is about the same size as the widely adopted handwriting recognition datasets IAM (1,500 pages) and CVL (1,600 pages). Our final corpus can be used to train a handwriting recognition system that transcribes closely to the real productions by young learners. Such a system is a prerequisite for applying automatic orthography feedback systems to handwritten texts in the future.
+ 2023.cawl-1.3
+ gold-etal-2023-preserving
+
+
+ Exploring the Impact of Transliteration on NLP Performance for Low-Resource Languages: The Case of Maltese and Arabic
+ KurtMicallefUniversity of Malta
+ FadhlEryaniUniversity of Tübingen
+ NizarHabashNew York University Abu Dhabi
+ HoudaBouamorCarnegie Mellon University in Qatar
+ ClaudiaBorgUniversity of Malta
+ 22-32
+ Maltese is a low-resource language of Arabic and Romance origins written in Latin script. We explore the impact of transliterating Maltese into Arabic script on a number of downstream tasks. We compare multiple transliteration pipelines ranging from simple one-to-one character maps to more sophisticated alternatives that explore multiple possibilities or make use of manual linguistic annotations. We show that the sophisticated systems are consistently better than simpler systems, quantitatively and qualitatively. We also show that transliterating Maltese can be considered an option to improve cross-lingual transfer capabilities.
+ 2023.cawl-1.4
+ micallef-etal-2023-exploring
+
+
+ Distinguishing Romanized Hindi from Romanized Urdu
+ ElizabethNielsenUniversity of Edinburgh
+ ChristoKirovGoogle
+ BrianRoarkGoogle Inc.
+ 33-42
+ We examine the task of distinguishing between Hindi and Urdu when those languages are romanized, i.e., written in the Latin script. Both languages are widely informally romanized, and to the extent that they are identified in the Latin script by language identification systems, they are typically conflated. In the absence of large labeled collections of such text, we consider methods for generating training data. Beginning with a small set of seed words, each of which are strongly indicative of one of the languages versus the other, we prompt a pretrained large language model (LLM) to generate romanized text. Treating text generated from an Urdu prompt as one class and text generated from a Hindi prompt as the other class, we build a binary language identification (LangID) classifier. We demonstrate that the resulting classifier distinguishes manually romanized Urdu Wikipedia text from manually romanized Hindi Wikipedia text far better than chance. We use this classifier to estimate the prevalence of Urdu in a large collection of text labeled as romanized Hindi that has been used to train large language models. These techniques can be applied to bootstrap classifiers in other cases where a dataset is known to contain multiple distinct but related classes, such as different dialects of the same language, but for which labels cannot easily be obtained.
+ 2023.cawl-1.5
+ nielsen-etal-2023-distinguishing
+
+
+ Back-Transliteration of English Loanwords in Japanese
+ YuyingRenGraduate Center, City University of New York
+ 43-49
+ We propose methods for transliterating English loanwords in Japanese from their Japanese written form (katakana/romaji) to their original English written form. Our data is a Japanese-English loanwords dictionary that we have created ourselves. We employ two approaches: the direct transliteration, which directly converts words from katakana to English, and the indirect transliteration, which utilizes the English pronunciation as an intermediate step. Additionally, we compare the effectiveness of using katakana versus romaji as input characters. We develop 6 models of 2 types for our experiments: one with an English lexicon-filter, and the other without. For each type, we built 3 models, including a pair n-gram based on WFSTs and two sequence-to-sequence models leveraging LSTM and transformer. Our best performing model was the pair n-gram model with a lexicon-filter, directly transliterating from katakana to English.
+ 2023.cawl-1.6
+ ren-2023-back
+
+
+ Pronunciation Ambiguities in Japanese Kanji
+ WenZhangThe Graduate Center, City University of New York
+ 50-60
+ Japanese writing is a complex system, and a large part of the complexity resides in the use of kanji. A single kanji character in modern Japanese may have multiple pronunciations, either as native vocabulary or as words borrowed from Chinese. This causes a problem for text-to-speech synthesis (TTS) because the system has to predict which pronunciation of each kanji character is appropriate in the context. The problem is called homograph disambiguation. To solve the problem, this research provides a new annotated Japanese single kanji character pronunciation data set and describes an experiment using the logistic regression (LR) classifier. A baseline is computed to compare with the LR classifier accuracy. This experiment provides the first experimental research in Japanese single kanji homograph disambiguation. The annotated Japanese data is freely released to the public to support further work.
+ 2023.cawl-1.7
+ zhang-2023-pronunciation
+
+
+ Lenient Evaluation of Japanese Speech Recognition: Modeling Naturally Occurring Spelling Inconsistency
+ ShigekiKaritaGoogle
+ RichardSproatGoogle, Japan
+ HarukoIshikawaGoogle, Japan
+ 61-70
+ Word error rate (WER) and character error rate (CER) are standard metrics in Speech Recognition (ASR), but one problem has always been alternative spellings: If one’s system transcribes adviser whereas the ground truth has advisor, this will count as an error even though the two spellings really represent the same word. Japanese is notorious for “lacking orthography”: most words can be spelled in multiple ways, presenting a problem for accurate ASR evaluation. In this paper we propose a new lenient evaluation metric as a more defensible CER measure for Japanese ASR. We create a lattice of plausible respellings of the reference transcription, using a combination of lexical resources, a Japanese text-processing system, and a neural machine translation model for reconstructing kanji from hiragana or katakana. In a manual evaluation, raters rated 95.4% of the proposed spelling variants as plausible. ASR results show that our method, which does not penalize the system for choosing a valid alternate spelling of a word, affords a 2.4%–3.1% absolute reduction in CER depending on the task.
+ 2023.cawl-1.8
+ karita-etal-2023-lenient
+
+
+ Disambiguating Numeral Sequences to Decipher Ancient Accounting Corpora
+ LoganBornSimon Fraser University
+ M. WillisMonroeUniversity of British Columbia
+ KathrynKelleyUniversità di Bologna
+ AnoopSarkarSimon Fraser University
+ 71-81
+ A numeration system encodes abstract numeric quantities as concrete strings of written characters. The numeration systems used by modern scripts tend to be precise and unambiguous, but this was not so for the ancient and partially-deciphered proto-Elamite (PE) script, where written numerals can have up to four distinct readings depending on the system that is used to read them. We consider the task of disambiguating between these readings in order to determine the values of the numeric quantities recorded in this corpus. We contribute an automated conversion from PE notation to modern Hindu-Arabic notation, as well as two disambiguation techniques based on structural properties of the original documents and classifiers learned with the bootstrapping algorithm. We also contribute a test set for evaluating disambiguation techniques, as well as a novel approach to cautious rule selection for bootstrapped classifiers. Our analysis confirms existing intuitions about this script and reveals previously-unknown correlations between tablet content and numeral magnitude. This work is crucial to understanding and deciphering PE, as the corpus is heavily accounting-focused and contains many more numeric tokens than tokens of text.
+ 2023.cawl-1.9
+ born-etal-2023-disambiguating
+
+
+ Decipherment of Lost Ancient Scripts as Combinatorial Optimisation Using Coupled Simulated Annealing
+ FabioTamburiniFICLIT - University of Bologna
+ 82-91
+ This paper presents a new approach to the ancient scripts decipherment problem based on combinatorial optimisation and coupled simulated annealing. The proposed system is able to produce enhanced results in cognate identification when compared to the state-of-the-art systems on standard evaluation benchmarks used in literature.
+ 2023.cawl-1.10
+ tamburini-2023-decipherment
+
+
+ Learning the Character Inventories of Undeciphered Scripts Using Unsupervised Deep Clustering
+ LoganBornSimon Fraser University
+ M. WillisMonroeUniversity of British Columbia
+ KathrynKelleyUniversità di Bologna
+ AnoopSarkarSimon Fraser University
+ 92-104
+ A crucial step in deciphering a text is to identify what set of characters were used to write it. This requires grouping character tokens according to visual and contextual features, which can be challenging for human analysts when the number of tokens or underlying types is large. Prior work has shown that this process can be automated by clustering dense representations of character images, in a task which we call “script clustering”. In this work, we present novel architectures which exploit varying degrees of contextual and visual information to learn representations for use in script clustering. We evaluate on a range of modern and ancient scripts, and find that our models produce representations which are more effective for script recovery than the current state-of-the-art, despite using just ~2% as many parameters. Our analysis fruitfully applies these models to assess hypotheses about the character inventory of the partially-deciphered proto-Elamite script.
+ 2023.cawl-1.11
+ born-etal-2023-learning
+
+
+ A Mutual Information-based Approach to Quantifying Logography in Japanese and Sumerian
+ NoahHermalinUniversity of California Berkeley
+ 105-110
+ Writing systems have traditionally been classified by whether they prioritize encoding phonological information (phonographic) versus morphological or semantic information (logographic). Recent work has broached the question of how membership in these categories can be quantified. Sproat and Gutkin (2021) proposed a range of metrics by which degree of logography can be quantified, including mutual information and a metric based on contextual attention required by a sequence-to-sequence RNN that maps pronunciations to spellings. We aim to build on this work by adopting a definition of logography which, in contrast to the definition used by Sproat and Gutkin, more directly incorporates morphological identity. We compare mutual information between graphic forms and phonological forms and between graphic forms and morphological identity for written Japanese and Sumerian. Our results suggest that our methods present a promising means of classifying the degree to which a writing system is logographic or phonographic.
+ 2023.cawl-1.12
+ hermalin-2023-mutual
+
+
+
diff --git a/data/xml/2023.clinicalnlp.xml b/data/xml/2023.clinicalnlp.xml
new file mode 100644
index 0000000000..397c1e5fd8
--- /dev/null
+++ b/data/xml/2023.clinicalnlp.xml
@@ -0,0 +1,684 @@
+
+
+
+
+ Proceedings of the 5th Clinical Natural Language Processing Workshop
+ TristanNaumann
+ AsmaBen Abacha
+ StevenBethard
+ KirkRoberts
+ AnnaRumshisky
+ Association for Computational Linguistics
+ Toronto, Canada
+ July
+ 2023
+ 2023.clinicalnlp-1
+ clinicalnlp
+
+
+ Clinical BERTScore: An Improved Measure of Automatic Speech Recognition Performance in Clinical Settings
+ JoelShorGoogle
+ Ruyue AgnesBi
+ SubhashiniVenugopalanGoogle
+ StevenIbaraCornell University
+ RomanGoldenberg
+ EhudRivlinTechnion
+ 1-7
+ Automatic Speech Recognition (ASR) in medical contexts has the potential to save time, cut costs, increase report accuracy, and reduce physician burnout. However, the healthcare industry has been slower to adopt this technology, in part due to the importance of avoiding medically-relevant transcription mistakes. In this work, we present the Clinical BERTScore (CBERTScore), an ASR metric that penalizes clinically-relevant mistakes more than others. We collect a benchmark of 18 clinician preferences on 149 realistic medical sentences called the Clinician Transcript Preference benchmark (CTP) and make it publicly available for the community to further develop clinically-aware ASR metrics. To our knowledge, this is the first public dataset of its kind. We demonstrate that our metric more closely aligns with clinician preferences on medical sentences as compared to other metrics (WER, BLEU, METEOR, etc.), sometimes by wide margins.
+ 2023.clinicalnlp-1.1
+ shor-etal-2023-clinical
+
+
+ Medical Visual Textual Entailment for Numerical Understanding of Vision-and-Language Models
+ HitomiYanakathe University of Tokyo and RIKEN
+ YutaNakamuraThe University of Tokyo
+ YukiChida
+ TomoyaKurosawa
+ 8-18
+ Assessing the capacity of numerical understanding of vision-and-language models over images and texts is crucial for real vision-and-language applications, such as systems for automated medical image analysis. We provide a visual reasoning dataset focusing on numerical understanding in the medical domain. The experiments using our dataset show that current vision-and-language models fail to perform numerical inference in the medical domain. However, data augmentation with only a small amount of our dataset improves model performance, while maintaining performance in the general domain.
+ 2023.clinicalnlp-1.2
+ yanaka-etal-2023-medical
+
+
+ Privacy-Preserving Knowledge Transfer through Partial Parameter Sharing
+ PaulYoussefPhillips-Universität Marburg
+ JörgSchlöttererUniversität Mannheim and Phillips-Universität Marburg
+ ChristinSeifertPhillips-Universität Marburg and University of Twente
+ 19-23
+ Valuable datasets that contain sensitive information are not shared due to privacy and copyright concerns. This hinders progress in many areas and prevents the use of machine learning solutions to solve relevant tasks. One possible solution is sharing models that are trained on such datasets. However, this is also associated with potential privacy risks due to data extraction attacks. In this work, we propose a solution based on sharing parts of the model’s parameters, and using a proxy dataset for complementary knowledge transfer. Our experiments show encouraging results, and reduced risk to potential training data identification attacks. We present a viable solution to sharing knowledge with data-disadvantaged parties that do not have the resources to produce high-quality data, with reduced privacy risks to the sharing parties. We make our code publicly available.
+ 2023.clinicalnlp-1.3
+ youssef-etal-2023-privacy
+
+
+ Breaking Barriers: Exploring the Diagnostic Potential of Speech Narratives in Hindi for Alzheimer’s Disease
+ KriteshRauniyar
+ ShuvamShiwakoti
+ SwetaPoudel
+ SurendrabikramThapa
+ UsmanNaseem
+ MehwishNasimUniversity of Western Australia and Flinders University of South Australia
+ 24-30
+ Alzheimer’s Disease (AD) is a neurodegenerative disorder that affects cognitive abilities and memory, especially in older adults. One of the challenges of AD is that it can be difficult to diagnose in its early stages. However, recent research has shown that changes in language, including speech decline and difficulty in processing information, can be important indicators of AD and may help with early detection. Hence, the speech narratives of the patients can be useful in diagnosing the early stages of Alzheimer’s disease. While previous works have presented the potential of using speech narratives to diagnose AD in high-resource languages, this work explores the possibility of using a low-resource language, i.e., Hindi, to diagnose AD. In this paper, we present a dataset specifically for analyzing AD in the Hindi language, along with experimental results using various state-of-the-art algorithms to assess the diagnostic potential of speech narratives in Hindi. Our analysis suggests that speech narratives in the Hindi language have the potential to aid in the diagnosis of AD. Our dataset and code are made publicly available at https://github.com/rkritesh210/DementiaBankHindi.
+ 2023.clinicalnlp-1.4
+ rauniyar-etal-2023-breaking
+
+
+ Investigating Massive Multilingual Pre-Trained Machine Translation Models for Clinical Domain via Transfer Learning
+ LifengHan
+ Gleb
+ IrinaSorokina
+ SergeGladkoffLogrus Global AI Lab
+ GoranNenadicUniversity of Manchester
+ 31-40
+ Massively multilingual pre-trained language models (MMPLMs) have been developed in recent years, demonstrating strong capabilities and the pre-knowledge they acquire for downstream tasks. This work investigates whether MMPLMs can be applied to clinical domain machine translation (MT) towards entirely unseen languages via transfer learning. We carry out an experimental investigation using Meta-AI’s MMPLMs “wmt21-dense-24-wide-en-X and X-en (WMT21fb)”, which were pre-trained on 7 language pairs and 14 translation directions including English to Czech, German, Hausa, Icelandic, Japanese, Russian, and Chinese, and the opposite direction. We fine-tune these MMPLMs towards the English-Spanish language pair, which did not exist at all in their original pre-trained corpora, either implicitly or explicitly. We prepare carefully aligned clinical domain data for this fine-tuning, which is different from their original mixed-domain knowledge. Our experimental results show that the fine-tuning is very successful using just 250k well-aligned in-domain EN-ES segments on three translation sub-tasks: clinical cases, clinical terms, and ontology concepts. It achieves evaluation scores very close to another Meta-AI MMPLM, NLLB, which included Spanish as a high-resource language in its pre-training. To the best of our knowledge, this is the first work on successfully using MMPLMs for clinical domain transfer-learning NMT for languages totally unseen during pre-training.
+ 2023.clinicalnlp-1.5
+ han-etal-2023-investigating
+
+
+ Tracking the Evolution of Covid-19 Symptoms through Clinical Conversations
+ TicianaCoelho Da SilvaUniversidade Federal do Ceará
+ JoséFernandes De Macêdo
+ RégisMagalhãesUniversidade Federal do Ceará
+ 41-47
+ The Coronavirus pandemic has heightened the demand for technological solutions capable of gathering and monitoring data automatically, quickly, and securely. To meet this need, the Plantão Coronavirus chatbot has been made available to the population of Ceará State in Brazil. This chatbot employs automated symptom detection through Natural Language Processing (NLP). This work proposes a symptom tracker: a neural network that processes texts and captures symptoms in messages exchanged between citizens of the state and the Plantão Coronavirus nurse/doctor, i.e., clinical conversations. The model has the ability to recognize new patterns and has identified a high incidence of altered psychological behaviors, including anguish, anxiety, and sadness, among users who tested positive or negative for Covid-19. As a result, the tool has emphasized the importance of expanding coverage through community mental health services in the state.
+ 2023.clinicalnlp-1.6
+ coelho-da-silva-etal-2023-tracking
+
+
+ Aligning Factual Consistency for Clinical Studies Summarization through Reinforcement Learning
+ XiangruTangYale University
+ ArmanCohanYale University and Allen Institute for Artificial Intelligence
+ MarkGersteinYale University
+ 48-58
+ In the rapidly evolving landscape of medical research, accurate and concise summarization of clinical studies is crucial to support evidence-based practice. This paper presents a novel approach to clinical studies summarization, leveraging reinforcement learning to enhance factual consistency and align with human annotator preferences. Our work focuses on two tasks: Conclusion Generation and Review Generation. We train a CONFIT summarization model that outperforms GPT-3 and previous state-of-the-art models on the same datasets, and collect expert and crowd-worker annotations to evaluate the quality and factual consistency of the generated summaries. These annotations enable us to measure the correlation of various automatic metrics, including modern factual evaluation metrics like QAFactEval, with human-assessed factual consistency. By employing top-correlated metrics as objectives for a reinforcement learning model, we demonstrate improved factuality in generated summaries that are preferred by human annotators.
+ 2023.clinicalnlp-1.7
+ tang-etal-2023-aligning
+
+
+ Navigating Data Scarcity: Pretraining for Medical Utterance Classification
+ Do JuneMin
+ VeronicaPerez-RosasUniversity of Michigan - Ann Arbor
+ RadaMihalceaUniversity of Michigan
+ 59-68
+ Pretrained language models leverage self-supervised learning to use large amounts of unlabeled text for learning contextual representations of sequences. However, in the domain of medical conversations, the availability of large, public datasets is limited due to issues of privacy and data management. In this paper, we study the effectiveness of dialog-aware pretraining objectives and multiphase training in using unlabeled data to improve LM training for medical utterance classification. The objectives of pretraining for dialog awareness involve tasks that take into account the structure of conversations, including features such as turn-taking and the roles of speakers. The multiphase training process uses unannotated data in a sequence that prioritizes similarities and connections between different domains. We empirically evaluate these methods on conversational dialog classification tasks in the medical and counseling domains, and find that multiphase training can help achieve higher performance than standard pretraining or finetuning.
+ 2023.clinicalnlp-1.8
+ min-etal-2023-navigating
+
+
+ Hindi Chatbot for Supporting Maternal and Child Health Related Queries in Rural India
+ RitwikMishraIndraprastha Institute of Information Technology, Delhi
+ SimranjeetSingh
+ JasmeetKaurIndraprastha Institute of Information Technology, Delhi
+ PushpendraSingh
+ RajivShah
+ 69-77
+ In developing countries like India, doctors and healthcare professionals working in public health spend significant time answering health queries that are fact-based and repetitive. Therefore, we propose an automated way to answer maternal and child health-related queries. A database of Frequently Asked Questions (FAQs) and their corresponding answers generated by experts is curated from rural health workers and young mothers. We develop a Hindi chatbot that identifies k relevant Question and Answer (QnA) pairs from the database in response to a healthcare query (q) written in Devnagri script or Hindi-English (Hinglish) code-mixed script. The curated database covers 80% of all the queries that a user of our study is likely to ask. We experimented with (i) rule-based methods, (ii) sentence embeddings, and (iii) a paraphrasing classifier, to calculate the q-Q similarity. We observed that the paraphrasing classifier gives the best result when trained first on an open-domain text and then on the healthcare domain. Our chatbot uses an ensemble of all three approaches. We observed that if a given q can be answered using the database, then our chatbot can provide at least one relevant QnA pair among its top three suggestions for up to 70% of the queries.
+ 2023.clinicalnlp-1.9
+ mishra-etal-2023-hindi
+
+
+ Multi-Task Training with In-Domain Language Models for Diagnostic Reasoning
+ BrihatSharmaUniversity of Wisconsin - Madison
+ YanjunGao
+ TimothyMillerHarvard University
+ MatthewChurpekUniversity of Wisconsin - Madison
+ MajidAfsharUniversity of Wisconsin - Madison
+ DmitriyDligachLoyola University Chicago
+ 78-85
+ Generative artificial intelligence (AI) is a promising direction for augmenting clinical diagnostic decision support and reducing diagnostic errors, a leading contributor to medical errors. To further the development of clinical AI systems, the Diagnostic Reasoning Benchmark (DR.BENCH) was introduced as a comprehensive generative AI framework, comprised of six tasks representing key components in clinical reasoning. We present a comparative analysis of in-domain versus out-of-domain language models as well as multi-task versus single task training with a focus on the problem summarization task in DR.BENCH. We demonstrate that a multi-task, clinically-trained language model outperforms its general domain counterpart by a large margin, establishing a new state-of-the-art performance, with a ROUGE-L score of 28.55. This research underscores the value of domain-specific training for optimizing clinical diagnostic reasoning tasks.
+ 2023.clinicalnlp-1.10
+ sharma-etal-2023-multi
+
+
+ Context-aware Medication Event Extraction from Unstructured Text
+ NoushinSalek FaramarziState University of New York at Stony Brook
+ MeetPatel
+ Sai HarikaBandarupally
+ RitwikBanerjeeState University of New York, Stony Brook
+ 86-95
+ Accurately capturing medication history is crucial in delivering high-quality medical care. The extraction of medication events from unstructured clinical notes, however, is challenging because the information is presented in complex narratives. We address this challenge by leveraging the newly released Contextualized Medication Event Dataset (CMED) as part of our participation in the 2022 National NLP Clinical Challenges (n2c2) shared task. Our study evaluates the performance of various pretrained language models in this task. Further, we find that data augmentation coupled with domain-specific training provides notable improvements. With experiments, we also underscore the importance of careful data preprocessing in medical event detection.
+ 2023.clinicalnlp-1.11
+ salek-faramarzi-etal-2023-context
+
+
+ Improving Automatic KCD Coding: Introducing the KoDAK and an Optimized Tokenization Method for Korean Clinical Documents
+ GeunyeongJeongKonkuk University
+ JuohSun
+ SeokwonJeongKangwon National University
+ HyunjinShin
+ HarksooKimKonkuk University
+ 96-101
+ International Classification of Diseases (ICD) coding is the task of assigning a patient’s electronic health records into standardized codes, which is crucial for enhancing medical services and reducing healthcare costs. In Korea, automatic Korean Standard Classification of Diseases (KCD) coding has been hindered by limited resources, differences in ICD systems, and language-specific characteristics. Therefore, we construct the Korean Dataset for Automatic KCD coding (KoDAK) by collecting and preprocessing Korean clinical documents. In addition, we propose a tokenization method optimized for Korean clinical documents. Our experiments show that our proposed method outperforms Korean Medical BERT (KM-BERT) in Macro-F1 performance by 0.14%p while using fewer model parameters, demonstrating its effectiveness in Korean clinical documents.
+ 2023.clinicalnlp-1.12
+ jeong-etal-2023-improving
+
+
+ Who needs context? Classical techniques for Alzheimer’s disease detection
+ BehradTaghibeyglou
+ FrankRudziczDalhousie University
+ 102-107
+ Natural language processing (NLP) has shown great potential for Alzheimer’s disease (AD) detection, particularly due to the adverse effect of AD on spontaneous speech. The current body of literature has directed attention toward context-based models, especially Bidirectional Encoder Representations from Transformers (BERTs), owing to their exceptional abilities to integrate contextual information in a wide range of NLP tasks. This comes at the cost of added model opacity and computational requirements. Taking this into consideration, we propose a Word2Vec-based model for AD detection in 108 age- and sex-matched participants who were asked to describe the Cookie Theft picture. We also investigate the effectiveness of our model by fine-tuning BERT-based sequence classification models, as well as incorporating linguistic features. Our results demonstrate that our lightweight and easy-to-implement model outperforms some of the state-of-the-art models available in the literature, as well as BERT models.
+ 2023.clinicalnlp-1.13
+ taghibeyglou-rudzicz-2023-needs
+
+
+ Knowledge Injection for Disease Names in Logical Inference between Japanese Clinical Texts
+ NatsukiMurakamiOchanomizu Women’s University
+ ManaIshida
+ YutaTakahashiOchanomizu Women’s University
+ HitomiYanakathe University of Tokyo and RIKEN
+ DaisukeBekkiOchanomizu University
+ 108-117
+ In the medical field, there are many clinical texts such as electronic medical records, and research on Japanese natural language processing using these texts has been conducted. One such line of research involves Recognizing Textual Entailment (RTE) in clinical texts using a semantic analysis and logical inference system, ccg2lambda. However, it is difficult for existing inference systems to correctly determine the entailment relations if the input sentence contains medical domain-specific paraphrases such as disease names. In this study, we propose a method to supplement the equivalence relations of disease names as axioms by identifying candidates for paraphrases that are missing in theorem proving. Candidates for paraphrases are identified by using a model for the NER task for disease names and a disease name dictionary. We also construct an inference test set that requires knowledge injection of disease names and evaluate our inference system. Experiments showed that our inference system was able to infer correctly for 106 out of 149 inference test sets.
+ 2023.clinicalnlp-1.14
+ murakami-etal-2023-knowledge
+
+
+ Training Models on Oversampled Data and a Novel Multi-class Annotation Scheme for Dementia Detection
+ NadineAbdelhalimUniversity of Manchester
+ IngyAbdelhalimUniversity of Manchester
+ RizaBatista-NavarroUniversity of Manchester
+ 118-124
+ This work introduces a novel three-class annotation scheme for text-based dementia classification in patients, based on their recorded visit interactions. Multiple models were developed utilising BERT, RoBERTa and DistilBERT. Two approaches were employed to improve the representation of dementia samples: oversampling the underrepresented data points in the original Pitt dataset and combining the Pitt with the Holland and Kempler datasets. The DistilBERT models trained on either an oversampled Pitt dataset or the combined dataset performed best in classifying the dementia class. Specifically, the model trained on the oversampled Pitt dataset and the one trained on the combined dataset obtained state-of-the-art performance with 98.8% overall accuracy and 98.6% macro-averaged F1-score, respectively. The models’ outputs were manually inspected through saliency highlighting, using Local Interpretable Model-agnostic Explanations (LIME), to provide a better understanding of their predictions.
+ 2023.clinicalnlp-1.15
+ abdelhalim-etal-2023-training
+
+
+ Improving the Transferability of Clinical Note Section Classification Models with BERT and Large Language Model Ensembles
+ WeipengZhou
+ MajidAfsharUniversity of Wisconsin - Madison
+ DmitriyDligachLoyola University Chicago
+ YanjunGao
+ TimothyMillerHarvard University
+ 125-130
+ Text in electronic health records is organized into sections, and classifying those sections into section categories is useful for downstream tasks. In this work, we attempt to improve the transferability of section classification models by combining the dataset-specific knowledge in supervised learning models with the world knowledge inside large language models (LLMs). Surprisingly, we find that zero-shot LLMs out-perform supervised BERT-based models applied to out-of-domain data. We also find that their strengths are synergistic, so that a simple ensemble technique leads to additional performance gains.
+ 2023.clinicalnlp-1.16
+ zhou-etal-2023-improving-transferability
+
+
+ Can Large Language Models Safely Address Patient Questions Following Cataract Surgery?
+ MohitaChowdhury
+ ErnestLim
+ AislingHigham
+ RoryMcKinnon
+ NikolettaVentoura
+ YajieHe
+ NickDe Pennington
+ 131-137
+ Recent advances in large language models (LLMs) have generated significant interest in their application across various domains including healthcare. However, there is limited data on their safety and performance in real-world scenarios. This study uses data collected using an autonomous telemedicine clinical assistant. The assistant asks symptom-based questions to elicit patient concerns and allows patients to ask questions about their post-operative recovery. We utilise real-world postoperative questions posed to the assistant by a cohort of 120 patients to examine the safety and appropriateness of responses generated by a recent popular LLM by OpenAI, ChatGPT. We demonstrate that LLMs have the potential to helpfully address routine patient queries following routine surgery. However, important limitations around the safety of today’s models exist which must be considered.
+ 2023.clinicalnlp-1.17
+ chowdhury-etal-2023-large
+
+
+ Large Scale Sequence-to-Sequence Models for Clinical Note Generation from Patient-Doctor Conversations
+ GagandeepSinghNuance Communications
+ YuePanNuance Communications
+ JesusAndres-Ferrer
+ MiguelDel-AguaNuance Communications
+ FrankDiehlNuance Communications
+ JoelPinto
+ PaulVozilaNuance Communications
+ 138-143
+ We present our work on building large scale sequence-to-sequence models for generating clinical notes from patient-doctor conversations. This is formulated as an abstractive summarization task, for which we use an encoder-decoder transformer model with a pointer-generator. We discuss various modeling enhancements to this baseline model, which include using a subword and multiword tokenization scheme, prefixing the targets with a chain-of-clinical-facts, and training with a contrastive loss that is defined over various candidate summaries. We also use flash attention during training and query chunked attention during inference to be able to process long input and output sequences and to improve computational efficiency. Experiments are conducted on a dataset containing about 900K encounters from around 1800 healthcare providers covering 27 specialties. The results are broken down into primary care and non-primary care specialties. Consistent accuracy improvements are observed across both of these categories.
+ 2023.clinicalnlp-1.18
+ singh-etal-2023-large
+
+
+ clulab at MEDIQA-Chat 2023: Summarization and classification of medical dialogues
+ Kadir BulutOzler
+ StevenBethardUniversity of Arizona
+ 144-149
+ Clinical Natural Language Processing has been an increasingly popular research area in the NLP community. With the rise of large language models (LLMs) and their impressive abilities in NLP tasks, it is crucial to pay attention to their clinical applications. Sequence to sequence generative approaches with LLMs have been widely used in recent years. To be a part of the research in clinical NLP with recent advances in the field, we participated in task A of MEDIQA-Chat at ACL-ClinicalNLP Workshop 2023. In this paper, we explain our methods and findings as well as our comments on our results and limitations.
+ 2023.clinicalnlp-1.19
+ ozler-bethard-2023-clulab
+
+
+ Leveraging Natural Language Processing and Clinical Notes for Dementia Detection
+ MingLiuDeakin University
+ RichardBeareNA
+ TayaCollyerNA
+ NadineAndrewNA
+ VelandaiSrikanthNA
+ 150-155
+ Early detection and automated classification of dementia have recently gained considerable attention using neuroimaging data and spontaneous speech. In this paper, we validate the possibility of dementia detection with in-hospital clinical notes. We collected 954 patients’ clinical notes from a local hospital and assigned dementia/non-dementia labels to those patients based on clinical assessment and telephone interview. Given the labeled dementia datasets, we fine-tuned a ClinicalBioBERT model on filtered clinical notes and conducted experiments on both binary and three-class dementia classification. Our experimental results show that the fine-tuned ClinicalBioBERT achieved satisfactory performance on binary classification but failed on three-class dementia classification. Further analysis suggests that more human prior knowledge should be considered.
+ 2023.clinicalnlp-1.20
+ liu-etal-2023-leveraging
+
+
+ Automated Orthodontic Diagnosis from a Summary of Medical Findings
+ TakumiOhtsukaEhime University
+ TomoyukiKajiwaraEhime University
+ ChihiroTanikawaNA
+ YuujinShimizuNA
+ HajimeNagaharaOsaka University
+ TakashiNinomiyaEhime University
+ 156-160
+ We propose a method to automate orthodontic diagnosis with natural language processing. It is worthwhile to assist dentists with such technology to prevent errors by inexperienced dentists and to reduce the workload of experienced ones. However, text length and style inconsistencies in medical findings make an automated orthodontic diagnosis with deep-learning models difficult. In this study, we improve the performance of automatic diagnosis utilizing short summaries of medical findings written in a consistent style by experienced dentists. Experimental results on 970 Japanese medical findings show that summarization consistently improves the performance of various machine learning models for automated orthodontic diagnosis. Although BERT gains the most from the proposed method, the convolutional neural network achieved the best overall performance.
+ 2023.clinicalnlp-1.21
+ ohtsuka-etal-2023-automated
+
+
+ Harnessing the Power of BERT in the Turkish Clinical Domain: Pretraining Approaches for Limited Data Scenarios
+ HazalTürkmen
+ OguzDikenelliEge University
+ CenkEraslan
+ MehmetCalliNA
+ SuhaOzbek
+ 161-170
+ Recent advancements in natural language processing (NLP) have been driven by large language models (LLMs), thereby revolutionizing the field. Our study investigates the impact of diverse pre-training strategies on the performance of Turkish clinical language models in a multi-label classification task involving radiology reports, with a focus on overcoming language resource limitations. Additionally, for the first time, we evaluated the simultaneous pre-training approach by utilizing limited clinical task data. We developed four models: TurkRadBERT-task v1, TurkRadBERT-task v2, TurkRadBERT-sim v1, and TurkRadBERT-sim v2. Our results revealed superior performance from BERTurk and TurkRadBERT-task v1, both of which leverage a broad general-domain corpus. Although task-adaptive pre-training is capable of identifying domain-specific patterns, it may be prone to overfitting because of the constraints of the task-specific corpus. Our findings highlight the importance of domain-specific vocabulary during pre-training to improve performance. They also affirmed that a combination of general domain knowledge and task-specific fine-tuning is crucial for optimal performance across various categories. This study offers key insights for future research on pre-training techniques in the clinical domain, particularly for low-resource languages.
+ 2023.clinicalnlp-1.22
+ turkmen-etal-2023-harnessing
+
+
+ A Meta-dataset of German Medical Corpora: Harmonization of Annotations and Cross-corpus NER Evaluation
+ IgnacioLlorca
+ FlorianBorchertHasso Plattner Institute
+ Matthieu-P.Schapranow
+ 171-181
+ Over the last years, an increasing number of publicly available, semantically annotated medical corpora have been released for the German language. While their annotations cover comparable semantic classes, the synergies of such efforts have not been explored, yet. This is due to substantial differences in the data schemas (syntax) and annotated entities (semantics), which hinder the creation of common meta-datasets. For instance, it is unclear whether named entity recognition (NER) taggers trained on one or more of such datasets are useful to detect entities in any of the other datasets. In this work, we create harmonized versions of German medical corpora using the BigBIO framework, and make them available to the community. Using these as a meta-dataset, we perform a series of cross-corpus evaluation experiments on two settings of aligned labels. These consist of fine-tuning various pre-trained Transformers on different combinations of training sets, and testing them against each dataset separately. We find that a) trained NER models generalize poorly, with F1 scores dropping approx. 20 pp. on unseen test data, and b) current pre-trained Transformer models for the German language do not systematically alleviate this issue. However, our results suggest that models benefit from additional training corpora in most cases, even if these belong to different medical fields or text genres.
+ 2023.clinicalnlp-1.23
+ llorca-etal-2023-meta
+
+
+ Uncovering the Potential for a Weakly Supervised End-to-End Model in Recognising Speech from Patient with Post-Stroke Aphasia
+ GiuliaSanguedolce
+ PatrickNaylorImperial College London
+ FatemehGeranmayehImperial College London
+ 182-190
+ Post-stroke speech and language deficits (aphasia) significantly impact patients’ quality of life. Many with mild symptoms remain undiagnosed, and the majority do not receive the intensive doses of therapy recommended, due to healthcare costs and/or inadequate services. Automatic Speech Recognition (ASR) may help overcome these difficulties by improving diagnostic rates and providing feedback during tailored therapy. However, its performance is often unsatisfactory due to the high variability in speech errors and scarcity of training datasets. This study assessed the performance of Whisper, a recently released end-to-end model, in patients with post-stroke aphasia (PWA). We tuned its hyperparameters to achieve the lowest word error rate (WER) on aphasic speech. WER was significantly higher in PWA compared to age-matched controls (10.3% vs 38.5%, p<0.001). We demonstrated that worse WER was related to more severe aphasia as measured by expressive (overt naming, and spontaneous speech production) and receptive (written and spoken comprehension) language assessments. Stroke lesion size did not affect the performance of Whisper. Linear mixed models accounting for demographic factors, therapy duration, and time since stroke, confirmed worse Whisper performance with left hemispheric frontal lesions. We discuss the implications of these findings for how future ASR can be improved in PWA.
+ 2023.clinicalnlp-1.24
+ sanguedolce-etal-2023-uncovering
+
+
+ Textual Entailment for Temporal Dependency Graph Parsing
+ JiaruiYao
+ StevenBethardUniversity of Arizona
+ KristinWright-BettnerUniversity of Colorado at Boulder
+ EliGoldner
+ DavidHarris
+ GuerganaSavovaHarvard University
+ 191-199
+ We explore temporal dependency graph (TDG) parsing in the clinical domain. We leverage existing annotations on the THYME dataset to semi-automatically construct a TDG corpus. Then we propose a new natural language inference (NLI) approach to TDG parsing, and evaluate it both on general domain TDGs from wikinews and the newly constructed clinical TDG corpus. We achieve competitive performance on general domain TDGs with a much simpler model than prior work. On the clinical TDGs, our method establishes the first result of TDG parsing on clinical data with 0.79/0.88 micro/macro F1.
+ 2023.clinicalnlp-1.25
+ yao-etal-2023-textual
+
+
+ Generating medically-accurate summaries of patient-provider dialogue: A multi-stage approach using large language models
+ VarunNairCurai Health
+ ElliotSchumacherCurai Health and Johns Hopkins University
+ AnithaKannanCurai Health
+ 200-217
+ A medical provider’s summary of a patient visit serves several critical purposes, including clinical decision-making, facilitating hand-offs between providers, and as a reference for the patient. An effective summary is required to be coherent and accurately capture all the medically relevant information in the dialogue, despite the complexity of patient-generated language. Even minor inaccuracies in visit summaries (for example, summarizing “patient does not have a fever” when a fever is present) can be detrimental to the outcome of care for the patient. This paper tackles the problem of medical conversation summarization by discretizing the task into several smaller dialogue-understanding tasks that are sequentially built upon. First, we identify medical entities and their affirmations within the conversation to serve as building blocks. We study dynamically constructing few-shot prompts for tasks by conditioning on relevant patient information and use GPT-3 as the backbone for our experiments. We also develop GPT-derived summarization metrics to measure performance against reference summaries quantitatively. Both our human evaluation study and metrics for medical correctness show that summaries generated using this approach are clinically accurate and outperform the baseline approach of summarizing the dialog in a zero-shot, single-prompt setting.
+ 2023.clinicalnlp-1.26
+ nair-etal-2023-generating
+
+
+ Factors Affecting the Performance of Automated Speaker Verification in Alzheimer’s Disease Clinical Trials
+ MalikehEhghaghi
+ MarijaStanojevicWinterLightLabs and Temple University
+ AliAkram
+ JekaterinaNovikovaWinterlight Labs
+ 218-227
+ Detecting duplicate patient participation in clinical trials is a major challenge because repeated patients can undermine the credibility and accuracy of the trial’s findings and result in significant health and financial risks. Developing accurate automated speaker verification (ASV) models is crucial to verify the identity of enrolled individuals and remove duplicates, but the size and quality of data influence ASV performance. However, there has been limited investigation into the factors that can affect ASV capabilities in clinical environments. In this paper, we bridge the gap by conducting analysis of how participant demographic characteristics, audio quality criteria, and severity level of Alzheimer’s disease (AD) impact the performance of ASV utilizing a dataset of speech recordings from 659 participants with varying levels of AD, obtained through multiple speech tasks. Our results indicate that ASV performance: 1) is slightly better on male speakers than on female speakers; 2) degrades for individuals who are above 70 years old; 3) is comparatively better for non-native English speakers than for native English speakers; 4) is negatively affected by clinician interference, noisy background, and unclear participant speech; 5) tends to decrease with an increase in the severity level of AD. Our study finds that voice biometrics raise fairness concerns as certain subgroups exhibit different ASV performances owing to their inherent voice characteristics. Moreover, the performance of ASV is influenced by the quality of speech recordings, which underscores the importance of improving the data collection settings in clinical trials.
+ 2023.clinicalnlp-1.27
+ ehghaghi-etal-2023-factors
+
+
+ Team Cadence at MEDIQA-Chat 2023: Generating, augmenting and summarizing clinical dialogue with large language models
+ AshwynSharmaCadence Solutions
+ DavidFeldman
+ AneeshJain
+ 228-235
+ This paper describes Team Cadence’s winning submission to Task C of the MEDIQA-Chat 2023 shared tasks. We also present the set of methods, including a novel N-pass strategy to summarize a mix of clinical dialogue and an incomplete summarized note, used to complete Task A and Task B, ranking highly on the leaderboard amongst stable and reproducible code submissions. The shared tasks invited participants to summarize, classify and generate patient-doctor conversations. Considering the small volume of training data available, we took a data-augmentation-first approach to the three tasks by focusing on the dialogue generation task, i.e., Task C. It proved effective in improving our models’ performance on Task A and Task B. We also found the BART architecture to be highly versatile, as it formed the base for all our submissions. Finally, based on the results shared by the organizers, we note that Team Cadence was the only team to submit stable and reproducible runs to all three tasks.
+ 2023.clinicalnlp-1.28
+ sharma-etal-2023-team
+
+
+ Method for Designing Semantic Annotation of Sepsis Signs in Clinical Text
+ MelissaYanNorwegian University of Science and Technology
+ LiseGustadNord University and Norwegian University of Science and Technology
+ LiseHøvik
+ ØysteinNytrøNorwegian University of Science and Technology
+ 236-246
+ Annotated clinical text corpora are essential for machine learning studies that model and predict care processes and disease progression. However, few studies describe the necessary experimental design of the annotation guideline and annotation phases. This makes replication, reuse, and adoption challenging. Using clinical questions about sepsis, we designed a semantic annotation guideline to capture sepsis signs from clinical text. The clinical questions aid guideline design, application, and evaluation. Our method incrementally evaluates each change in the guideline by testing the resulting annotated corpus using clinical questions. Additionally, our method uses inter-annotator agreement to judge the annotator compliance and quality of the guideline. We show that the method, combined with controlled design increments, is simple and allows the development and measurable improvement of a purpose-built semantic annotation guideline. We believe that our approach is useful for incremental design of semantic annotation guidelines in general.
+ 2023.clinicalnlp-1.29
+ yan-etal-2023-method
+
+
+ Prompt Discriminative Language Models for Domain Adaptation
+ KemingLu
+ PeterPotashMicrosoft
+ XihuiLinMicrosoft
+ YuwenSun
+ ZihanQian
+ ZhengYuanAlibaba Group
+ TristanNaumannMicrosoft Research
+ TianxiCaiHarvard T.H. Chan School of Public Health
+ JunweiLuHarvard University
+ 247-258
+ Prompt tuning offers an efficient approach to domain adaptation for pretrained language models, which predominantly focus on masked language modeling or generative objectives. However, the potential of discriminative language models in biomedical tasks remains underexplored. To bridge this gap, we develop BioDLM, a method tailored for biomedical domain adaptation of discriminative language models that incorporates prompt-based continual pretraining and prompt tuning for downstream tasks. BioDLM aims to maximize the potential of discriminative language models in low-resource scenarios by reformulating these tasks as span-level corruption detection, thereby enhancing performance on domain-specific tasks and improving the efficiency of continual pretraining. In this way, BioDLM provides a data-efficient domain adaptation method for discriminative language models, effectively enhancing performance on discriminative tasks within the biomedical domain.
+ 2023.clinicalnlp-1.30
+ lu-etal-2023-prompt
+
+
+ Cross-domain German Medical Named Entity Recognition using a Pre-Trained Language Model and Unified Medical Semantic Types
+ SitingLiangGerman Research Center for AI
+ MareikeHartmann
+ DanielSonntagGerman Research Center for AI and Carl von Ossietzky Universität Oldenburg
+ 259-271
+ Information extraction from clinical text has the potential to facilitate clinical research and personalized clinical care, but annotating large amounts of data for each set of target tasks is prohibitive. We present a German medical Named Entity Recognition (NER) system capable of cross-domain knowledge transferring. The system builds on a pre-trained German language model and a token-level binary classifier, employing semantic types sourced from the Unified Medical Language System (UMLS) as entity labels to identify corresponding entity spans within the input text. To enhance the system’s performance and robustness, we pre-train it using a medical literature corpus that incorporates UMLS semantic term annotations. We evaluate the system’s effectiveness on two German annotated datasets obtained from different clinics in zero- and few-shot settings. The results show that our approach outperforms task-specific Condition Random Fields (CRF) classifiers in terms of accuracy. Our work contributes to developing robust and transparent German medical NER models that can support the extraction of information from various clinical texts.
+ 2023.clinicalnlp-1.31
+ liang-etal-2023-cross
+
+
+ Reducing Knowledge Noise for Improved Semantic Analysis in Biomedical Natural Language Processing Applications
+ UsmanNaseem
+ SurendrabikramThapa
+ QiZhang
+ LiangHuTongji University
+ AnumMasood
+ MehwishNasimUniversity of Western Australia and Flinders University of South Australia
+ 272-277
+ Graph-based techniques have gained traction for representing and analyzing data in various natural language processing (NLP) tasks. Knowledge graph-based language representation models have shown promising results in leveraging domain-specific knowledge for NLP tasks, particularly in the biomedical NLP field. However, such models have limitations, including knowledge noise and neglect of contextual relationships, leading to potential semantic errors and reduced accuracy. To address these issues, this paper proposes two novel methods. The first method combines knowledge graph-based language model with nearest-neighbor models to incorporate semantic and category information from neighboring instances. The second method involves integrating knowledge graph-based language model with graph neural networks (GNNs) to leverage feature information from neighboring nodes in the graph. Experiments on relation extraction (RE) and classification tasks in English and Chinese language datasets demonstrate significant performance improvements with both methods, highlighting their potential for enhancing the performance of language models and improving NLP applications in the biomedical domain.
+ 2023.clinicalnlp-1.32
+ naseem-etal-2023-reducing
+
+
+ Medical knowledge-enhanced prompt learning for diagnosis classification from clinical text
+ YuxingLu
+ XukaiZhao
+ JinzhuoWangPeking University
+ 278-288
+ Artificial intelligence-based diagnosis systems have emerged as powerful tools to reform traditional medical care. Each clinician now wants to have his own intelligent diagnostic partner to expand the range of services he can provide. When reading a clinical note, experts make inferences with relevant knowledge. However, medical knowledge appears to be heterogeneous, including structured and unstructured knowledge. Existing approaches are incapable of unifying them well. Besides, the descriptions of clinical findings in clinical notes, from which a diagnosis is reasoned, vary a lot for different diseases or patients. To address these problems, we propose a Medical Knowledge-enhanced Prompt Learning (MedKPL) model for diagnosis classification. First, to overcome the heterogeneity of knowledge, given the knowledge relevant to diagnosis, MedKPL extracts and normalizes the relevant knowledge into a prompt sequence. Then, MedKPL integrates the knowledge prompt with the clinical note into a designed prompt for representation. Therefore, MedKPL can integrate medical knowledge into the models to enhance diagnosis and effectively transfer learned diagnosis capacity to unseen diseases using alternating relevant disease knowledge. The experimental results on two medical datasets show that our method can obtain better medical text classification results and can perform better in transfer and few-shot settings among datasets of different diseases.
+ 2023.clinicalnlp-1.33
+ lu-etal-2023-medical
+
+
+ Multilingual Clinical NER: Translation or Cross-lingual Transfer?
+ FélixGaschiUniversity of Lorraine
+ XavierFontaine
+ ParisaRastin
+ YannickToussaintUniversité de Lorraine
+ 289-311
+ Natural language tasks like Named Entity Recognition (NER) in the clinical domain on non-English texts can be very time-consuming and expensive due to the lack of annotated data. Cross-lingual transfer (CLT) is a way to circumvent this issue thanks to the ability of multilingual large language models to be fine-tuned on a specific task in one language and to provide high accuracy for the same task in another language. However, other methods leveraging translation models can be used to perform NER without annotated data in the target language, by either translating the training set or test set. This paper compares cross-lingual transfer with these two alternative methods, to perform clinical NER in French and in German without any training data in those languages. To this end, we release MedNERF, a medical NER test set extracted from French drug prescriptions and annotated with the same guidelines as an English dataset. Through extensive experiments on this dataset and on a German medical dataset (Frei and Kramer, 2021), we show that translation-based methods can achieve similar performance to CLT but require more care in their design. And while they can take advantage of monolingual clinical language models, those do not guarantee better results than large general-purpose multilingual models, whether with cross-lingual transfer or translation.
+ 2023.clinicalnlp-1.34
+ gaschi-etal-2023-multilingual
+
+
+ UMLS-KGI-BERT: Data-Centric Knowledge Integration in Transformers for Biomedical Entity Recognition
+ AidanMannion
+ DidierSchwabUniversité Grenoble Alpes
+ LorraineGoeuriotUniversité Grenoble Alpes
+ 312-322
+ Pre-trained transformer language models (LMs) have in recent years become the dominant paradigm in applied NLP. These models have achieved state-of-the-art performance on tasks such as information extraction, question answering, sentiment analysis, document classification and many others. In the biomedical domain, significant progress has been made in adapting this paradigm to NLP tasks that require the integration of domain-specific knowledge as well as statistical modelling of language. In particular, research in this area has focused on the question of how best to construct LMs that take into account not only the patterns of token distribution in medical text, but also the wealth of structured information contained in terminology resources such as the UMLS. This work contributes a data-centric paradigm for enriching the language representations of biomedical transformer-encoder LMs by extracting text sequences from the UMLS. This allows for graph-based learning objectives to be combined with masked-language pre-training. Preliminary results from experiments in the extension of pre-trained LMs as well as training from scratch show that this framework improves downstream performance on multiple biomedical and clinical Named Entity Recognition (NER) tasks. All pre-trained models, data processing pipelines and evaluation scripts will be made publicly available.
+ 2023.clinicalnlp-1.35
+ mannion-etal-2023-umls
+
+
+ WangLab at MEDIQA-Chat 2023: Clinical Note Generation from Doctor-Patient Conversations using Large Language Models
+ JohnGiorgi
+ AugustinTomaUniversity of Toronto
+ RonaldXie
+ SondraChen
+ KevinAn
+ GraceZhengUniversity of Toronto
+ BoWangVector Institute
+ 323-334
+ This paper describes our submission to the MEDIQA-Chat 2023 shared task for automatic clinical note generation from doctor-patient conversations. We report results for two approaches: the first fine-tunes a pre-trained language model (PLM) on the shared task data, and the second uses few-shot in-context learning (ICL) with a large language model (LLM). Both achieve high performance as measured by automatic metrics (e.g. ROUGE, BERTScore) and ranked second and first, respectively, of all submissions to the shared task. Expert human scrutiny indicates that notes generated via the ICL-based approach with GPT-4 are preferred about as often as human-written notes, making it a promising path toward automated note generation from doctor-patient conversations.
+ 2023.clinicalnlp-1.36
+ giorgi-etal-2023-wanglab
+
+
+ Automatic Coding at Scale: Design and Deployment of a Nationwide System for Normalizing Referrals in the Chilean Public Healthcare System
+ FabiánVillenaUniversidad de Chile
+ MatíasRojas
+ FelipeArias
+ JorgePacheco
+ PaulinaVera
+ JocelynDunstanUniversidad de Chile
+ 335-343
+ The disease coding task involves assigning a unique identifier from a controlled vocabulary to each disease mentioned in a clinical document. This task is relevant since it allows information extraction from unstructured data to perform, for example, epidemiological studies about the incidence and prevalence of diseases in a determined context. However, the manual coding process is subject to errors as it requires medical personnel to be competent in coding rules and terminology. In addition, this process consumes a lot of time and energy, which could be allocated to more clinically relevant tasks. These difficulties can be addressed by developing computational systems that automatically assign codes to diseases. In this way, we propose a two-step system for automatically coding diseases in referrals from the Chilean public healthcare system. Specifically, our model uses a state-of-the-art NER model for recognizing disease mentions and a search engine system based on Elasticsearch for assigning the most relevant codes associated with these disease mentions. The system’s performance was evaluated on referrals manually coded by clinical experts. Our system obtained a MAP score of 0.63 for the subcategory level and 0.83 for the category level, close to the best-performing models in the literature. This system could be a support tool for health professionals, optimizing the coding and management process. Finally, to guarantee reproducibility, we publicly release the code of our models and experiments.
+ 2023.clinicalnlp-1.37
+ villena-etal-2023-automatic
+
+
+ Building blocks for complex tasks: Robust generative event extraction for radiology reports under domain shifts
+ SitongZhou
+ MelihaYetisgenUniversity of Washington
+ MariOstendorfUniversity of Washington
+ 344-357
+ This paper explores methods for extracting information from radiology reports that generalize across exam modalities to reduce requirements for annotated data. We demonstrate that multi-pass T5-based text-to-text generative models exhibit better generalization across exam modalities compared to approaches that employ BERT-based task-specific classification layers. We then develop methods that reduce the inference cost of the model, making large-scale corpus processing more feasible for clinical applications. Specifically, we introduce a generative technique that decomposes complex tasks into smaller subtask blocks, which improves a single-pass model when combined with multitask training. In addition, we leverage target-domain contexts during inference to enhance domain adaptation, enabling use of smaller models. Analyses offer insights into the benefits of different cost reduction strategies.
+ 2023.clinicalnlp-1.38
+ zhou-etal-2023-building
+
+
+ Intersectionality and Testimonial Injustice in Medical Records
+ KenyaAndrewsUniversity of Illinois at Chicago
+ BhuvniShah
+ LuChengUniversity of Illinois at Chicago
+ 358-372
+ Detecting testimonial injustice is an essential element of addressing inequities and promoting inclusive healthcare practices, many of which are life-critical. However, using a single demographic factor to detect testimonial injustice does not fully encompass the nuanced identities that contribute to a patient’s experience. Further, some injustices may only be evident when examining the nuances that arise through the lens of intersectionality. Ignoring such injustices can result in poor quality of care or life-endangering events. Thus, considering intersectionality could result in more accurate classifications and just decisions. To illustrate this, we use real-world medical data to determine whether medical records exhibit words that could lead to testimonial injustice, employ fairness metrics (e.g. demographic parity, differential intersectional fairness, and subgroup fairness) to assess the severity to which subgroups are experiencing testimonial injustice, and analyze how the intersectionality of demographic features (e.g. gender and race) make a difference in uncovering testimonial injustice. From our analysis we found that with intersectionality we can better see disparities in how subgroups are treated, and that there are differences in how someone is treated based on the intersection of their demographic attributes. This has not been previously studied in clinical records, nor has it been proven through empirical study.
+ 2023.clinicalnlp-1.39
+ andrews-etal-2023-intersectionality
+
+
+ Interactive Span Recommendation for Biomedical Text
+ LouisBlankemeierStanford University
+ TheodoreZhaoMicrosoft
+ RobertTinn
+ SidKiblawiMicrosoft
+ YuGuMicrosoft
+ AkshayChaudhariStanford University and Subtle Medical
+ HoifungPoonMicrosoft
+ ShengZhangMicrosoft
+ MuWeiMicrosoft
+ J.Preston
+ 373-384
+ Motivated by the scarcity of high-quality labeled biomedical text, as well as the success of data programming, we introduce KRISS-Search. By leveraging the Unified Medical Language System (UMLS) ontology, KRISS-Search addresses an interactive few-shot span recommendation task that we propose. We first introduce unsupervised KRISS-Search and show that our method outperforms existing methods in identifying spans that are semantically similar to a given span of interest, with >50% AUPRC improvement relative to PubMedBERT. We then introduce supervised KRISS-Search, which leverages human interaction to improve the notion of similarity used by unsupervised KRISS-Search. Through simulated human feedback, we demonstrate an enhanced F1 score of 0.68 in classifying spans as semantically similar or different in the low-label setting, outperforming PubMedBERT by 2 F1 points. Finally, supervised KRISS-Search demonstrates competitive or superior performance compared to PubMedBERT in few-shot biomedical named entity recognition (NER) across five benchmark datasets, with an average improvement of 5.6 F1 points. We envision KRISS-Search increasing the efficiency of programmatic data labeling and also providing broader utility as an interactive biomedical search engine.
+ 2023.clinicalnlp-1.40
+ blankemeier-etal-2023-interactive
+
+
+ Prompt-based Extraction of Social Determinants of Health Using Few-shot Learning
+ Giridhar KaushikRamachandranGeorge Mason University
+ YujuanFuUniversity of Washington
+ BinHanUniversity of Washington
+ KevinLybargerGeorge Mason University
+ NicDobbins
+ OzlemUzunerGeorge Mason University
+ MelihaYetisgenUniversity of Washington
+ 385-393
+ Social determinants of health (SDOH) documented in the electronic health record through unstructured text are increasingly being studied to understand how SDOH impacts patient health outcomes. In this work, we utilize the Social History Annotation Corpus (SHAC), a multi-institutional corpus of de-identified social history sections annotated for SDOH, including substance use, employment, and living status information. We explore the automatic extraction of SDOH information with SHAC in both standoff and inline annotation formats using GPT-4 in a one-shot prompting setting. We compare GPT-4 extraction performance with a high-performing supervised approach and perform thorough error analyses. Our prompt-based GPT-4 method achieved an overall 0.652 F1 on the SHAC test set, similar to the 7th best-performing system among all teams in the n2c2 challenge with SHAC.
+ 2023.clinicalnlp-1.41
+ ramachandran-etal-2023-prompt
+
+
+ Teddysum at MEDIQA-Chat 2023: an analysis of fine-tuning strategy for long dialog summarization
+ YongbinJeong
+ Ju-HyuckHan
+ Kyung MinChaeKonyang University
+ YousangCho
+ HyunbinSeoteddysum
+ KyungTaeLimSeoul National University of Science and Technology
+ Key-SunChoiKorea Advanced Institute of Science & Technology and Konyang University
+ YounggyunHahm
+ 394-402
+ In this paper, we introduce the design and various attempts for TaskB of MEDIQA-Chat 2023. The goal of TaskB in MEDIQA-Chat 2023 is to generate a full clinical note from doctor-patient consultation dialogues. This task has several challenging issues, such as lack of training data, handling long dialogue inputs, and generating semi-structured clinical notes which have section heads. To address these issues, we conducted various experiments and analyzed their results. We utilized the DialogLED model pre-trained on long dialogue data to handle long inputs, and we pre-trained on other dialogue datasets to address the lack of training data. We also attempted methods such as using prompts and contrastive learning for handling sections. This paper provides insights into clinical note generation through analyzing experimental methods and results, and it suggests future research directions.
+ 2023.clinicalnlp-1.42
+ jeong-etal-2023-teddysum
+
+
+ Rare Codes Count: Mining Inter-code Relations for Long-tail Clinical Text Classification
+ JiaminChen
+ XuhongLiBaidu
+ JuntingXi
+ LeiYuBeihang University
+ HaoyiXiongBaidu
+ 403-413
+ Multi-label clinical text classification, such as automatic ICD coding, has always been a challenging subject in Natural Language Processing, due to its long, domain-specific documents and long-tail distribution over a large label set. Existing methods adopt different model architectures to encode the clinical notes. However, without mining the useful connections between labels, these models show a huge gap in predictive performance between rare and frequent codes. In this work, we propose a novel method for further mining the helpful relations between different codes via a relation-enhanced code encoder to improve the rare code performance. Starting from the simple code descriptions, the model reaches comparable or even better performance than models with heavy external knowledge. Our proposed method is evaluated on MIMIC-III, a common dataset in the medical domain. It outperforms the previous state-of-the-art models on both overall metrics and rare code performances. Moreover, the interpretation results further prove the effectiveness of our methods. Our code is publicly available at https://github.com/jiaminchen-1031/Rare-ICD.
+ 2023.clinicalnlp-1.43
+ chen-etal-2023-rare
+
+
+ NewAgeHealthWarriors at MEDIQA-Chat 2023 Task A: Summarizing Short Medical Conversation with Transformers
+ PrakharMishra
+ Ravi ThejaDesettyGlance
+ 414-421
+ This paper presents the MEDIQA-Chat 2023 shared task organized at the ACL-Clinical NLP workshop. The shared task is motivated by the need to develop methods to automatically generate clinical notes from doctor-patient conversations. In this paper, we present our submission for MEDIQA-Chat 2023 Task A: Short Dialogue2Note Summarization. Manual creation of these clinical notes requires extensive human efforts, thus making it a time-consuming and expensive process. To address this, we propose an ensemble-based method over GPT-3, BART, BERT variants, and Rule-based systems to automatically generate clinical notes from these conversations. The proposed system achieves a score of 0.730 and 0.544 for both the sub-tasks on the test set (ranking 8th on the leaderboard for both tasks) and shows better performance compared to a baseline system using BART variants.
+ 2023.clinicalnlp-1.44
+ mishra-desetty-2023-newagehealthwarriors
+
+
+ Storyline-Centric Detection of Aphasia and Dysarthria in Stroke Patient Transcripts
+ PeiqiSui
+ KelvinWongWeill Cornell Medicine, Cornell University and Houston Methodist Research Institute
+ XiaohuiYuNA
+ JohnVolpiHouston Methodist Neurological Institute
+ StephenWongHouston Methodist Hospital and Weill Cornell Medicine
+ 422-432
+ Aphasia and dysarthria are both common symptoms of stroke, affecting around 30% and 50% of acute ischemic stroke patients. In this paper, we propose a storyline-centric approach to detect aphasia and dysarthria in acute stroke patients using transcribed picture descriptions alone. Our pipeline enriches the training set with healthy data to address the lack of acute stroke patient data and utilizes knowledge distillation to significantly improve upon a document classification baseline, achieving an AUC of 0.814 (aphasia) and 0.764 (dysarthria) on a patient-only validation set.
+ 2023.clinicalnlp-1.45
+ sui-etal-2023-storyline
+
+
+ Pre-trained language models in Spanish for health insurance coverage
+ ClaudioAracena
+ NicolásRodríguez
+ VictorRocco
+ JocelynDunstanUniversidad de Chile
+ 433-438
+ The field of clinical natural language processing (NLP) can extract useful information from clinical text. Since 2017, the NLP field has shifted towards using pre-trained language models (PLMs), improving performance in several tasks. Most of the research in this field has focused on English text, but there are some available PLMs in Spanish. In this work, we use clinical PLMs to analyze text from admission and medical reports in Spanish for an insurance and health provider to give a probability of no coverage in a labor insurance process. Our results show that fine-tuning a PLM pre-trained with the provider’s data leads to better results, but this process is time-consuming and computationally expensive. At least for this task, fine-tuning a publicly available clinical PLM leads to comparable results to a custom PLM, but in less time and with fewer resources. Analyzing large volumes of insurance requests is burdensome for employers, and models can ease this task by pre-classifying reports that are likely not to have coverage. Our approach of entirely using clinical-related text improves the current models while reinforcing the idea of clinical support systems that simplify human labor but do not replace it. To our knowledge, the clinical corpus collected for this study is the largest one reported for the Spanish language.
+ 2023.clinicalnlp-1.46
+ aracena-etal-2023-pre
+
+
+ Utterance Classification with Logical Neural Network: Explainable AI for Mental Disorder Diagnosis
+ YeldarToleubayUniversity of Tsukuba, Tsukuba University
+ Don JovenAgravanteInternational Business Machines
+ DaikiKimuraIBM Research
+ BaihanLinColumbia University and IBM, International Business Machines
+ DjallelBouneffouf
+ MichiakiTatsuboriIBM Research
+ 439-446
+ In response to the global challenge of mental health problems, we propose a Logical Neural Network (LNN) based Neuro-Symbolic AI method for the diagnosis of mental disorders. Due to the lack of effective therapy coverage for mental disorders, there is a need for an AI solution that can assist therapists with the diagnosis. However, current Neural Network models lack explainability and may not be trusted by therapists. The LNN is a Recurrent Neural Network architecture that combines the learning capabilities of neural networks with the reasoning capabilities of classical logic-based AI. The proposed system uses input predicates from clinical interviews to output a mental disorder class, and different predicate pruning techniques are used to achieve scalability and higher scores. In addition, we provide an insight extraction method to aid therapists with their diagnosis. The proposed system addresses the lack of explainability of current Neural Network models and provides a more trustworthy solution for mental disorder diagnosis.
+ 2023.clinicalnlp-1.47
+ toleubay-etal-2023-utterance
+
+
+ A Survey of Evaluation Methods of Generated Medical Textual Reports
+ YongxinZhouLaboratoire d’Informatique de Grenoble
+ FabienRingevalUniversity of Grenoble-Alpes
+ FrançoisPortetUniversité Grenoble Alpes
+ 447-459
+ Medical Report Generation (MRG) is a sub-task of Natural Language Generation (NLG) and aims to present information from various sources in textual form and synthesize salient information, with the goal of reducing the time spent by domain experts in writing medical reports and providing support information for decision-making. Given the specificity of the medical domain, the evaluation of automatically generated medical reports is of paramount importance to the validity of these systems. Therefore, in this paper, we focus on the evaluation of automatically generated medical reports from the perspective of automatic and human evaluation. We present evaluation methods for general NLG evaluation and how they have been applied to domain-specific medical tasks. The study shows that MRG evaluation methods are very diverse, and that further work is needed to build shared evaluation methods. The state of the art also emphasizes that such an evaluation must be task specific and include human assessments, requesting the participation of experts in the field.
+ 2023.clinicalnlp-1.48
+ zhou-etal-2023-survey
+
+
+ UMASS_BioNLP at MEDIQA-Chat 2023: Can LLMs generate high-quality synthetic note-oriented doctor-patient conversations?
+ JundaWang
+ ZonghaiYaoUniversity of Massachusetts at Amherst
+ AvijitMitra
+ SamuelOsebe
+ ZhichaoYangUniversity of Massachusetts, Amherst
+ HongYuColumbia University
+ 460-471
+ This paper presents the UMASS_BioNLP team’s participation in the MEDIQA-Chat 2023 shared task for Task-A and Task-C. We focus especially on Task-C and propose a novel LLM cooperation system, named the doctor-patient loop, to generate high-quality conversation datasets. The experiment results demonstrate that our approaches yield reasonable performance as evaluated by automatic metrics such as ROUGE, medical concept recall, BLEU, and Self-BLEU. Furthermore, we conducted a comparative analysis between our proposed method and ChatGPT and GPT-4. This analysis also investigates the potential of utilizing cooperating LLMs to generate high-quality datasets.
+ 2023.clinicalnlp-1.49
+ wang-etal-2023-umass
+
+
+ HealthMavericks@MEDIQA-Chat 2023: Benchmarking different Transformer based models for Clinical Dialogue Summarization
+ KunalSuriOptum, India
+ SaumajitSaha
+ AtulSingh
+ 472-489
+ In recent years, we have seen many Transformer based models being created to address the Dialog Summarization problem. While there has been a lot of work on understanding how these models stack up against each other in summarizing regular conversations such as the ones found in the DialogSum dataset, there have not been many analyses of these models on Clinical Dialog Summarization. In this article, we describe our solution to the MEDIQA-Chat 2023 Shared Tasks as part of the ACL-ClinicalNLP 2023 workshop, which benchmarks some of the popular Transformer Architectures such as BioBart, Flan-T5, DialogLED, and OpenAI GPT3 on the problem of Clinical Dialog Summarization. We analyse their performance on two tasks - summarizing short conversations and long conversations. In addition to this, we also benchmark two popular summarization ensemble methods and report their performance.
+ 2023.clinicalnlp-1.50
+ suri-etal-2023-healthmavericks
+
+
+ SummQA at MEDIQA-Chat 2023: In-Context Learning with GPT-4 for Medical Summarization
+ YashMathur
+ SankethRangreji
+ RaghavKapoor
+ MedhaPalavalli
+ AmandaBertschCarnegie Mellon University
+ MatthewGormleySchool of Computer Science, Carnegie Mellon University and 3M
+ 490-502
+ Medical dialogue summarization is challenging due to the unstructured nature of medical conversations, the use of medical terminology in gold summaries, and the need to identify key information across multiple symptom sets. We present a novel system for the Dialogue2Note Medical Summarization tasks in the MEDIQA 2023 Shared Task. Our approach for sectionwise summarization (Task A) is a two-stage process of selecting semantically similar dialogues and using the top-k similar dialogues as in-context examples for GPT-4. For full-note summarization (Task B), we use a similar solution with k=1. We achieved 3rd place in Task A (2nd among all teams), 4th place in Task B Division Wise Summarization (2nd among all teams), 15th place in Task A Section Header Classification (9th among all teams), and 8th place among all teams in Task B. Our results highlight the effectiveness of few-shot prompting for this task, though we also identify several weaknesses of prompting-based approaches. We compare GPT-4 performance with several finetuned baselines. We find that GPT-4 summaries are more abstractive and shorter. We make our code publicly available.
+ 2023.clinicalnlp-1.51
+ mathur-etal-2023-summqa
+
+
+ Overview of the MEDIQA-Chat 2023 Shared Tasks on the Summarization & Generation of Doctor-Patient Conversations
+ AsmaBen AbachaMicrosoft, USA
+ Wen-waiYim
+ GriffinAdams
+ NealSnider
+ MelihaYetisgenUniversity of Washington
+ 503-513
+ Automatic generation of clinical notes from doctor-patient conversations can play a key role in reducing daily doctors’ workload and improving their interactions with the patients. MEDIQA-Chat 2023 aims to advance and promote research on effective solutions through shared tasks on the automatic summarization of doctor-patient conversations and on the generation of synthetic dialogues from clinical notes for data augmentation. Seventeen teams participated in the challenge and experimented with a broad range of approaches and models. In this paper, we describe the three MEDIQA-Chat 2023 tasks, the datasets, and the participants’ results and methods. We hope that these shared tasks will lead to additional research efforts and insights on the automatic generation and evaluation of clinical notes.
+ 2023.clinicalnlp-1.52
+ ben-abacha-etal-2023-overview
+
+
+ Transfer Learning for Low-Resource Clinical Named Entity Recognition
+ NevasiniSasikumar
+ Krishna Sri IpsitMantri
+ 514-518
+ We propose a transfer learning method that adapts a high-resource English clinical NER model to low-resource languages and domains using only small amounts of in-domain annotated data. Our approach involves translating in-domain datasets to English, fine-tuning the English model on the translated data, and then transferring it to the target language/domain. Experiments on Spanish, French, and conversational clinical text datasets show accuracy gains over models trained on target data alone. Our method achieves state-of-the-art performance and can enable clinical NLP in more languages and modalities with limited resources.
+ 2023.clinicalnlp-1.53
+ sasikumar-mantri-2023-transfer
+
+
+ IUTEAM1 at MEDIQA-Chat 2023: Is simple fine tuning effective for multi layer summarization of clinical conversations?
+ DhananjaySrivastava
+ 519-523
+ Clinical conversation summarization has become an important application of Natural Language Processing. In this work, we intend to analyze summarization model ensembling approaches that can be utilized to improve the overall accuracy of the generated medical report, called a chart note. The work starts with a single summarization model creating the baseline, then moves to an ensemble of summarization models, each trained on a separate section of the chart note, and finally passes the generated results to another summarization model in a multi-layer/stage fashion for better coherency of the generated text. Our results indicate that although an ensemble of models specialized in each section produces better results, the multi-layer/stage approach does not improve accuracy. The code for the above paper is available at https://github.com/dhananjay-srivastava/MEDIQA-Chat-2023-iuteam1.git
+ 2023.clinicalnlp-1.54
+ srivastava-2023-iuteam1
+
+
+ Care4Lang at MEDIQA-Chat 2023: Fine-tuning Language Models for Classifying and Summarizing Clinical Dialogues
+ AmalAlqahtaniGeorge Washington University
+ RanaSalamaGeorge Washington University
+ MonaDiabGeorge Washington University
+ AbdouYoussefGeorge Washington University
+ 524-528
+ Summarizing medical conversations is one of the tasks proposed by MEDIQA-Chat to promote research on automatic clinical note generation from doctor-patient conversations. In this paper, we present our submission to this task using fine-tuned language models, including T5, BART and BioGPT models. The fine-tuned models are evaluated using ensemble metrics including ROUGE, BERTScore and BLEURT. Among the fine-tuned models, Flan-T5 achieved the highest aggregated score for dialogue summarization.
+ 2023.clinicalnlp-1.55
+ alqahtani-etal-2023-care4lang
+
+
+ Calvados at MEDIQA-Chat 2023: Improving Clinical Note Generation with Multi-Task Instruction Finetuning
+ KirillMilintsevichUniversité de Caen Basse Normandie and University of Tartu
+ NavneetAgarwal
+ 529-535
+ This paper presents our system for the MEDIQA-Chat 2023 shared task on medical conversation summarization. Our approach involves finetuning a LongT5 model on multiple tasks simultaneously, which we demonstrate improves the model’s overall performance while reducing the number of factual errors and hallucinations in the generated summary. Furthermore, we investigated the effect of augmenting the data with in-text annotations from a clinical named entity recognition model, finding that this approach decreased summarization quality. Lastly, we explore using different text generation strategies for medical note generation based on the length of the note. Our findings suggest that the application of our proposed approach can be beneficial for improving the accuracy and effectiveness of medical conversation summarization.
+ 2023.clinicalnlp-1.56
+ milintsevich-agarwal-2023-calvados
+
+
+ DS4DH at MEDIQA-Chat 2023: Leveraging SVM and GPT-3 Prompt Engineering for Medical Dialogue Classification and Summarization
+ BoyaZhang
+ RahulMishra
+ DouglasTeodoroUniversity of Geneva
+ 536-545
+ This paper presents the results of the Data Science for Digital Health (DS4DH) group in the MEDIQA-Chat Tasks at ACL-ClinicalNLP 2023. Our study combines a classical machine learning method, Support Vector Machine, for classifying medical dialogues with one-shot prompting using GPT-3.5. We employ dialogues and summaries from the same category as prompts to generate summaries for novel dialogues. Our findings exceed the average benchmark score, offering a robust reference for assessing performance in this field.
+ 2023.clinicalnlp-1.57
+ zhang-etal-2023-ds4dh
+
+
+ GersteinLab at MEDIQA-Chat 2023: Clinical Note Summarization from Doctor-Patient Conversations through Fine-tuning and In-context Learning
+ XiangruTangYale University
+ AndrewTran
+ JeffreyTan
+ MarkGersteinYale University
+ 546-554
+ This paper presents our contribution to the MEDIQA-2023 Dialogue2Note shared task, encompassing both subtask A and subtask B. We approach the task as a dialogue summarization problem and implement two distinct pipelines: (a) a fine-tuning of a pre-trained dialogue summarization model and GPT-3, and (b) few-shot in-context learning (ICL) using a large language model, GPT-4. Both methods achieve excellent results in terms of ROUGE-1 F1, BERTScore F1 (deberta-xlarge-mnli), and BLEURT, with scores of 0.4011, 0.7058, and 0.5421, respectively. Additionally, we predict the associated section headers using RoBERTa and SciBERT based classification models. Our team ranked fourth among all teams, while each team is allowed to submit three runs as part of their submission. We also utilize expert annotations to demonstrate that the notes generated through the ICL GPT-4 are better than all other baselines. The code for our submission is available.
+ 2023.clinicalnlp-1.58
+ tang-etal-2023-gersteinlab
+
+
+
diff --git a/data/xml/2023.dialdoc.xml b/data/xml/2023.dialdoc.xml
new file mode 100644
index 0000000000..c811d5bcf8
--- /dev/null
+++ b/data/xml/2023.dialdoc.xml
@@ -0,0 +1,170 @@
+
+
+
+
+ Proceedings of the Third DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering
+ SmarandaMuresan
+ VivianChen
+ CaseyKennington
+ DavidVandyke
+ NinaDethlefs
+ KojiInoue
+ ErikEkstedt
+ StefanUltes
+ Association for Computational Linguistics
+ Toronto, Canada
+ July
+ 2023
+ 2023.dialdoc-1
+ dialdoc
+
+
+ 2023.dialdoc-1.0
+ dialdoc-2023-dialdoc
+
+
+ Cross-lingual Data Augmentation for Document-grounded Dialog Systems in Low Resource Languages
+ QiGou
+ ZehuaXia
+ WenzheDu
+ 1-7
+ This paper proposes a framework to address the issue of data scarcity in Document-Grounded Dialogue Systems (DGDS). Our model leverages high-resource languages to enhance the capability of dialogue generation in low-resource languages. Specifically, we present a novel pipeline CLEM (Cross-Lingual Enhanced Model) including adversarial training retrieval (Retriever and Re-ranker) and a FiD (fusion-in-decoder) generator. To further leverage high-resource languages, we also propose an innovative architecture to conduct alignment across different languages with translated training. Extensive experiment results demonstrate the effectiveness of our model and we achieved 4th place in the DialDoc 2023 Competition. Therefore, CLEM can serve as a solution to resource scarcity in DGDS and provide useful guidance for multi-lingual alignment tasks.
+ 2023.dialdoc-1.1
+ gou-etal-2023-cross
+
+
+ MoQA: Benchmarking Multi-Type Open-Domain Question Answering
+ HowardYenPrinceton University
+ TianyuGao
+ JinhyukLeeGoogle
+ DanqiChenDepartment of Computer Science, Princeton University
+ 8-29
+ Previous research on open-domain question answering (QA) mainly focuses on questions with short answers. However, information-seeking QA often requires various formats of answers depending on the nature of the questions, e.g., why/how questions typically require a long answer. In this paper, we present MoQA, a benchmark for open-domain QA that requires building one system that can provide short, medium, long, and yes/no answers to different questions accordingly. MoQA builds upon Natural Questions with multiple types of questions and additional crowdsourcing efforts to ensure high query quality. We adapt state-of-the-art models, and reveal unique findings in multi-type open-domain QA: (1) For retriever-reader models, training one retriever on all types achieves the overall best performance, but it is challenging to train one reader model to output answers of different formats, or to train a question classifier to distinguish between types; (2) An end-to-end closed-book QA model trained on multiple types struggles with the task across the board; (3) State-of-the-art large language models such as the largest GPT-3 models (Brown et al., 2020; Ouyang et al., 2022) also lag behind open-book QA models. Our benchmark and analysis call for more effort into building versatile open-domain QA models in the future.
+ 2023.dialdoc-1.2
+ yen-etal-2023-moqa
+
+
+ Exploration of multilingual prompts in document-grounded dialogue
+ XiaochengZhang
+ HuangQing
+ FuLin
+ 30-35
+ Transferring DGD models from high-resource languages to low-resource languages is a meaningful but challenging task. Being able to provide multilingual responses to multilingual documents further complicates the task. This paper describes our method at the DialDoc23 Shared Task (Document-Grounded Dialogue and Conversational Question Answering) for generating responses based on the most relevant retrieved passage. We divide it into three steps of retrieval, re-ranking and generation. Our methods include negative sample augmentation, prompt learning, pseudo-labeling and ensemble. On the submission page, we rank 2nd based on the sum of token-level F1, SacreBleu and Rouge-L scores used for the final evaluation, and get a total score of 210.25.
+ 2023.dialdoc-1.3
+ zhang-etal-2023-exploration
+
+
+ Position Matters! Empirical Study of Order Effect in Knowledge-grounded Dialogue
+ HsuanSu
+ ShachiH. KumarIntel Labs
+ SahisnuMazumderIntel Labs, USA
+ WendaChen
+ RameshManuvinakurike
+ EdaOkurIntel Labs
+ SauravSahayIntel
+ LamaNachman
+ Shang-TseChenNational Taiwan University
+ Hung-yiLeeNational Taiwan University
+ 36-43
+ With the power of large pretrained language models, various research works have integrated knowledge into dialogue systems. The traditional techniques treat knowledge as part of the input sequence for the dialogue system, prepending a set of knowledge statements in front of dialogue history. However, such a mechanism forces knowledge sets to be concatenated in an ordered manner, making models implicitly pay imbalanced attention to the sets during training. In this paper, we first investigate how the order of the knowledge set can influence autoregressive dialogue systems’ responses. We conduct experiments on two commonly used dialogue datasets with two types of transformer-based models and find that models view the input knowledge unequally. To this end, we propose a simple and novel technique to alleviate the order effect by modifying the position embeddings of knowledge input in these models. With the proposed position embedding method, the experimental results show that each knowledge statement is uniformly considered to generate responses.
+ 2023.dialdoc-1.4
+ su-etal-2023-position
+
+
+ Enhancing Multilingual Document-Grounded Dialogue Using Cascaded Prompt-Based Post-Training Models
+ JunLiu
+ ShuangCheng
+ ZinengZhou
+ YangGu, Chinese Academy of Sciences
+ JianYe
+ HaiyongLuo
+ 44-51
+ The Dialdoc23 shared task presents a Multilingual Document-Grounded Dialogue Systems (MDGDS) challenge, where system responses are generated in multiple languages using the user’s queries, historical dialogue records and relevant passages. A major challenge for this task is the limited training data available in low-resource languages such as French and Vietnamese. In this paper, we propose Cascaded Prompt-based Post-training Models, dividing the task into three subtasks: Retrieval, Reranking and Generation. We conduct post-training on high-resource languages such as English and Chinese to enhance the performance of low-resource languages by exploiting the similarities between languages. Additionally, we utilize the prompt method to activate the model’s ability on diverse languages within the dialogue domain and explore which prompts work well. Our comprehensive experiments demonstrate the effectiveness of our proposed methods, which achieved first place on the leaderboard with a total score of 215.40 in token-level F1, SacreBleu, and Rouge-L metrics.
+ 2023.dialdoc-1.5
+ liu-etal-2023-enhancing-multilingual
+
+
+ Enhanced Training Methods for Multiple Languages
+ HaiLi
+ YangLiShanghai Jiaotong University
+ 52-56
+ Document-grounded dialogue generation in a multilingual setting is a challenging and realistic task. Unlike previous tasks, it needs to leverage multiple high-resource languages to facilitate low-resource languages. This paper summarizes our research based on a three-stage pipeline that includes retrieval, re-ranking and generation, where each component is individually optimized. For different languages with limited-data scenarios, we mainly improve the robustness of the pipeline through data augmentation and embedding perturbation, designing three training methods to improve performance: cross-language enhancement training, weighted training with neighborhood distribution augmentation, and ensemble adversarial training, all of which can be used as plug-and-play modules. Through experiments with different settings, it has been shown that our methods can effectively improve the generalization performance of the pipeline, with our score ranking 6th among the public submissions on the leaderboards.
+ 2023.dialdoc-1.6
+ li-li-2023-enhanced
+
+
+ SLDT: Sequential Latent Document Transformer for Multilingual Document-based Dialogue
+ ZhanyuMa
+ ZemingLiu
+ JianYe
+ 57-67
+ In multilingual document-grounded dialogue, the system is required to generate responses based on both the multilingual conversation context and external knowledge sources. Traditional pipeline methods for knowledge identification and response generation, while effective in certain scenarios, suffer from error propagation issues and fail to capture the interdependence between these two sub-tasks. To overcome these challenges, we propose the application of the SLDT method, which treats passage-knowledge selection as a sequential decision process rather than a single-step decision process. We achieved 3rd place in DialDoc 2023 and we also validated the effectiveness of our method on other datasets. The ablation experiment also shows that our method significantly improves the basic model compared to other methods.
+ 2023.dialdoc-1.7
+ ma-etal-2023-sldt
+
+
+ A Dialogue System for Assessing Activities of Daily Living: Improving Consistency with Grounded Knowledge
+ ZhechengSheng
+ RaymondFinzelUniversity of Minnesota - Twin Cities
+ MichaelLucke
+ SheenaDufresne
+ MariaGiniUniversity of Minnesota - Twin Cities
+ SergueiPakhomovUniversity of Minnesota - Twin Cities
+ 68-79
+ In healthcare, the ability to care for oneself is reflected in the “Activities of Daily Living (ADL),” which serve as a measure of functional ability (functioning). A lack of functioning may lead to poor living conditions requiring personal care and assistance. To accurately identify those in need of support, assistance programs continuously evaluate participants’ functioning across various domains. However, the assessment process may encounter consistency issues when multiple assessors with varying levels of expertise are involved. Novice assessors, in particular, may lack the necessary preparation for real-world interactions with participants. To address this issue, we developed a dialogue system that simulates interactions between assessors and individuals of varying functioning in a natural and reproducible way. The dialogue system consists of two major modules, one for natural language understanding (NLU) and one for natural language generation (NLG), respectively. In order to generate responses consistent with the underlying knowledge base, the dialogue system requires both an understanding of the user’s query and of biographical details of an individual being simulated. To fulfill this requirement, we experimented with query classification and generated responses based on those biographical details using some recently released InstructGPT-like models.
+ 2023.dialdoc-1.8
+ sheng-etal-2023-dialogue
+
+
+ C-PMI: Conditional Pointwise Mutual Information for Turn-level Dialogue Evaluation
+ LiliangRen
+ MankeeratSidhu
+ QiZeng
+ RevanthGangi Reddy
+ HengJi
+ ChengXiangZhai
+ 80-85
+ Existing reference-free turn-level evaluation metrics for chatbots inadequately capture the interaction between the user and the system. Consequently, they often correlate poorly with human evaluations. To address this issue, we propose a novel model-agnostic approach that leverages Conditional Pointwise Mutual Information (C-PMI) to measure the turn-level interaction between the system and the user based on a given evaluation dimension. Experimental results on the widely used FED dialogue evaluation dataset demonstrate that our approach significantly improves the correlation with human judgment compared with existing evaluation systems. By replacing the negative log-likelihood-based scorer with our proposed C-PMI scorer, we achieve a relative 60.5% higher Spearman correlation on average for the FED evaluation metric. Our code is publicly available at https://github.com/renll/C-PMI.
+ 2023.dialdoc-1.9
+ ren-etal-2023-c
+
+
+ ConvRGX: Recognition, Generation, and Extraction for Self-trained Conversational Question Answering
+ TianhuaZhang
+ LipingTang
+ WeiFangMassachusetts Institute of Technology
+ HongyinLuoMassachusetts Institute of Technology
+ XixinWuThe Chinese University of Hong Kong
+ HelenMeng
+ JamesGlass
+ 86-100
+ Collecting and constructing human-annotated corpora for training conversational question-answering (CQA) models has recently been shown to be inefficient and costly. To solve this problem, previous works have proposed training QA models with automatically generated QA data. In this work, we extend earlier studies on QA synthesis, and propose an efficient QA data generation algorithm under conversational settings. Our model recognizes potential dialogue topics, generates corresponding questions, and extracts answers from grounding passages. To improve the quality of generated QAs and downstream self-training of CQA models, we propose dropout and agreement-based QA selection methods. We conduct experiments on both data augmentation and domain adaptation settings. Experiments on the QuAC and Doc2Dial tasks show that the proposed method can significantly improve the quality of generated QA data, and also improves the accuracy of self-trained CQA models based on the constructed training corpora.
+ 2023.dialdoc-1.10
+ zhang-etal-2023-convrgx
+
+
+ Language-Agnostic Transformers and Assessing ChatGPT-Based Query Rewriting for Multilingual Document-Grounded QA
+ SrinivasGowriraj
+ Soham DineshTiwari
+ MitaliPotnis
+ SrijanBansal
+ TerukoMitamuraCarnegie Mellon University
+ EricNybergCarnegie Mellon University
+ 101-108
+ The DialDoc 2023 shared task has expanded the document-grounded dialogue task to encompass multiple languages, despite having limited annotated data. This paper assesses the effectiveness of both language-agnostic and language-aware paradigms for multilingual pre-trained transformer models in a bi-encoder-based dense passage retriever (DPR), concluding that the language-agnostic approach is superior. Additionally, the study investigates the impact of query rewriting techniques using large language models, such as ChatGPT, on multilingual, document-grounded question-answering systems. The experiments conducted demonstrate that, for the examples examined, query rewriting does not enhance performance compared to the original queries. This failure is due to topic switching in final dialogue turns and irrelevant topics being considered for query rewriting.
+ 2023.dialdoc-1.11
+ gowriraj-etal-2023-language
+
+
+ Follow the Knowledge: Structural Biases and Artefacts in Knowledge Grounded Dialog Datasets
+ EhsanLotfiUniversiteit Antwerpen
+ MaximeDe BruynAntwerp University
+ JeskaBuhmann
+ WalterDaelemansUniversity of Antwerp
+ 109-121
+ Crowd-sourcing has been one of the primary ways to curate conversational data, especially for certain scenarios like grounding in knowledge. In this setting, using online platforms like AMT, non-expert participants are hired to converse with each other, following instructions which try to guide the outcome towards the desired format. The resulting data is then used for different parts of dialog modelling like knowledge selection and response selection/generation. In this work, we take a closer look into two of the most popular knowledge grounded dialog (KGD) datasets. Investigating potential biases and artefacts in knowledge selection labels, we observe that in many cases the ‘knowledge selection flow’ simply follows the order of presented knowledge pieces. In Wizard of Wikipedia (the most popular KGD dataset) we use simple content-agnostic models based on this bias to get significant knowledge selection performance. In Topical-Chat we see a similar correlation between the knowledge selection sequence and the order of entities and their segments, as provided to crowd-source workers. We believe that the observed results question the significance and origin of the presumed dialog-level attributes like ‘knowledge flow’ in these crowd-sourced datasets.
+ 2023.dialdoc-1.12
+ lotfi-etal-2023-follow
+
+
+
diff --git a/data/xml/2023.iwslt.xml b/data/xml/2023.iwslt.xml
index 12ebc88e65..6ceb0bfaed 100644
--- a/data/xml/2023.iwslt.xml
+++ b/data/xml/2023.iwslt.xml
@@ -1,6 +1,6 @@
-
+
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)
ElizabethSalesky
@@ -19,66 +19,66 @@
FINDINGS OF THE IWSLT 2023 EVALUATION CAMPAIGN
- SwetaAgrawalUmd
- AntoniosAnastasopoulosGmu
- LuisaBentivogliFbk
+ SwetaAgrawalUMD
+ AntoniosAnastasopoulosGMU
+ LuisaBentivogliFBK
OndřejBojarCharles U.
ClaudiaBorgU. Malta
- MarineCarpuatUmd
- RoldanoCattoniFbk
- MauroCettoloFbk
+ MarineCarpuatUMD
+ RoldanoCattoniFBK
+ MauroCettoloFBK
MingdaChenMeta
- WilliamChenCmu
- KhalidChoukriElda
- AlexandraChronopoulouLmu
- AnnaCurreyAws
- ThierryDeclerckDfki
+ WilliamChenCMU
+ KhalidChoukriELDA
+ AlexandraChronopoulouLMU
+ AnnaCurreyAWS
+ ThierryDeclerckDFKI
QianqianDongBytedance
- KevinDuhJhu
+ KevinDuhJHU
YannickEstèveAvignon U.
- MarcelloFedericoAws
+ MarcelloFedericoAWS
SouhirGahbicheAirbus
BarryHaddowU. Edinburgh
- BenjaminHsuAws
- PhuMon HtutAws
+ BenjaminHsuAWS
+ PhuMon HtutAWS
HirofumiInagumaMeta
DávidJavorskýCharles U.
- JohnJudgeDcu
- YasumasaKanoNaist
+ JohnJudgeDCU
+ YasumasaKanoNAIST
TomKoBytedance
RishuKumarCharles U.
PengweiLiMeta
XutaiMaMeta
- PrashantMathurAws
+ PrashantMathurAWS
EvgenyMatusovAppTek
- PaulMcNameeJhu
+ PaulMcNameeJHU
JohnP. McCraeU. Galway
- KentonMurrayJhu
- MariaNadejdeAws
- SatoshiNakamuraNaist
- MatteoNegriFbk
+ KentonMurrayJHU
+ MariaNadejdeAWS
+ SatoshiNakamuraNAIST
+ MatteoNegriFBK
HaNguyenAvignon U.
- JanNiehuesKit
- XingNiuAws
+ JanNiehuesKIT
+ XingNiuAWS
AtulKr. OjhaU. Galway
JohnE. OrtegaNortheastern U.
ProyagPalU. Edinburgh
JuanPinoMeta
- Lonnekevan der PlasIdiap
+ Lonnekevan der PlasIDIAP
PeterPolákCharles U.
- ElijahRippethUmd
- ElizabethSaleskyJhu
- JiatongShiCmu
+ ElijahRippethUMD
+ ElizabethSaleskyJHU
+ JiatongShiCMU
MatthiasSperberApple
SebastianStükerZoom
- KatsuhitoSudohNaist
+ KatsuhitoSudohNAIST
YunTangMeta
- BrianThompsonAws
+ BrianThompsonAWS
KevinTranMeta
MarcoTurchiZoom
- AlexWaibelCmu
+ AlexWaibelCMU
MingxuanWangBytedance
- ShinjiWatanabeCmu
+ ShinjiWatanabeCMU
RodolfoZevallosU. Pompeu Fabra
1-61
This paper reports on the shared tasks organized by the 20th IWSLT Conference. The shared tasks address 9 scientific challenges in spoken language translation: simultaneous and offline translation, automatic subtitling and dubbing, speech-to-speech translation, multilingual, dialect and low-resource speech translation, and formality control. The shared tasks attracted a total of 38 submissions by 31 teams. The growing interest towards spoken language translation is also witnessed by the constantly increasing number of shared task organizers and contributors to the overview paper, almost evenly distributed across industry and academia.
@@ -95,12 +95,13 @@
62-78
We present the ACL 60/60 evaluation sets for multilingual translation of ACL 2022 technical presentations into 10 target languages. This dataset enables further research into multilingual speech translation under realistic recording conditions with unsegmented audio and domain-specific terminology, applying NLP tools to text and speech in the technical domain, and evaluating and improving model robustness to diverse speaker demographics.
2023.iwslt-1.2
+ 2023.iwslt-1.2.dataset.zip
salesky-etal-2023-evaluating
The MineTrans Systems for IWSLT 2023 Offline Speech Translation and Speech-to-Speech Translation Tasks
YichaoDuUniversity of Science and Technology of China
- GuoZhengshengTencent
+ GuoZhengshengtencent
JinchuanTianPeking University
ZhiruiZhangTencent AI Lab
XingWangTencent
@@ -127,7 +128,7 @@
The USTC’s Dialect Speech Translation System for IWSLT 2023
PanDengUniversity of Science and Technology of China
ShihaoChenUniversity of Science and Technology of China
- WeitaiZhangUstc
+ WeitaiZhangUSTC
JieZhangUniversity of Science &Technology of China
LirongDaiUniversity of Science &Technology of China
102-112
@@ -164,7 +165,7 @@
Enhancing Video Translation Context with Object Labels
JeremyGwinnupAir Force Research Laboratory
TimAndersonAir Force Research Laboratory
- BrianOreAfrl
+ BrianOreAFRL
EricHansenAir Force Research Laboratory
KevinDuhJohns Hopkins University
130-137
@@ -218,7 +219,7 @@
MT Metrics Correlate with Human Ratings of Simultaneous Speech Translation
DominikMacháčekCharles University, MFF UFAL
OndřejBojarCharles University, MFF UFAL
- RajDabreNict
+ RajDabreNICT
169-179
There have been several meta-evaluation studies on the correlation between human ratings and offline machine translation (MT) evaluation metrics such as BLEU, chrF2, BertScore and COMET. These metrics have been used to evaluate simultaneous speech translation (SST) but their correlations with human ratings of SST, which has been recently collected as Continuous Ratings (CR), are unclear. In this paper, we leverage the evaluations of candidate systems submitted to the English-German SST task at IWSLT 2022 and conduct an extensive correlation analysis of CR and the aforementioned metrics. Our study reveals that the offline metrics are well correlated with CR and can be reliably used for evaluating machine translation in simultaneous mode, with some limitations on the test set size. We conclude that given the current quality levels of SST, these metrics can be used as proxies for CR, alleviating the need for large scale human evaluation. Additionally, we observe that correlations of the metrics with translation as a reference is significantly higher than with simultaneous interpreting, and thus we recommend the former for reliable evaluation.
2023.iwslt-1.12
@@ -270,13 +271,13 @@
Submission of USTC’s System for the IWSLT 2023 - Offline Speech Translation Track
- XinyuanZhouIflytek
+ XinyuanZhouiFLYTEK
JianweiCuiUniversity of Science and Technology of China
- ZhongyiYeIflytek
+ ZhongyiYeiFLYTEK
YichiWangUniversity of Science and Technology of China
LuzhenXuUniversity of Science and Technology of China
- HanyiZhangIflytek
- WeitaiZhangUstc
+ HanyiZhangiFLYTEK
+ WeitaiZhangUSTC
LirongDaiUniversity of Science and Technology of China
194-201
This paper describes the submissions of the research group USTC-NELSLIP to the 2023 IWSLT Offline Speech Translation competition, which involves translating spoken English into written Chinese. We utilize both cascaded models and end-to-end models for this task. To improve the performance of the cascaded models, we introduce Whisper to reduce errors in the intermediate source language text, achieving a significant improvement in ASR recognition performance. For end-to-end models, we propose Stacked Acoustic-and-Textual Encoding extension (SATE-ex), which feeds the output of the acoustic decoder into the textual decoder for information fusion and to prevent error propagation. Additionally, we improve the performance of the end-to-end system in translating speech by combining the SATE-ex model with the encoder-decoder model through ensembling.
@@ -287,7 +288,7 @@
I2R’s End-to-End Speech Translation System for IWSLT 2023 Offline Shared Task
MuhammadHuzaifahAgency for Science, Technology and Research
KyeMin TanInstitute for Infocomm Research, A*STAR
- RichengDuanAstar
+ RichengDuanASTAR
202-210
This paper describes I2R’s submission to the offline speech translation track for IWSLT 2023. We focus on an end-to-end approach for translation from English audio to German text, one of the three available language directions in this year’s edition. The I2R system leverages on pretrained models that have been exposed to large-scale audio and text data for our base model. We introduce several stages of additional pretraining followed by fine-tuning to adapt the system for the downstream speech translation task. The strategy is supplemented by other techniques such as data augmentation, domain tagging, knowledge distillation, and model ensemble, among others. We evaluate the system on several publicly available test sets for comparison.
2023.iwslt-1.16
@@ -319,7 +320,7 @@
SalimaMdhaffarLIA - University of Avignon
GaëlleLaperrièreAvignon University LIA
LucasMaisonLIA - Avignon University
- SameerKhuranaMit
+ SameerKhuranaMIT
YannickEstèveLIA - Avignon University
219-226
This paper describes the ON-TRAC consortium speech translation systems developed for IWSLT 2023 evaluation campaign. Overall, we participated in three speech translation tracks featured in the low-resource and dialect speech translation shared tasks, namely; i) spoken Tamasheq to written French, ii) spoken Pashto to written French, and iii) spoken Tunisian to written English. All our primary submissions are based on the end-to-end speech-to-text neural architecture using a pretrained SAMU-XLSR model as a speech encoder and a mbart model as a decoder. The SAMU-XLSR model is built from the XLS-R 128 in order to generate language agnostic sentence-level embeddings. This building is driven by the LaBSE model trained on multilingual text dataset. This architecture allows us to improve the input speech representations and achieve significant improvements compared to conventional end-to-end speech translation systems.
@@ -405,7 +406,7 @@
HengchaoShangHuawei Technologies Co., Ltd.
DaimengWeiHuawei Technologies Co., Ltd.
MinZhangHuawei
- ShiminTaoHuawei
+ ShiminTaoHuawei
HaoYangHuawei Co. Ltd
277-282
This paper describes our work on the IWSLT2023 Speech-to-Speech task. Our proposed cascaded system consists of an ensemble of Conformer and S2T-Transformer-based ASR models, a Transformer-based MT model, and a Diffusion-based TTS model. Our primary focus in this competition was to investigate the modeling ability of the Diffusion model for TTS tasks in high-resource scenarios and the role of TTS in the overall S2S task. To this end, we proposed DTS, an end-to-end diffusion-based TTS model that takes raw text as input and generates waveform by iteratively denoising on pure Gaussian noise. Compared to previous TTS models, the speech generated by DTS is more natural and performs better in code-switching scenarios. As the training process is end-to-end, it is relatively straightforward. Our experiments demonstrate that DTS outperforms other TTS models on the GigaS2S benchmark, and also brings positive gains for the entire S2S system.
@@ -476,15 +477,15 @@
NAIST Simultaneous Speech-to-speech Translation System for IWSLT 2023
- RyoFukudaNaist
- YutaNishikawaNaist
+ RyoFukudaNAIST
+ YutaNishikawaNAIST
YasumasaKanoNara Institute of Science and Technology
- YukaKoNaist
+ YukaKoNAIST
TomoyaYanagitaNara Institute of Science and Technology
KosukeDoiNara Institute of Science and Technology
- ManaMakinaeNaist
- SakrianiSaktiJaist/naist
- KatsuhitoSudohNaist
+ ManaMakinaeNAIST
+ SakrianiSaktiJAIST/NAIST
+ KatsuhitoSudohNAIST
SatoshiNakamuraNara Institute of Science and Technology
330-340
This paper describes NAIST’s submission to the IWSLT 2023 Simultaneous Speech Translation task: English-to-German, Japanese, Chinese speech-to-text translation and English-to-Japanese speech-to-speech translation. Our speech-to-text system uses an end-to-end multilingual speech translation model based on large-scale pre-trained speech and text models. We add Inter-connections into the model to incorporate the outputs from intermediate layers of the pre-trained speech model and augment prefix-to-prefix text data using Bilingual Prefix Alignment to enhance the simultaneity of the offline speech translation model. Our speech-to-speech system employs an incremental text-to-speech module that consists of a Japanese pronunciation estimation model, an acoustic model, and a neural vocoder.
@@ -515,11 +516,11 @@
Tagged End-to-End Simultaneous Speech Translation Training Using Simultaneous Interpretation Data
- YukaKoNaist
- RyoFukudaNaist
- YutaNishikawaNaist
+ YukaKoNAIST
+ RyoFukudaNAIST
+ YutaNishikawaNAIST
YasumasaKanoNara Institute of Science and Technology
- KatsuhitoSudohNaist
+ KatsuhitoSudohNAIST
SatoshiNakamuraNara Institute of Science and Technology
363-375
Simultaneous speech translation (SimulST) translates partial speech inputs incrementally. Although the monotonic correspondence between input and output is preferable for smaller latency, it is not the case for distant language pairs such as English and Japanese. A prospective approach to this problem is to mimic simultaneous interpretation (SI) using SI data to train a SimulST model. However, the size of such SI data is limited, so the SI data should be used together with ordinary bilingual data whose translations are given in offline. In this paper, we propose an effective way to train a SimulST model using mixed data of SI and offline. The proposed method trains a single model using the mixed data with style tags that tell the model to generate SI- or offline-style outputs. Experiment results show improvements of BLEURT in different latency ranges, and our analyses revealed the proposed model generates SI-style outputs more than the baseline.
@@ -580,7 +581,7 @@
Speech Translation with Foundation Models and Optimal Transport: UPC at IWSLT23
- IoannisTsiamasUpc
+ IoannisTsiamasUPC
GerardI. GállegoUniversitat Politècnica de Catalunya
JoseFonollosaUniversitat Politècnica de Catalunya
MartaR. Costa-jussàMeta AI
@@ -627,7 +628,7 @@
KurtMicallefUniversity of Malta
AhnafMozib SaminUniversity of Malta
AndreaDeMarcoUniversity of Malta
- Lonnekevan der PlasIdiap
+ Lonnekevan der PlasIDIAP
ClaudiaBorgUniversity of Malta
433-441
For the 2023 IWSLT Maltese Speech Translation Task, UM-DFKI jointly presents a cascade solution which achieves 0.6 BLEU. While this is the first time that a Maltese speech translation task has been released by IWSLT, this paper explores previous solutions for other speech translation tasks, focusing primarily on low-resource scenarios. Moreover, we present our method of fine-tuning XLS-R models for Maltese ASR using a collection of multi-lingual speech corpora as well as the fine-tuning of the mBART model for Maltese to English machine translation.
@@ -636,10 +637,10 @@
NVIDIA NeMo Offline Speech Translation Systems for IWSLT 2023
- OleksiiHrinchukNvidia
+ OleksiiHrinchukNVIDIA
VladimirBataevSTC-innovations Ltd
EvelinaBakhturinaNvidia
- BorisGinsburgNvidia
+ BorisGinsburgNVIDIA
442-448
This paper provides an overview of NVIDIA NeMo’s speech translation systems for the IWSLT 2023 Offline Speech Translation Task. This year, we focused on end-to-end system which capitalizes on pre-trained models and synthetic data to mitigate the problem of direct speech translation data scarcity. When trained on IWSLT 2022 constrained data, our best En->De end-to-end model achieves the average score of 31 BLEU on 7 test sets from IWSLT 2010-2020 which improves over our last year cascade (28.4) and end-to-end (25.7) submissions. When trained on IWSLT 2023 constrained data, the average score drops to 29.5 BLEU.
2023.iwslt-1.42
diff --git a/data/xml/2023.nlrse.xml b/data/xml/2023.nlrse.xml
new file mode 100644
index 0000000000..9fcd1cc7c4
--- /dev/null
+++ b/data/xml/2023.nlrse.xml
@@ -0,0 +1,149 @@
+
+
+
+
+ Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (NLRSE)
+ BhavanaDalvi Mishra
+ GregDurrett
+ PeterJansen
+ DaniloNeves Ribeiro
+ JasonWei
+ Association for Computational Linguistics
+ Toronto, Canada
+ June
+ 2023
+ 2023.nlrse-1
+ nlrse
+
+
+ 2023.nlrse-1.0
+ nlrse-2023-natural
+
+
+ Knowledge Graph-augmented Language Models for Complex Question Answering
+ PriyankaSenAmazon
+ SandeepMavadiaAmazon Alexa
+ AmirSaffariAmazon
+ 1-8
+ Large language models have shown impressive abilities to reason over input text, however, they are prone to hallucinations. On the other hand, end-to-end knowledge graph question answering (KGQA) models output responses grounded in facts, but they still struggle with complex reasoning, such as comparison or ordinal questions. In this paper, we propose a new method for complex question answering where we combine a knowledge graph retriever based on an end-to-end KGQA model with a language model that reasons over the retrieved facts to return an answer. We observe that augmenting language model prompts with retrieved KG facts improves performance over using a language model alone by an average of 83%. In particular, we see improvements on complex questions requiring count, intersection, or multi-hop reasoning operations.
+ 2023.nlrse-1.1
+ sen-etal-2023-knowledge
+
+
+ Exploring the Curious Case of Code Prompts
+ LiZhangUniversity of Pennsylvania
+ LiamDuganUniversity of Pennsylvania
+ HainiuXuUniversity of Pennsylvania
+ ChrisCallison-BurchUniversity of Pennsylvania
+ 9-17
+ Recent work has shown that prompting language models with code-like representations of natural language leads to performance improvements on structured reasoning tasks. However, such tasks comprise only a small subset of all natural language tasks. In our work, we seek to answer whether or not code-prompting is the preferred way of interacting with language models in general. We compare code and text prompts across three popular GPT models (davinci, code-davinci-002, and text-davinci-002) on a broader selection of tasks (e.g., QA, sentiment, summarization) and find that with few exceptions, code prompts do not consistently outperform text prompts. Furthermore, we show that the style of code prompt has a large effect on performance for some (but not all) tasks and that fine-tuning on text instructions leads to better relative performance of code prompts.
+ 2023.nlrse-1.2
+ zhang-etal-2023-exploring
+
+
+ A smashed glass cannot be full: Generation of Commonsense Explanations through Prompt-based Few-shot Learning
+ AndreaZaninelloFondazione Bruno Kessler
+ BernardoMagniniFBK
+ 18-29
+ We assume that providing explanations is a process to elicit implicit knowledge in human communication, and propose a general methodology to generate commonsense explanations from pairs of semantically related sentences. We take advantage of both prompting applied to large, encoder-decoder pre-trained language models, and few-shot learning techniques, such as pattern-exploiting training. Experiments run on the e-SNLI dataset show that the proposed method achieves state-of-the-art results on the explanation generation task, with a substantial reduction of labelled data. The obtained results open new perspective on a number of tasks involving the elicitation of implicit knowledge.
+ 2023.nlrse-1.3
+ zaninello-magnini-2023-smashed
+
+
+ Saliency Map Verbalization: Comparing Feature Importance Representations from Model-free and Instruction-based Methods
+ NilsFeldhusGerman Research Center for Artificial Intelligence (DFKI)
+ LeonhardHennigGerman Research Center for Artificial Intelligence (DFKI)
+ MaximilianNasertGerman Research Center for Artificial Intelligence (DFKI)
+ ChristopherEbertGerman Research Center for Artificial Intelligence (DFKI)
+ RobertSchwarzenbergGerman Research Center For Artificial Intelligence (DFKI)
+ SebastianMöllerQuality and Usability Lab, TU Berlin
+ 30-46
+ Saliency maps can explain a neural model’s predictions by identifying important input features. They are difficult to interpret for laypeople, especially for instances with many features. In order to make them more accessible, we formalize the underexplored task of translating saliency maps into natural language and compare methods that address two key challenges of this approach – what and how to verbalize. In both automatic and human evaluation setups, using token-level attributions from text classification tasks, we compare two novel methods (search-based and instruction-based verbalizations) against conventional feature importance representations (heatmap visualizations and extractive rationales), measuring simulatability, faithfulness, helpfulness and ease of understanding. Instructing GPT-3.5 to generate saliency map verbalizations yields plausible explanations which include associations, abstractive summarization and commonsense reasoning, achieving by far the highest human ratings, but they are not faithfully capturing numeric information and are inconsistent in their interpretation of the task. In comparison, our search-based, model-free verbalization approach efficiently completes templated verbalizations, is faithful by design, but falls short in helpfulness and simulatability. Our results suggest that saliency map verbalization makes feature attribution explanations more comprehensible and less cognitively challenging to humans than conventional representations.
+ 2023.nlrse-1.4
+ feldhus-etal-2023-saliency
+
+
+ Using Planning to Improve Semantic Parsing of Instructional Texts
+ VanyaCohenThe University of Texas at Austin
+ RaymondMooneyUniversity of Texas at Austin
+ 47-58
+ We develop a symbolic planning-based decoder to improve the few-shot semantic parsing of instructional texts. The system takes long-form instructional texts as input and produces sequences of actions in a formal language that enable execution of the instructions. This task poses unique challenges since input texts may contain long context dependencies and ambiguous and domain-specific language. Valid semantic parses also require sequences of steps that constitute an executable plan. We build on recent progress in semantic parsing by leveraging large language models to learn parsers from small amounts of training data. During decoding, our method employs planning methods and domain information to rank and correct candidate parses. To validate our method, we evaluate on four domains: two household instruction-following domains and two cooking recipe interpretation domains. We present results for few-shot semantic parsing using leave-one-out cross-validation. We show that utilizing planning domain information improves the quality of generated plans. Through ablations we also explore the effects of our decoder design choices.
+ 2023.nlrse-1.5
+ cohen-mooney-2023-using
+
+
+ Reasoning Circuits: Few-shot Multi-hop Question Generation with Structured Rationales
+ SaurabhKulshreshthaUniversity of Massachusetts Lowell
+ AnnaRumshiskyUniversity of Massachusetts Lowell
+ 59-77
+ Multi-hop Question Generation is the task of generating questions which require the reader to reason over and combine information spread across multiple passages employing several reasoning steps. Chain-of-thought rationale generation has been shown to improve performance on multi-step reasoning tasks and make model predictions more interpretable. However, few-shot performance gains from including rationales have been largely observed only in +100B language models, and otherwise require large-scale manual rationale annotation. In this paper, we introduce a new framework for applying chain-of-thought inspired structured rationale generation to multi-hop question generation under a very low supervision regime (8- to 128-shot). We propose to annotate a small number of examples following our proposed multi-step rationale schema, treating each reasoning step as a separate task to be performed by a generative language model. We show that our framework leads to improved control over the difficulty of the generated questions and better performance compared to baselines trained without rationales, both on automatic evaluation metrics and in human evaluation. Importantly, we show that this is achievable with a modest model size.
+ 2023.nlrse-1.6
+ kulshreshtha-rumshisky-2023-reasoning
+
+
+ Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering
+ JinheonBaekKorea Advanced Institute of Science and Technology
+ Alham FikriAjiMBZUAI
+ AmirSaffariAmazon
+ 78-106
+ Large Language Models (LLMs) are capable of performing zero-shot closed-book question answering tasks, based on their internal knowledge stored in parameters during pre-training. However, such internalized knowledge might be insufficient and incorrect, which could lead LLMs to generate factually wrong answers. Furthermore, fine-tuning LLMs to update their knowledge is expensive. To this end, we propose to augment the knowledge directly in the input of LLMs. Specifically, we first retrieve the relevant facts to the input question from the knowledge graph based on semantic similarities between the question and its associated facts. After that, we prepend the retrieved facts to the input question in the form of the prompt, which is then forwarded to LLMs to generate the answer. Our framework, Knowledge-Augmented language model PromptING (KAPING), requires no model training, thus completely zero-shot. We validate the performance of our KAPING framework on the knowledge graph question answering task, that aims to answer the user’s question based on facts over a knowledge graph, on which ours outperforms relevant zero-shot baselines by up to 48% in average, across multiple LLMs of various sizes.
+ 2023.nlrse-1.7
+ baek-etal-2023-knowledge
+
+
+ Can In-context Learners Learn a Reasoning Concept from Demonstrations?
+ MichalŠtefánikMasaryk University
+ MarekKadlčíkFaculty of Informatics, Masaryk University
+ 107-115
+ Large language models show an emergent ability to learn a new task from a small number of input-output demonstrations. However, recent work shows that in-context learners largely rely on their pre-trained knowledge, such as the sentiment of the labels, instead of finding new associations in the input. However, the commonly-used few-shot evaluation settings using a random selection of in-context demonstrations can not disentangle models’ ability to learn a new skill from demonstrations, as most of the randomly-selected demonstrations do not present relations informative for prediction beyond exposing the new task distribution. To disentangle models’ in-context learning ability independent of models’ memory, we introduce a Conceptual few-shot learning method selecting the demonstrations sharing a possibly-informative concept with the predicted sample. We extract a set of such concepts from annotated explanations and measure how much can models benefit from presenting these concepts in few-shot demonstrations. We find that smaller models are more sensitive to the presented concepts. While some of the models are able to benefit from concept-presenting demonstrations for each assessed concept, we find that none of the assessed in-context learners can benefit from all presented reasoning concepts consistently, leaving the in-context concept learning an open challenge.
+ 2023.nlrse-1.8
+ tefnik-kadlcik-2023-context
+
+
+ Effect Graph: Effect Relation Extraction for Explanation Generation
+ JonathanKobbeUniversity of Mannheim
+ IoanaHulpușData and Web Science Group, University of Mannheim
+ HeinerStuckenschmidtUniversity of Mannheim
+ 116-127
+ Argumentation is an important means of communication. For describing especially arguments about consequences, the notion of effect relations has been introduced recently. We propose a method to extract effect relations from large text resources and apply it on encyclopedic and argumentative texts. By connecting the extracted relations, we generate a knowledge graph which we call effect graph. For evaluating the effect graph, we perform crowd and expert annotations and create a novel dataset. We demonstrate a possible use case of the effect graph by proposing a method for explaining arguments from consequences.
+ 2023.nlrse-1.9
+ kobbe-etal-2023-effect
+
+
+ OPT-R: Exploring the Role of Explanations in Finetuning and Prompting for Reasoning Skills of Large Language Models
+ BadrAlkhamissiMeta AI
+ SiddharthVermaSquare
+ PingYuUniversity at Buffalo
+ ZhijingJinMax Planck Institute & ETH Zurich
+ AsliCelikyilmazFAIR @ Meta
+ MonaDiabMeta Responsible AI
+ 128-138
+ We conduct a thorough investigation into the reasoning capabilities of Large Language Models (LLMs), focusing specifically on the Open Pretrained Transformers (OPT) models as a representative of such models. Our study entails finetuning three different sizes of OPT on a carefully curated reasoning corpus, resulting in two sets of finetuned models: OPT-R, finetuned without explanations, and OPT-RE, finetuned with explanations. We then evaluate all models on 57 out-of-domain tasks drawn from the Super-NaturalInstructions benchmark, covering 26 distinct reasoning skills, utilizing three prompting techniques. Through a comprehensive grid of 27 configurations and 6,156 test evaluations, we investigate the dimensions of finetuning, prompting, and scale to understand the role of explanations on different reasoning skills. Our findings reveal that having explanations in the fewshot exemplar has no significant impact on the model’s performance when the model is finetuned, while positively affecting the non-finetuned counterpart. Moreover, we observe a slight yet consistent increase in classification accuracy as we incorporate explanations during prompting and finetuning, respectively. Finally, we offer insights on which reasoning skills benefit the most from incorporating explanations during finetuning and prompting, such as Numerical (+20.4%) and Analogical (+13.9%) reasoning, as well as skills that exhibit negligible or negative effects.
+ 2023.nlrse-1.10
+ alkhamissi-etal-2023-opt
+
+
+ Deductive Additivity for Planning of Natural Language Proofs
+ ZayneSpragueUniversity of Texas at Austin
+ KajBostromUniversity of Texas at Austin
+ SwaratChaudhuriUT Austin
+ GregDurrettUT Austin
+ 139-156
+ Current natural language systems designed for multi-step claim validation typically operate in two phases: retrieve a set of relevant premise statements using heuristics (planning), then generate novel conclusions from those statements using a large language model (deduction). The planning step often requires expensive Transformer operations and does not scale to arbitrary numbers of premise statements. In this paper, we investigate whether efficient planning heuristic is possible via embedding spaces compatible with deductive reasoning. Specifically, we evaluate whether embedding spaces exhibit a property we call deductive additivity: the sum of premise statement embeddings should be close to embeddings of conclusions based on those premises. We explore multiple sources of off-the-shelf dense embeddings in addition to fine-tuned embeddings from GPT3 and sparse embeddings from BM25. We study embedding models both intrinsically, evaluating whether the property of deductive additivity holds, and extrinsically, using them to assist planning in natural language proof generation. Lastly, we create a dataset, Single-Step Reasoning Contrast (SSRC), to further probe performance on various reasoning types. Our findings suggest that while standard embedding methods frequently embed conclusions near the sums of their premises, they fall short of being effective heuristics and lack the ability to model certain categories of reasoning.
+ 2023.nlrse-1.11
+ sprague-etal-2023-deductive
+
+
+ Synthetic Dataset for Evaluating Complex Compositional Knowledge for Natural Language Inference
+ Sushma AnandAkojuUniversity of Arizona
+ RobertVacareanuUniversity of Arizona
+ EduardoBlancoUniversity of Arizona
+ HarisRiazUniversity of Arizona
+ MihaiSurdeanuUniversity of Arizona
+ 157-168
+ We introduce a synthetic dataset called Sentences Involving Complex Compositional Knowledge (SICCK) and a novel analysis that investigates the performance of Natural Language Inference (NLI) models to understand compositionality in logic. We produce 1,304 sentence pairs by modifying 15 examples from the SICK dataset (Marelli et al., 2014). To this end, we modify the original texts using a set of phrases modifiers that correspond to universal quantifiers, existential quantifiers, negation, and other concept modifiers in Natural Logic (NL) (MacCartney, 2009). We use these phrases to modify the subject, verb, and object parts of the premise and hypothesis. Lastly, we annotate these modified texts with the corresponding entailment labels following NL rules. We conduct a preliminary verification of how well the change in the structural and semantic composition is captured by neural NLI models, in both zero-shot and fine-tuned scenarios. We found that the performance of NLI models under the zero-shot setting is poor, especially for modified sentences with negation and existential quantifiers. After fine-tuning this dataset, we observe that models continue to perform poorly over negation, existential and universal modifiers.
+ 2023.nlrse-1.12
+ akoju-etal-2023-synthetic
+
+
+
diff --git a/data/xml/2023.repl4nlp.xml b/data/xml/2023.repl4nlp.xml
new file mode 100644
index 0000000000..05fb0738ff
--- /dev/null
+++ b/data/xml/2023.repl4nlp.xml
@@ -0,0 +1,307 @@
+
+
+
+
+ Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)
+ BurcuCanUniversity of Stirling
+ MaximilianMozesUniversity College London
+ SamuelCahyawijayaHong Kong University of Science and Technology
+ NaomiSaphraNew York University
+ NoraKassnerMeta
+ ShauliRavfogelBar-Ilan University
+ AbhilashaRavichanderAllen Institute for Artificial Intelligence
+ ChenZhaoNew York University
+ IsabelleAugensteinUniversity of Copenhagen
+ AnnaRogersUniversity of Copenhagen
+ KyunghyunChoNew York University
+ EdwardGrefenstetteDeepMind
+ LenaVoitaMeta AI
+ Association for Computational Linguistics
+ Toronto, Canada
+ July
+ 2023
+ repl4nlp
+
+
+ 2023.repl4nlp-1.0
+ repl4nlp-2023-representation
+
+
+ Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems
+ AshimGupta
+ AmrithKrishnaUniversity of Cambridge
+ 1-12
+ Clean-label (CL) attack is a form of data poisoning attack where an adversary modifies only the textual input of the training data, without requiring access to the labeling function. CL attacks are relatively unexplored in NLP, as compared to label flipping (LF) attacks, where the latter additionally requires access to the labeling function as well. While CL attacks are more resilient to data sanitization and manual relabeling methods than LF attacks, they often demand as high as ten times the poisoning budget than LF attacks. In this work, we first introduce an Adversarial Clean Label attack which can adversarially perturb in-class training examples for poisoning the training set. We then show that an adversary can significantly bring down the data requirements for a CL attack, using the aforementioned approach, to as low as 20 % of the data otherwise required. We then systematically benchmark and analyze a number of defense methods, for both LF and CL attacks, some previously employed solely for LF attacks in the textual domain and others adapted from computer vision. We find that text-specific defenses greatly vary in their effectiveness depending on their properties.
+ 2023.repl4nlp-1.1
+ gupta-krishna-2023-adversarial
+
+
+ Do not Mask Randomly: Effective Domain-adaptive Pre-training by Masking In-domain Keywords
+ ShahriarGolchinUniversity of Arizona
+ MihaiSurdeanuUniversity of Arizona
+ NazgolTavabiHarvard University
+ AtaKiapourHarvard University
+ 13-21
+ We propose a novel task-agnostic in-domain pre-training method that sits between generic pre-training and fine-tuning. Our approach selectively masks in-domain keywords, i.e., words that provide a compact representation of the target domain. We identify such keywords using KeyBERT (Grootendorst, 2020). We evaluate our approach using six different settings: three datasets combined with two distinct pre-trained language models (PLMs). Our results reveal that the fine-tuned PLMs adapted using our in-domain pre-training strategy outperform PLMs that used in-domain pre-training with random masking as well as those that followed the common pre-train-then-fine-tune paradigm. Further, the overhead of identifying in-domain keywords is reasonable, e.g., 7-15% of the pre-training time (for two epochs) for BERT Large (Devlin et al., 2019).
+ 2023.repl4nlp-1.2
+ golchin-etal-2023-mask
+
+
+ Grammatical information in BERT sentence embeddings as two-dimensional arrays
+ ViviNastaseUniversity of Geneva
+ PaolaMerloUppsala University and University of Geneva, Switzerland
+ 22-39
+ Sentence embeddings induced with various transformer architectures encode much semantic and syntactic information in a distributed manner in a one-dimensional array. We investigate whether specific grammatical information can be accessed in these distributed representations. Using data from a task developed to test rule-like generalizations, our experiments on detecting subject-verb agreement yield several promising results. First, we show that while the usual sentence representations encoded as one-dimensional arrays do not easily support extraction of rule-like regularities, a two-dimensional reshaping of these vectors allows various learning architectures to access such information. Next, we show that various architectures can detect patterns in these two-dimensional reshaped sentence embeddings and successfully learn a model based on smaller amounts of simpler training data, which performs well on more complex test data. This indicates that current sentence embeddings contain information that is regularly distributed, and which can be captured when the embeddings are reshaped into higher dimensional arrays. Our results cast light on representations produced by language models and help move towards developing few-shot learning approaches.
+ 2023.repl4nlp-1.3
+ nastase-merlo-2023-grammatical
+
+
+ A Multilingual Evaluation of NER Robustness to Adversarial Inputs
+ AkshaySrinivasan
+ SowmyaVajjalaNational Research Council Canada
+ 40-53
+ Adversarial evaluations of language models typically focus on English alone. In this paper, we performed a multilingual evaluation of Named Entity Recognition (NER) in terms of its robustness to small perturbations in the input. Our results showed the NER models we explored across three languages (English, German and Hindi) are not very robust to such changes, as indicated by the fluctuations in the overall F1 score as well as in a more fine-grained evaluation. With that knowledge, we further explored whether it is possible to improve the existing NER models using a part of the generated adversarial data sets as augmented training data to train a new NER model or as fine-tuning data to adapt an existing NER model. Our results showed that both these approaches improve performance on the original as well as adversarial test sets. While there is no significant difference between the two approaches for English, re-training is significantly better than fine-tuning for German and Hindi.
+ 2023.repl4nlp-1.4
+ srinivasan-vajjala-2023-multilingual
+
+
+ Retrieval-Augmented Domain Adaptation of Language Models
+ BenfengXu
+ ChunxuZhaoBeijing Language and Culture University
+ WenbinJiang
+ PengFeiZhuBaidu
+ SongtaiDaiBaidu
+ ChaoPangBaidu
+ ZhuoSunBaidu
+ ShuohuanWang
+ YuSun
+ 54-64
+ Language models pretrained on general domain corpora usually exhibit considerable degradation when generalizing to downstream tasks of specialized domains. Existing approaches try to construct PLMs for each specific domain either from scratch or through further pretraining, which not only costs substantial resources but also fails to cover all target domains at various granularities. In this work, we propose RADA, a novel Retrieval-Augmented framework for Domain Adaptation. We first construct a textual corpus that covers the downstream task at flexible domain granularity and resource availability. We employ it as a pluggable datastore to retrieve informative background knowledge, and integrate it into the standard language model framework to augment representations. We then propose a two-level selection scheme to integrate the most relevant information while alleviating irrelevant noise. Specifically, we introduce a differentiable sampling module as well as an attention mechanism to achieve both passage-level and word-level selection. Such a retrieval-augmented framework enables domain adaptation of language models with flexible domain coverage and fine-grained domain knowledge integration. We conduct comprehensive experiments across the biomedical, science and legal domains to demonstrate the effectiveness of the overall framework and its advantage over existing solutions.
+ 2023.repl4nlp-1.5
+ xu-etal-2023-retrieval
+
+
+ Fine-grained Text Style Transfer with Diffusion-Based Language Models
+ YiweiLyu
+ TiangeLuoUniversity of Michigan - Ann Arbor
+ JiachengShi
+ ToddHollonUniversity of Michigan
+ HonglakLeeLG AI Research and University of Michigan
+ 65-74
+ Diffusion probabilistic models have shown great success in controllably generating high-quality images, and researchers have tried to bring this controllability into the text generation domain. Previous works on diffusion-based language models have shown that they can be trained without external knowledge (such as pre-trained weights) and still achieve stable performance and controllability. In this paper, we trained a diffusion-based model on the StylePTB dataset, the standard benchmark for fine-grained text style transfer. The tasks in StylePTB require much more refined control over the output text than the tasks evaluated in previous works, and our model achieved state-of-the-art performance on StylePTB on both individual and compositional transfers. Moreover, our model, trained on limited data from StylePTB without external knowledge, outperforms previous works that utilized pretrained weights, embeddings, and external grammar parsers, which may indicate that diffusion-based language models have great potential in low-resource settings.
+ 2023.repl4nlp-1.6
+ lyu-etal-2023-fine
+
+
+ Enhancing text comprehension for Question Answering with Contrastive Learning
+ SeungyeonLeeKyungpook National University
+ MinhoLeeKyungpook National University
+ 75-86
+ Although Question Answering (QA) models have advanced to human-level language skills in NLP tasks, there is still a problem: QA models get confused when there are similar sentences or paragraphs. Existing studies focus on enhancing the text understanding of the candidate answers to improve the overall performance of QA models. However, since these methods focus on re-ranking queries or candidate answers, they fail to resolve the confusion when many generated answers are similar to the expected answer. To address these issues, we propose a novel contrastive learning framework called ContrastiveQA that alleviates the confusion problem in answer extraction. We propose a supervised method in which we generate positive and negative samples from the candidate answers and the given answer, respectively. We thus introduce ContrastiveQA, which uses contrastive learning with sampled data to reduce incorrect answers. Experimental results on four QA benchmarks show the effectiveness of the proposed method.
+ 2023.repl4nlp-1.7
+ lee-lee-2023-enhancing
+
+
+ Towards Flow Graph Prediction of Open-Domain Procedural Texts
+ KeisukeShirai
+ HirotakaKamekoBaidu
+ ShinsukeMoriKyoto University
+ 87-96
+ Machine comprehension of procedural texts is essential for reasoning about the steps and automating the procedures. However, this requires identifying entities within a text and resolving the relationships between the entities. Previous work focused on the cooking domain and proposed a framework to convert a recipe text into a flow graph (FG) representation. In this work, we propose a framework based on the recipe FG for flow graph prediction of open-domain procedural texts. To investigate flow graph prediction performance in non-cooking domains, we introduce the wikiHow-FG corpus from articles on wikiHow, a website of how-to instruction articles. In experiments, we consider using the existing recipe corpus and performing domain adaptation from the cooking to the target domain. Experimental results show that the domain adaptation models achieve higher performance than those trained only on the cooking or target domain data.
+ 2023.repl4nlp-1.8
+ shirai-etal-2023-towards
+
+
+ One does not fit all! On the Complementarity of Vision Encoders for Vision and Language Tasks
+ GregorGeigleBayerische Julius-Maximilians-Universität Würzburg
+ ChenLiu
+ JonasPfeifferGoogle
+ IrynaGurevychTU Darmstadt
+ 97-117
+ Current multimodal models, aimed at solving Vision and Language (V+L) tasks, predominantly repurpose Vision Encoders (VE) as feature extractors. While many VEs—of different architectures, trained on different data and objectives—are publicly available, they are not designed for the downstream V+L tasks. Nonetheless, most current work assumes that a single pre-trained VE can serve as a general-purpose encoder. In this work, we focus on analysis and aim to understand whether the information stored within different VEs is complementary, i.e. if providing the model with features from multiple VEs can improve the performance on a target task, and how they are combined. We exhaustively experiment with three popular VEs on six downstream V+L tasks and analyze the attention and VE-dropout patterns. Our analyses suggest that diverse VEs complement each other, resulting in improved downstream V+L task performance, where the improvements are not due to simple ensemble effects (i.e. the performance does not always improve when increasing the number of encoders). We demonstrate that future VEs, which are not repurposed, but explicitly designed for V+L tasks, have the potential of improving performance on the target V+L tasks.
+ 2023.repl4nlp-1.9
+ geigle-etal-2023-one
+
+
+ SPC: Soft Prompt Construction for Cross Domain Generalization
+ WenboZhaoAmazon
+ ArpitGuptaAmazon
+ TagyoungChungAmazon
+ JingHuangAmazon Alexa AI
+ 118-130
+ Recent advances in prompt tuning have proven effective as a new language modeling paradigm for various natural language understanding tasks. However, it is challenging to adapt the soft prompt embeddings to different domains or generalize to low-data settings when learning soft prompts itself is unstable, task-specific, and bias-prone. This paper proposes a principled learning framework—soft prompt construction (SPC)—to facilitate learning domain-adaptable soft prompts. Derived from the SPC framework is a simple loss that can plug into various models and tuning approaches to improve their cross-domain performance. We show SPC can improve upon SOTA for contextual query rewriting, summarization, and paraphrase detection by up to 5%, 19%, and 16%, respectively.
+ 2023.repl4nlp-1.10
+ zhao-etal-2023-spc
+
+
+ Friendly Neighbors: Contextualized Sequence-to-Sequence Link Prediction
+ AdrianKochsiekUniversität Mannheim
+ ApoorvSaxenaIndian Institute of Science, Bangalore
+ InderjeetNairAdobe Systems
+ RainerGemullaUniversität Mannheim, Germany
+ 131-138
+ We propose KGT5-context, a simple sequence-to-sequence model for link prediction (LP) in knowledge graphs (KG). Our work expands on KGT5, a recent LP model that exploits textual features of the KG, has small model size, and is scalable. To reach good predictive performance, however, KGT5 relies on an ensemble with a knowledge graph embedding model, which itself is excessively large and costly to use. In this short paper, we show empirically that adding contextual information — i.e., information about the direct neighborhood of the query entity — alleviates the need for a separate KGE model to obtain good performance. The resulting KGT5-context model is simple, reduces model size significantly, and obtains state-of-the-art performance in our experimental study.
+ 2023.repl4nlp-1.11
+ kochsiek-etal-2023-friendly
+
+
+ Extracting Multi-valued Relations from Language Models
+ SnehaSinghaniaSaarland Informatics Campus, Max-Planck Institute for Informatics
+ SimonRazniewskiSaarland Informatics Campus, Max-Planck Institute
+ GerhardWeikumMax Planck Institute and Max-Planck Institute for Informatics
+ 139-154
+ The widespread usage of latent language representations via pre-trained language models (LMs) suggests that they are a promising source of structured knowledge. However, existing methods focus only on a single object per subject-relation pair, even though often multiple objects are correct. To overcome this limitation, we analyze these representations for their potential to yield materialized multi-object relational knowledge. We formulate the problem as a rank-then-select task. For ranking candidate objects, we evaluate existing prompting techniques and propose new ones incorporating domain knowledge. Among the selection methods, we find that choosing objects with a likelihood above a learned relation-specific threshold gives a 49.5% F1 score. Our results highlight the difficulty of employing LMs for the multi-valued slot-filling task, and pave the way for further research on extracting relational knowledge from latent language representations.
+ 2023.repl4nlp-1.12
+ singhania-etal-2023-extracting
+
+
+ Hierarchical Multi-Instance Multi-Label Learning for Detecting Propaganda Techniques
+ AnniChen
+ BhuwanDhingra
+ 155-163
+ Since the introduction of the SemEval 2020 Task 11 (CITATION), several approaches have been proposed in the literature for classifying propaganda based on the rhetorical techniques used to influence readers. These methods, however, classify one span at a time, ignoring dependencies from the labels of other spans within the same context. In this paper, we approach propaganda technique classification as a Multi-Instance Multi-Label (MIML) learning problem (CITATION) and propose a simple RoBERTa-based model (CITATION) for classifying all spans in an article simultaneously. Further, we note that, due to the annotation process where annotators classified the spans by following a decision tree, there is an inherent hierarchical relationship among the different techniques, which existing approaches ignore. We incorporate these hierarchical label dependencies by adding an auxiliary classifier for each node in the decision tree to the training objective and ensembling the predictions from the original and auxiliary classifiers at test time. Overall, our model leads to an absolute improvement of 2.47% micro-F1 over the model from the shared task winning team in a cross-validation setup and is the best performing non-ensemble model on the shared task leaderboard.
+ 2023.repl4nlp-1.13
+ chen-dhingra-2023-hierarchical
+
+
+ Contrastive Loss is All You Need to Recover Analogies as Parallel Lines
+ NarutatsuRi
+ Fei-TzinLeeColumbia University
+ NakulVermaColumbia University
+ 164-173
+ While static word embedding models are known to represent linguistic analogies as parallel lines in high-dimensional space, the underlying mechanism as to why they result in such geometric structures remains obscure. We find that an elementary contrastive-style method employed over distributional information performs competitively with popular word embedding models on analogy recovery tasks, while achieving dramatic speedups in training time. Further, we demonstrate that a contrastive loss is sufficient to create these parallel structures in word embeddings, and establish a precise relationship between the co-occurrence statistics and the geometric structure of the resulting word embeddings.
+ 2023.repl4nlp-1.14
+ ri-etal-2023-contrastive
+
+
+ Syntax-Aware Graph-to-Graph Transformer for Semantic Role Labelling
+ AlirezaMohammadshahi
+ JamesHendersonIdiap Research Institute
+ 174-186
+ Recent models have shown that incorporating syntactic knowledge into the semantic role labelling (SRL) task leads to significant improvements. In this paper, we propose the Syntax-aware Graph-to-Graph Transformer (SynG2G-Tr) model, which encodes syntactic structure by inputting graph relations as embeddings directly into the self-attention mechanism of the Transformer. This approach adds a soft bias towards attention patterns that follow the syntactic structure, but also allows the model to use this information to learn alternative patterns. We evaluate our model on both span-based and dependency-based SRL datasets, and outperform previous alternative methods in both in-domain and out-of-domain settings, on the CoNLL 2005 and CoNLL 2009 datasets.
+ 2023.repl4nlp-1.15
+ mohammadshahi-henderson-2023-syntax
+
+
+ Improving Zero-shot Relation Classification via Automatically-acquired Entailment Templates
+ MahdiRahimiComputer Science Department, University of Arizona
+ MihaiSurdeanuUniversity of Arizona
+ 187-195
+ While fully supervised relation classification (RC) models perform well on large-scale datasets, their performance drops drastically in low-resource settings. As generating annotated examples is expensive, recent zero-shot methods have been proposed that reformulate RC into other NLP tasks for which supervision exists, such as textual entailment. However, these methods rely on manually created templates, which is costly and requires domain expertise. In this paper, we present a novel strategy for template generation for relation classification, which is based on adapting Harris’ distributional similarity principle to templates encoded using contextualized representations. Further, we perform an empirical evaluation of different strategies for combining the automatically acquired templates with manual templates. The experimental results on TACRED show that our approach not only performs better than zero-shot RC methods that use only manual templates, but also achieves state-of-the-art performance for zero-shot TACRED with a 64.3 F1 score.
+ 2023.repl4nlp-1.16
+ rahimi-surdeanu-2023-improving
+
+
+ MUX-PLMs: Pre-training Language Models with Data Multiplexing
+ VishvakMurahariPrinceton University
+ AmeetDeshpande
+ CarlosJimenez
+ IzhakShafranGoogle
+ MingqiuWang
+ YuanCaoGoogle Brain
+ KarthikNarasimhanPrinceton University
+ 196-211
+ The widespread adoption of large language models such as ChatGPT and Bard has led to unprecedented demand for these technologies. The burgeoning cost of inference for ever-increasing model sizes, coupled with hardware shortages, has limited affordable access and poses a pressing need for efficiency approaches geared towards high throughput and performance. Multi-input multi-output (MIMO) algorithms such as data multiplexing offer a promising solution, with a many-fold increase in throughput by performing inference for multiple inputs at the cost of a single input. Yet these approaches are not currently performant enough to be deployed in modern systems. We change that by developing MUX-PLMs, a class of high-throughput pre-trained language models (PLMs) trained with data multiplexing that can be fine-tuned for any downstream task to yield high throughput and high performance. Our novel multiplexing and demultiplexing modules proficiently entangle and disentangle inputs, and enable high performance and high throughput competitive with vanilla PLMs, while achieving 2x/5x inference speedup with only a 1-4% drop on a broad suite of tasks.
+ 2023.repl4nlp-1.17
+ murahari-etal-2023-mux
+
+
+ Mixed Orthographic/Phonemic Language Modeling: Beyond Orthographically Restricted Transformers (BORT)
+ RobertGaleOregon Health Sciences University
+ AlexandraSalemOregon Health Sciences University
+ GerasimosFergadiotisPortland State University
+ StevenBedrickOregon Health & Science University
+ 212-225
+ Speech language pathologists rely on information spanning the layers of language, often drawing from multiple layers (e.g. phonology & semantics) at once. Recent innovations in large language models (LLMs) have been shown to build powerful representations for many complex language structures, especially syntax and semantics, unlocking the potential of large datasets through self-supervised learning techniques. However, these datasets are overwhelmingly orthographic, favoring writing systems like the English alphabet, a natural but phonetically imprecise choice. Meanwhile, LLM support for the international phonetic alphabet (IPA) ranges from poor to absent. Further, LLMs encode text at a word- or near-word level, and pre-training tasks have little to gain from phonetic/phonemic representations. In this paper, we introduce BORT, an LLM for mixed orthography/IPA meant to overcome these limitations. To this end, we extend the pre-training of an existing LLM with our own self-supervised pronunciation tasks. We then fine-tune for a clinical task that requires simultaneous phonological and semantic analysis. For an “easy” and a “hard” version of this task, we show that fine-tuning from our models is more accurate by a relative 24% and 29%, and improves character error rates by a relative 75% and 31%, respectively, compared to fine-tuning from the original model.
+ 2023.repl4nlp-1.18
+ gale-etal-2023-mixed
+
+
+ Effectiveness of Data Augmentation for Parameter Efficient Tuning with Limited Data
+ StephenObadinmaQueen’s University
+ HongyuGuo
+ XiaodanZhuQueen’s University
+ 226-237
+ Recent work has demonstrated that using parameter efficient tuning techniques such as prefix tuning (or P-tuning) on pretrained language models can yield performance that is comparable or superior to fine-tuning while dramatically reducing trainable parameters. Nevertheless, the effectiveness of such methods under the context of data augmentation, a common strategy to improve learning under low data regimes, has not been fully explored. In this paper, we examine the effectiveness of several popular task-agnostic data augmentation techniques, i.e., EDA, Back Translation, and Mixup, when using two general parameter efficient tuning methods, P-tuning v2 and LoRA, under data scarcity. We show that data augmentation can be used to boost the performance of P-tuning and LoRA models, but the effectiveness of each technique varies and certain methods can lead to a notable degradation in performance, particularly when using larger models and on harder tasks. We further analyze the sentence representations of P-tuning compared to fine-tuning to help understand the above behaviour, and reveal how P-tuning generally presents a more limited ability to separate the sentence embeddings of different classes of augmented data. In addition, it displays poorer performance on heavily altered data. However, we demonstrate that adding a simple contrastive loss function can help mitigate such issues for prefix tuning, resulting in sizable improvements to augmented-data performance.
+ 2023.repl4nlp-1.19
+ obadinma-etal-2023-effectiveness
+
+
+ Relational Sentence Embedding for Flexible Semantic Matching
+ BinWangNational University of Singapore, Singapore
+ HaizhouLiNational University of Singapore, Singapore and School of Data Science, The Chinese University of Hong Kong, Shenzhen, China and Shenzhen Research Institute of Big Data
+ 238-252
+ 2023.repl4nlp-1.20
+ wang-li-2023-relational
+
+
+ Tucker Decomposition with Frequency Attention for Temporal Knowledge Graph Completion
+ LikangXiaoSKLSDE, School of Computer Science and Engineering, Beihang University, Beijing, China and Shen Yuan Honors College, Beihang University, Beijing, China
+ RichongZhangSKLSDE, School of Computer Science and Engineering, Beihang University, Beijing, China
+ ZijieChenSchool of Electrical and Computer Engineering, University of Toronto, Toronto, Canada
+ JunfanChenSKLSDE, School of Computer Science and Engineering, Beihang University, Beijing, China
+ 253-265
+ 2023.repl4nlp-1.21
+ xiao-etal-2023-tucker-decomposition
+
+
+ CLIP-based image captioning via unsupervised cycle-consistency in the latent space
+ RomainBielawskiANITI, Université de Toulouse, France
+ RufinVanRullenCerCo, CNRS UMR5549, Toulouse
+ 266-275
+ 2023.repl4nlp-1.22
+ bielawski-vanrullen-2023-clip
+
+
+ Token-level Fitting Issues of Seq2seq Models
+ GuangshengBaoZhejiang University and School of Engineering, Westlake University
+ ZhiyangTengNanyang Technological University
+ YueZhangSchool of Engineering, Westlake University and Institute of Advanced Technology, Westlake Institute for Advanced Study
+ 276-288
+ 2023.repl4nlp-1.23
+ bao-etal-2023-token
+
+
+ Revealing the Blind Spot of Sentence Encoder Evaluation by HEROS
+ Cheng-HanChiangNational Taiwan University
+ Hung-yiLeeNational Taiwan University
+ Yung-SungChuangMassachusetts Institute of Technology
+ JamesGlassMassachusetts Institute of Technology
+ 289-302
+ 2023.repl4nlp-1.24
+ chiang-etal-2023-revealing
+
+
+ One-Shot Exemplification Modeling via Latent Sense Representations
+ JohnHarvillUniversity of Illinois Urbana-Champaign
+ MarkHasegawa-JohnsonUniversity of Illinois Urbana-Champaign
+ Hee SukYoonKorea Advanced Institute of Science and Technology
+ Chang D.YooKorea Advanced Institute of Science and Technology
+ EunseopYoonKorea Advanced Institute of Science and Technology
+ 303-314
+ 2023.repl4nlp-1.25
+ harvill-etal-2023-one
+
+
+ Sen2Pro: A Probabilistic Perspective to Sentence Embedding from Pre-trained Language Model
+ LingfengShenTencent AI Lab
+ HaiyunJiangTencent AI Lab
+ LemaoLiuTencent AI Lab
+ ShumingShiTencent AI Lab
+ 315-333
+ 2023.repl4nlp-1.26
+ shen-etal-2023-sen2pro
+
+
+ Visual Coherence Loss for Coherent and Visually Grounded Story Generation
+ XudongHongMPI Informatics and Saarland University and Saarland Informatics Campus
+ VeraDembergSaarland University and Saarland Informatics Campus
+ AsadSayeedUniversity of Gothenburg
+ QiankunZhengSaarland University and Saarland Informatics Campus
+ BerntSchieleMPI Informatics and Saarland Informatics Campus
+ 334-346
+ 2023.repl4nlp-1.27
+ hong-etal-2023-visual-coherence
+
+
+
diff --git a/data/xml/2023.semeval.xml b/data/xml/2023.semeval.xml
index ddb13649de..2993e04b5a 100644
--- a/data/xml/2023.semeval.xml
+++ b/data/xml/2023.semeval.xml
@@ -15,6 +15,10 @@
2023
semeval
+
+ 2023.semeval-1.0
+ semeval-2023-international
+
KnowComp at SemEval-2023 Task 7: Fine-tuning Pre-trained Language Models for Clinical Trial Entailment Identification
WeiqiWangHong Kong University of Science and Technology
diff --git a/data/xml/2023.sicon.xml b/data/xml/2023.sicon.xml
new file mode 100644
index 0000000000..f3dd03b204
--- /dev/null
+++ b/data/xml/2023.sicon.xml
@@ -0,0 +1,101 @@
+
+
+
+
+ Proceedings of the First Workshop on Social Influence in Conversations (SICon 2023)
+ KushalChawla
+ WeiyanShi
+ Association for Computational Linguistics
+ Toronto, Canada
+ July
+ 2023
+ 2023.sicon-1
+ sicon
+
+
+ 2023.sicon-1.0
+ sicon-2023-social
+
+
+ Eliciting Rich Positive Emotions in Dialogue Generation
+ ZiweiGongColumbia University
+ QingkaiMin
+ YueZhangWestlake University
+ 1-8
+ Positive emotion elicitation aims at evoking positive emotional states in human users in open-domain dialogue generation. However, most work focuses on inducing a single dimension of positive sentiment using human-annotated datasets, which limits the scale of the training data. In this paper, we propose to model various emotions in large unannotated conversations, such as joy, trust and anticipation, by leveraging a latent variable to control the emotional intention of the response. Our proposed emotion-eliciting-Conditional-Variational-AutoEncoder (EE-CVAE) model generates more diverse and emotionally intelligent responses than single-dimension baseline models in human evaluation.
+ 2023.sicon-1.1
+ gong-etal-2023-eliciting
+
+
+ Detoxifying Online Discourse: A Guided Response Generation Approach for Reducing Toxicity in User-Generated Text
+ RitwikBoseKnox College
+ IanPereraThe Institute for Human & Machine Cognition
+ BonnieDorrUniversity of Florida
+ 9-14
+ The expression of opinions, stances, and moral foundations on social media often coincide with toxic, divisive, or inflammatory language that can make constructive discourse across communities difficult. Natural language generation methods could provide a means to reframe or reword such expressions in a way that fosters more civil discourse, yet current Large Language Model (LLM) methods tend towards language that is too generic or formal to seem authentic for social media discussions. We present preliminary work on training LLMs to maintain authenticity while presenting a community’s ideas and values in a constructive, non-toxic manner.
+ 2023.sicon-1.2
+ bose-etal-2023-detoxifying
+
+
+ Large Language Models respond to Influence like Humans
+ LewisGriffinUniversity College London, University of London
+ BennettKleinbergTilburg University
+ MaximilianMozes
+ KimberlyMaiUniversity College London, University of London
+ Maria Do MarVau
+ MatthewCaldwellNA
+ AugustineMavor-Parker
+ 15-24
+ Two studies tested the hypothesis that a Large Language Model (LLM) can be used to model psychological change following exposure to influential input. The first study tested a generic mode of influence - the Illusory Truth Effect (ITE) - where earlier exposure to a statement boosts a later truthfulness test rating. Analysis of newly collected data from human and LLM-simulated subjects (1,000 of each) showed the same pattern of effects in both populations, although with greater per-statement variability for the LLM. The second study concerns a specific mode of influence – populist framing of news to increase its persuasiveness and political mobilization. Newly collected data from simulated subjects was compared to previously published data from a 15-country experiment with 7,286 human participants. Several effects from the human study were replicated by the simulated study, including ones that surprised the authors of the human study by contradicting their theoretical expectations; but some significant relationships found in human data were not present in the LLM data. Together, the two studies support the view that LLMs have the potential to act as models of the effects of influence.
+ 2023.sicon-1.3
+ griffin-etal-2023-large
+
+
+ What Makes a Good Counter-Stereotype? Evaluating Strategies for Automated Responses to Stereotypical Text
+ KathleenFraserNational Research Council Canada
+ SvetlanaKiritchenkoNational Research Council Canada
+ IsarNejadgholi
+ AnnaKerkhof
+ 25-38
+ When harmful social stereotypes are expressed on a public platform, they must be addressed in a way that educates and informs both the original poster and other readers, without causing offence or perpetuating new stereotypes. In this paper, we synthesize findings from psychology and computer science to propose a set of potential counter-stereotype strategies. We then automatically generate such counter-stereotypes using ChatGPT, and analyze their correctness and expected effectiveness at reducing stereotypical associations. We identify the strategies of denouncing stereotypes, warning of consequences, and using an empathetic tone as three promising strategies to be further tested.
+ 2023.sicon-1.4
+ fraser-etal-2023-makes
+
+
+ BCause: Reducing group bias and promoting cohesive discussion in online deliberation processes through a simple and engaging online deliberation tool
+ LucasAnastasiou
+ AnnaDe LibboNA
+ 39-49
+ Facilitating healthy online deliberation, in terms of sensemaking and collaboration among discussion participants, proves extremely challenging due to a number of known negative effects of online communication on social media platforms. We start from concerns and aspirations about the use of existing online discussion systems as distilled in previous literature, and then combine them with lessons learned on design and engineering practices from our research team to inform the design of an easy-to-use tool (BCause.app) that enables higher-quality discussions than traditional social media. We describe the design of this tool, highlighting the main interaction features that distinguish it from common social media, namely: i. the low-cost argumentation structuring of conversations with direct replies; and ii. the distinctive use of reflective feedback rather than appreciative-only feedback. We then present the results of a controlled A/B experiment in which we show that the presence of argumentative and cognitive reflective discussion elements produces better social interaction with less polarization and promotes a more cohesive discussion than common social-media-like interactions.
+ 2023.sicon-1.5
+ anastasiou-de-libbo-2023-bcause
+
+
+ Measuring Lexico-Semantic Alignment in Debates with Contextualized Word Representations
+ AinaGarí SolerTélécom-Paris
+ MatthieuLabeauTélécom ParisTech
+ ChloéClavelTélécom ParisTech and Télécom Paris
+ 50-63
+ Dialog participants sometimes align their linguistic styles, e.g., they use the same words and syntactic constructions as their interlocutors. We propose to investigate the notion of lexico-semantic alignment: to what extent do speakers convey the same meaning when they use the same words? We design measures of lexico-semantic alignment relying on contextualized word representations. We show that they reflect interesting semantic differences between the two sides of a debate and that they can assist in the task of predicting a debate’s winner.
+ 2023.sicon-1.6
+ gari-soler-etal-2023-measuring
+
+
+ Exploring Linguistic Style Matching in Online Communities: The Role of Social Context and Conversation Dynamics
+ AparnaAnanthasubramaniam
+ HongChen
+ JasonYanUniversity of Michigan - Ann Arbor
+ KenanAlkiek
+ JiaxinPeiUniversity of Michigan
+ AgrimaSethUniversity of Michigan
+ LaviniaDunagan
+ MinjeChoiUniversity of Michigan
+ BenjaminLittererNA
+ DavidJurgensUniversity of Michigan
+ 64-74
+ Linguistic style matching (LSM) in conversations can be reflective of several aspects of social influence such as power or persuasion. However, how LSM relates to the outcomes of online communication on platforms such as Reddit remains an open question. In this study, we analyze a large corpus of two-party conversation threads in Reddit where we identify all occurrences of LSM using two types of style: the use of function words and formality. Using this framework, we examine how levels of LSM differ in conversations depending on several social factors within Reddit: post and subreddit features, conversation depth, user tenure, and the controversiality of a comment. Finally, we measure the change of LSM following loss of status after community banning. Our findings reveal the interplay of LSM in Reddit conversations with several community metrics, suggesting the importance of understanding conversation engagement when studying community dynamics.
+ 2023.sicon-1.7
+ ananthasubramaniam-etal-2023-exploring
+
+
+
diff --git a/data/xml/2023.sigmorphon.xml b/data/xml/2023.sigmorphon.xml
new file mode 100644
index 0000000000..c3ceb28079
--- /dev/null
+++ b/data/xml/2023.sigmorphon.xml
@@ -0,0 +1,306 @@
+
+
+
+
+ Proceedings of the 20th SIGMORPHON workshop on Computational Research in Phonetics, Phonology, and Morphology
+ GarrettNicolai
+ EleanorChodroff
+ FredericMailhot
+ ÇağrıÇöltekin
+ Association for Computational Linguistics
+ Toronto, Canada
+ July
+ 2023
+ 2023.sigmorphon-1
+ sigmorphon
+
+
+ 2023.sigmorphon-1.0
+ sigmorphon-2023-sigmorphon
+
+
+ Translating a low-resource language using GPT-3 and a human-readable dictionary
+ MichaElsnerThe Ohio State University
+ JordanNeedleThe Ohio State University
+ 1-13
+ We investigate how well words in the polysynthetic language Inuktitut can be translated by combining dictionary definitions, without use of a neural machine translation model trained on parallel text. Such a translation system would allow natural language technology to benefit from resources designed for community use in a language revitalization or education program, rather than requiring a separate parallel corpus. We show that the text-to-text generation capabilities of GPT-3 allow it to perform this task with BLEU scores of up to 18.5. We investigate prompting GPT-3 to provide multiple translations, which can help slightly, and providing it with grammar information, which is mostly ineffective. Finally, we test GPT-3’s ability to derive morpheme definitions from whole-word translations, but find this process is prone to errors including hallucinations.
+ 2023.sigmorphon-1.2
+ elsner-needle-2023-translating
+
+
+ Evaluating Cross Lingual Transfer for Morphological Analysis: a Case Study of Indian Languages
+ SiddheshPawarGoogle
+ PushpakBhattacharyyaIndian Institute of Technology Bombay and Patna
+ ParthaTalukdarGoogle Research and IISc
+ 14-26
+ Recent advances in pretrained multilingual models such as Multilingual T5 (mT5) have facilitated cross-lingual transfer by learning shared representations across languages. Leveraging pretrained multilingual models for scaling morphology analyzers to low-resource languages is a unique opportunity that has been under-explored so far. We investigate this line of research in the context of Indian languages, focusing on two important morphological sub-tasks: root word extraction and tagging morphosyntactic descriptions (MSD), viz., gender, number, and person (GNP). We experiment with six Indian languages from two language families (Dravidian and Indo-Aryan) to train multilingual morphology analyzers for the first time for Indian languages. We demonstrate the usability of multilingual models for few-shot cross-lingual transfer through an average 7% increase in GNP tagging in a cross-lingual setting as compared to a monolingual setting through controlled experiments. We provide an overview of the state of the datasets available related to our tasks and point out a few modeling limitations due to datasets. Lastly, we analyze the cross-lingual transfer of morphological tags for verbs and nouns, which provides a proxy for the quality of representations of word markings learned by the model.
+ 2023.sigmorphon-1.3
+ pawar-etal-2023-evaluating
+
+
+ Joint Learning Model for Low-Resource Agglutinative Language Morphological Tagging
+ GulinigeerAbudouwailiSchool of Information Science and Engineering Xinjiang University
+ KahaerjiangAbiderexitiSchool of Information Science and Engineering, Xinjiang University
+ NianYiSchool of Information Science and Engineering Xinjiang University
+ AishanWumaierSchool of Science and Engineering, Xinjiang University; Xinjiang Provincial Key Laboratory of Multi-lingual Information Technology
+ 27-37
+ Due to the lack of data resources, rule-based methods or transfer learning are mainly used in the morphological tagging of low-resource languages. However, these methods require expert knowledge, ignore contextual features, and suffer from error propagation. Therefore, we propose a joint morphological tagger for low-resource agglutinative languages to alleviate the above challenges. First, we represent the contextual input with multi-dimensional features of agglutinative words. Second, joint training reduces the direct impact of part-of-speech errors on morphological features and increases the indirect influence between the two types of labels through a fusion mechanism. Finally, our model separately predicts part-of-speech and morphological features. Part-of-speech tagging is regarded as sequence tagging. When predicting morphological features, two-label adjacency graphs are dynamically reconstructed by integrating multilingual global features and monolingual local features. Then, a graph convolution network is used to learn the higher-order intersection of labels. A series of experiments show that the proposed model in this paper is superior to other comparative models.
+ 2023.sigmorphon-1.4
+ abudouwaili-etal-2023-joint
+
+
+ Revisiting and Amending Central Kurdish Data on UniMorph 4.0
+ SinaAhmadiGeorge Mason University
+ AsoMahmudiIndependent
+ 38-48
+ UniMorph, the Universal Morphology project, is a collaborative initiative to create and maintain morphological data and organize numerous related tasks for various language processing communities. The morphological data is provided by linguists for over 160 languages in the latest version of UniMorph 4.0. This paper sheds light on the Central Kurdish data on UniMorph 4.0 by analyzing the existing data, its fallacies, and systematic morphological errors. It also presents an approach to creating more reliable morphological data by considering various specific phenomena in Central Kurdish that have not been addressed previously, such as Izafe and several enclitics.
+ 2023.sigmorphon-1.5
+ ahmadi-mahmudi-2023-revisiting
+
+
+ Investigating Phoneme Similarity with Artificially Accented Speech
+ MargotMassonUniversity College Dublin
+ JulieCarson-berndsenUniversity College Dublin
+ 49-57
+ While the deep learning revolution has led to significant performance improvements in speech recognition, accented speech remains a challenge. Current approaches to this challenge typically do not seek to understand and provide explanations for the variations of accented speech, whether they stem from native regional variation or non-native error patterns. This paper seeks to address non-native speaker variations from both a knowledge-based and a data-driven perspective. We propose to approximate non-native accented-speech pronunciation patterns by the means of two approaches: based on phonetic and phonological knowledge on the one hand and inferred from a text-to-speech system on the other. Artificial speech is then generated with a range of variants which have been captured in confusion matrices representing phoneme similarities. We then show that non-native accent confusions actually propagate to the transcription from the ASR, thus suggesting that the inference of accent specific phoneme confusions is achievable from artificial speech.
+ 2023.sigmorphon-1.6
+ masson-carson-berndsen-2023-investigating
+
+
+ Generalized Glossing Guidelines: An Explicit, Human- and Machine-Readable, Item-and-Process Convention for Morphological Annotation
+ David R.MortensenLanguage Technologies Institute, Carnegie Mellon University
+ ElaGulsenCarnegie Mellon University
+ TaiqiHeCarnegie Mellon University
+ NathanielRobinsonCarnegie Mellon University
+ JonathanAmithGettysburg College
+ LindiaTjuatjaCarnegie Mellon University
+ LoriLevinCarnegie Mellon University
+ 58-67
+ Interlinear glossing provides a vital type of morphosyntactic annotation, both for linguists and language revitalists, and numerous conventions exist for representing it formally and computationally. Some of these formats are human readable; others are machine readable. Some are easy to edit with general-purpose tools. Few represent non-concatenative processes like infixation, reduplication, mutation, truncation, and tonal overwriting in a consistent and formally rigorous way (on par with affixation). We propose an annotation convention, the Generalized Glossing Guidelines (GGG), that combines all of these positive properties using an Item-and-Process (IP) framework. We describe the format, demonstrate its linguistic adequacy, and compare it with two other interlinear glossed text annotation schemes.
+ 2023.sigmorphon-1.7
+ mortensen-etal-2023-generalized
+
+
+ Jambu: A historical linguistic database for South Asian languages
+ AryamanAroraGeorgetown University
+ AdamFarrisStanford University
+ SamopriyaBasuSimon Fraser University
+ SureshKolichalaMicrosoft
+ 68-77
+ We introduce JAMBU, a cognate database of South Asian languages which unifies dozens of previous sources in a structured and accessible format. The database includes nearly 287k lemmata from 602 lects, grouped together in 23k sets of cognates. We outline the data wrangling necessary to compile the dataset and train neural models for reflex prediction on the Indo-Aryan subset of the data. We hope that JAMBU is an invaluable resource for all historical linguists and Indologists, and look towards further improvement and expansion of the database.
+ 2023.sigmorphon-1.8
+ arora-etal-2023-jambu
+
+
+ Lightweight morpheme labeling in context: Using structured linguistic representations to support linguistic analysis for the language documentation context
+ BhargavShandilyaUniversity of Colorado Boulder
+ AlexisPalmerUniversity of Colorado Boulder
+ 78-92
+ Linguistic analysis is a core task in the process of documenting, analyzing, and describing endangered and less-studied languages. In addition to providing insight into the properties of the language being studied, having tools to automatically label words in a language for grammatical category and morphological features can support a range of applications useful for language pedagogy and revitalization. At the same time, most modern NLP methods for these tasks require both large amounts of data in the language and compute costs well beyond the capacity of most research groups and language communities. In this paper, we present a gloss-to-gloss (g2g) model for linguistic analysis (specifically, morphological analysis and part-of-speech tagging) that is lightweight in terms of both data requirements and computational expense. The model is designed for the interlinear glossed text (IGT) format, in which we expect the source text of a sentence in a low-resource language, a translation of that sentence into a language of wider communication, and a detailed glossing of the morphological properties of each word in the sentence. We first produce silver standard parallel glossed data by automatically labeling the high-resource translation. The model then learns to transform source language morphological labels into output labels for the target language, mediated by a structured linguistic representation layer. We test the model on both low-resource and high-resource languages, and find that our simple CNN-based model achieves comparable performance to a state-of-the-art transformer-based model, at a fraction of the computational cost.
+ 2023.sigmorphon-1.9
+ shandilya-palmer-2023-lightweight
+
+
+ Improving Automated Prediction of English Lexical Blends Through the Use of Observable Linguistic Features
+ JaremSaundersUniversity of North Carolina at Chapel Hill
+ 93-97
+ The process of lexical blending is difficult to reliably predict. This difficulty has been shown by machine learning approaches in blend modeling, including attempts using then state-of-the-art LSTM deep neural networks trained on character embeddings, which were able to predict lexical blends given the ordered constituent words in less than half of cases, at maximum. This project introduces a novel model architecture which dramatically increases the correct prediction rates for lexical blends, using only Polynomial regression and Random Forest models. This is achieved by generating multiple possible blend candidates for each input word pairing and evaluating them based on observable linguistic features. The success of this model architecture illustrates the potential usefulness of observable linguistic features for problems that elude more advanced models which utilize only features discovered in the latent space.
+ 2023.sigmorphon-1.10
+ saunders-2023-improving
+
+
+ Colexifications for Bootstrapping Cross-lingual Datasets: The Case of Phonology, Concreteness, and Affectiveness
+ YiyiChenAalborg University
+ JohannesBjervaDepartment of Computer Science, Aalborg University
+ 98-109
+ Colexification refers to the linguistic phenomenon where a single lexical form is used to convey multiple meanings. By studying cross-lingual colexifications, researchers have gained valuable insights into fields such as psycholinguistics and cognitive sciences (Jackson et al., 2019; Xu et al., 2020; Karjus et al., 2021; Schapper and Koptjevskaja-Tamm, 2022; François, 2022). While several multilingual colexification datasets exist, there is untapped potential in using this information to bootstrap datasets across such semantic features. In this paper, we aim to demonstrate how colexifications can be leveraged to create such cross-lingual datasets. We showcase curation procedures which result in a dataset covering 142 languages across 21 language families across the world. The dataset includes ratings of concreteness and affectiveness, mapped with phonemes and phonological features. We further analyze the dataset along different dimensions to demonstrate the potential of the proposed procedures in facilitating further interdisciplinary research in psychology, cognitive science, and multilingual natural language processing (NLP). Based on initial investigations, we observe that i) colexifications that are closer in concreteness/affectiveness are more likely to colexify; ii) certain initial/last phonemes are significantly correlated with concreteness/affectiveness within language families, such as /k/ as the initial phoneme in both Turkic and Tai-Kadai correlated with concreteness, and /p/ in Dravidian and Sino-Tibetan correlated with valence; iii) the type-to-token ratio (TTR) of phonemes is positively correlated with concreteness across several language families, while the length of phoneme segments is negatively correlated with concreteness; iv) certain phonological features are negatively correlated with concreteness across languages. The dataset is made public online for further research.
+ 2023.sigmorphon-1.11
+ chen-bjerva-2023-colexifications
+
+
+ Character alignment methods for dialect-to-standard normalization
+ YvesScherrerUniversity of Helsinki
+ 110-116
+ This paper evaluates various character alignment methods on the task of sentence-level standardization of dialect transcriptions. We compare alignment methods from different scientific traditions (dialectometry, speech processing, machine translation) and apply them to Finnish, Norwegian and Swiss German dialect datasets. In the absence of gold alignments, we evaluate the methods on a set of characteristics that are deemed undesirable for the task. We find that trained alignment methods show only marginal benefits over simple Levenshtein distance. On this particular task, eflomal outperforms related methods such as GIZA++ or fast_align by a large margin.
+ 2023.sigmorphon-1.12
+ scherrer-2023-character
+
+
+ SIGMORPHON–UniMorph 2023 Shared Task 0: Typologically Diverse Morphological Inflection
+ OmerGoldmanBar-Ilan University
+ KhuyagbaatarBatsurenNational University of Mongolia
+ SalamKhalifaStony Brook University
+ AryamanAroraGeorgetown University
+ GarrettNicolaiUniversity of British Columbia
+ ReutTsarfatyBar-Ilan University
+ EkaterinaVylomovaUniversity of Melbourne
+ 117-125
+ The 2023 SIGMORPHON–UniMorph shared task on typologically diverse morphological inflection included a wide range of languages: 26 languages from 9 primary language families. The data this year was all lemma-split, to allow testing models’ generalization ability, and structured along the new hierarchical schema presented in (Batsuren et al., 2022). The systems submitted this year, 9 in number, showed ingenuity and innovativeness, including hard attention for explainability and bidirectional decoding. Special treatment was also given by many participants to the newly-introduced data in Japanese, due to the high abundance of unseen Kanji characters in its test set.
+ 2023.sigmorphon-1.13
+ goldman-etal-2023-sigmorphon
+
+
+ SIGMORPHON–UniMorph 2023 Shared Task 0, Part 2: Cognitively Plausible Morphophonological Generalization in Korean
+ CanaanBreissMassachusetts Institute of Technology
+ JinyoungJoUniversity of California, Los Angeles
+ 126-131
+ This paper summarises data collection and curation for Part 2 of the 2023 SIGMORPHON-UniMorph Shared Task 0, which focused on modeling speaker knowledge and generalization of a pair of interacting phonological processes in Korean. We briefly describe how modeling the generalization task could be of interest to researchers in both Natural Language Processing and linguistics, and then summarise the traditional description of the phonological processes that are at the center of the modeling challenge. We then describe the criteria we used to select and code cases of process application in two Korean speech corpora, which served as the primary learning data. We also report the technical details of the experiment we carried out that served as the primary test data.
+ 2023.sigmorphon-1.14
+ breiss-jo-2023-sigmorphon
+
+
+ Morphological reinflection with weighted finite-state transducers
+ AliceKwakUniversity of Arizona
+ MichaelHammondUniversity of Arizona
+ CheyenneWingUniversity of Arizona
+ 132-137
+ This paper describes the submission by the University of Arizona to the SIGMORPHON 2023 Shared Task on typologically diverse morphological (re-)inflection. In our submission, we investigate the role of frequency, length, and weighted transducers in addressing the challenge of morphological reinflection. We start with the non-neural baseline provided for the task and show how some improvement can be gained by integrating length and frequency in prefix selection. We also investigate using weighted finite-state transducers, jump-started from edit distance and directly augmented with frequency. Our specific technique is promising and quite simple, but we see only modest improvements for some languages here.
+ 2023.sigmorphon-1.15
+ kwak-etal-2023-morphological
+
+
+ Linear Discriminative Learning: a competitive non-neural baseline for morphological inflection
+ CheonkamJeongUniversity of Arizona
+ DominicSchmitzHeinrich Heine University Düsseldorf, Germany
+ AkhileshKakolu RamaraoHeinrich-Heine-Universität Düsseldorf
+ AnnaSteinHeinrich Heine Universität
+ KevinTangHeinrich-Heine-Universität Düsseldorf
+ 138-150
+ This paper presents our submission to the SIGMORPHON 2023 task 2 of Cognitively Plausible Morphophonological Generalization in Korean. We implemented both Linear Discriminative Learning and Transformer models and found that the Linear Discriminative Learning model trained on a combination of corpus and experimental data showed the best performance, with an overall accuracy of around 83%. We found that the best model must be trained on both corpus data and the experimental data of one particular participant. Our examination of speaker variability and speaker-specific information did not explain why a particular participant combined well with the corpus data. We recommend Linear Discriminative Learning models as a future non-neural baseline system, owing to their training speed, accuracy, model interpretability and cognitive plausibility. In order to improve model performance, we suggest using more data and/or performing data augmentation and incorporating speaker- and item-specific information.
+ 2023.sigmorphon-1.16
+ jeong-etal-2023-linear
+
+
+ Tü-CL at SIGMORPHON 2023: Straight-Through Gradient Estimation for Hard Attention
+ LeanderGirrbachUniversity of Tübingen
+ 151-165
+ This paper describes our systems participating in the 2023 SIGMORPHON Shared Task on Morphological Inflection and in the 2023 SIGMORPHON Shared Task on Interlinear Glossing. We propose methods to enrich predictions from neural models with discrete, i.e. interpretable, information. For morphological inflection, our models learn deterministic mappings from subsets of source lemma characters and morphological tags to individual target characters, which introduces interpretability. For interlinear glossing, our models learn a shallow morpheme segmentation in an unsupervised way jointly with predicting glossing lines. Estimated segmentation may be useful when no ground-truth segmentation is available. As both methods introduce discreteness into neural models, our technical contribution is to show that straight-through gradient estimators are effective to train hard attention models.
+ 2023.sigmorphon-1.17
+ girrbach-2023-tu
+
+
+ The BGU-MeLeL System for the SIGMORPHON 2023 Shared Task on Morphological Inflection
+ GalAstrachBen Gurion University
+ YuvalPinterBen Gurion University
+ 166-170
+ This paper presents the submission by the MeLeL team to the SIGMORPHON–UniMorph Shared Task on Typologically Diverse and Acquisition-Inspired Morphological Inflection Generation Part 3: Models of Acquisition of Inflectional Noun Morphology in Polish, Estonian, and Finnish. This task requires us to produce the word form given a lemma and a grammatical case, while trying to produce the same error-rate as in children. We approach this task with a reduced-size character-based transformer model, multilingual training and an upsampling method to introduce bias.
+ 2023.sigmorphon-1.18
+ astrach-pinter-2023-bgu
+
+
+ Tü-CL at SIGMORPHON 2023: Straight-Through Gradient Estimation for Hard Attention
+ LeanderGirrbachUniversity of Tübingen
+ 171-185
+ This paper describes our systems participating in the 2023 SIGMORPHON Shared Task on Morphological Inflection and in the 2023 SIGMORPHON Shared Task on Interlinear Glossing. We propose methods to enrich predictions from neural models with discrete, i.e. interpretable, information. For morphological inflection, our models learn deterministic mappings from subsets of source lemma characters and morphological tags to individual target characters, which introduces interpretability. For interlinear glossing, our models learn a shallow morpheme segmentation in an unsupervised way jointly with predicting glossing lines. Estimated segmentation may be useful when no ground-truth segmentation is available. As both methods introduce discreteness into neural models, our technical contribution is to show that straight-through gradient estimators are effective to train hard attention models.
+ 2023.sigmorphon-1.19
+ girrbach-2023-tu-cl
+
+
+ Findings of the SIGMORPHON 2023 Shared Task on Interlinear Glossing
+ MichaelGinnUniversity of Colorado Boulder
+ SarahMoellerUniversity of Florida
+ AlexisPalmerUniversity of Colorado Boulder
+ AnnaStaceyUniversity of British Columbia
+ GarrettNicolaiUniversity of British Columbia
+ MansHuldenUniversity of Colorado Boulder
+ MiikkaSilfverbergUniversity of British Columbia
+ 186-201
+ This paper presents the findings of the SIGMORPHON 2023 Shared Task on Interlinear Glossing. This first iteration of the shared task explores glossing of a set of six typologically diverse languages: Arapaho, Gitksan, Lezgi, Natügu, Tsez and Uspanteko. The shared task encompasses two tracks: a resource-scarce closed track and an open track, where participants are allowed to utilize external data resources. Five teams participated in the shared task. The winning team Tü-CL achieved a 23.99%-point improvement over a baseline RoBERTa system in the closed track and a 17.42%-point improvement in the open track.
+ 2023.sigmorphon-1.20
+ ginn-etal-2023-findings
+
+
+ LISN @ SIGMORPHON 2023 Shared Task on Interlinear Glossing
+ ShuOkabeLISN/CNRS, Université Paris-Saclay
+ FrançoisYvonISIR CNRS & Sorbonne Université
+ 202-208
+ This paper describes LISN’s submission to the second track (open track) of the shared task on Interlinear Glossing for SIGMORPHON 2023. Our systems are based on Lost, a variation of linear Conditional Random Fields initially developed as a probabilistic translation model and then adapted to the glossing task. This model allows us to handle one of the main challenges posed by glossing, i.e. the fact that the list of potential labels for lexical morphemes is not fixed in advance and needs to be extended dynamically when labelling units are not seen in training. In such situations, we show how to make use of candidate lexical glosses found in the translation and discuss how such extension affects the training and inference procedures. The resulting automatic glossing systems prove to yield very competitive results, especially in low-resource settings.
+ 2023.sigmorphon-1.21
+ okabe-yvon-2023-lisn
+
+
+ SigMoreFun Submission to the SIGMORPHON Shared Task on Interlinear Glossing
+ TaiqiHeCarnegie Mellon University
+ LindiaTjuatjaCarnegie Mellon University
+ NathanielRobinsonCarnegie Mellon University
+ ShinjiWatanabeCarnegie Mellon University
+ David R.MortensenLanguage Technologies Institute, Carnegie Mellon University
+ GrahamNeubigCarnegie Mellon University
+ LoriLevinCarnegie Mellon University
+ 209-216
+ In our submission to the SIGMORPHON 2023 Shared Task on interlinear glossing (IGT), we explore approaches to data augmentation and modeling across seven low-resource languages. For data augmentation, we explore two approaches: creating artificial data from the provided training data and utilizing existing IGT resources in other languages. On the modeling side, we test an enhanced version of the provided token classification baseline as well as a pretrained multilingual seq2seq model. Additionally, we apply post-correction using a dictionary for Gitksan, the language with the smallest amount of data. We find that our token classification models are the best performing, with the highest word-level accuracy for Arapaho and highest morpheme-level accuracy for Gitksan out of all submissions. We also show that data augmentation is an effective strategy, though applying artificial data pretraining has very different effects across both models tested.
+ 2023.sigmorphon-1.22
+ he-etal-2023-sigmorefun
+
+
+ An Ensembled Encoder-Decoder System for Interlinear Glossed Text
+ EdithCoatesUBC Mathematics
+ 217-221
+ This paper presents my submission to Track 1 of the 2023 SIGMORPHON shared task on interlinear glossed text (IGT). There is a wide range of techniques for building and training IGT models (see Moeller and Hulden, 2018; McMillan-Major, 2020; Zhao et al., 2020). I describe my ensembled sequence-to-sequence approach, perform experiments, and share my submission’s test-set accuracy. I also discuss future areas of research in low-resource token classification methods for IGT.
+ 2023.sigmorphon-1.23
+ coates-2023-ensembled
+
+
+ Glossy Bytes: Neural Glossing using Subword Encoding
+ ZiggyCrossUniversity of British Columbia
+ MichelleYunUniversity of British Columbia
+ AnanyaApparajuUniversity of British Columbia
+ JataMacCabeUniversity of British Columbia
+ GarrettNicolaiUniversity of British Columbia
+ MiikkaSilfverbergUniversity of British Columbia
+ 222-229
+ This paper presents several different neural subword modelling based approaches to interlinear glossing for seven under-resourced languages as a part of the 2023 SIGMORPHON shared task on interlinear glossing. We experiment with various augmentation and tokenization strategies for both the open and closed tracks of data. We found that while byte-level models may perform well for greater amounts of data, character-based approaches remain competitive in their performance in lower-resource settings.
+ 2023.sigmorphon-1.24
+ cross-etal-2023-glossy
+
+
+ The SIGMORPHON 2022 Shared Task on Cross-lingual and Low-Resource Grapheme-to-Phoneme Conversion
+ Arya D.McCarthyJohns Hopkins University
+ Jackson L.Lee
+ AlexandraDeLuciaJohns Hopkins University
+ TravisBartleyCity University of New York
+ MilindAgarwalGeorge Mason University
+ Lucas F.E.AshbyCity University of New York
+ LucaDel SignoreCity University of New York
+ CameronGibsonCity University of New York
+ ReubenRaffCity University of New York
+ WinstonWuUniversity of Michigan
+ 230-238
+ Grapheme-to-phoneme conversion is an important component in many speech technologies, but until recently there were no multilingual benchmarks for this task. The third iteration of the SIGMORPHON shared task on multilingual grapheme-to-phoneme conversion features many improvements from the previous year’s task (Ashby et al., 2021), including additional languages, three subtasks varying the amount of available resources, extensive quality assurance procedures, and automated error analyses. Three teams submitted a total of fifteen systems, at best achieving relative reductions of word error rate of 14% in the crosslingual subtask and 14% in the very-low resource subtask. The generally consistent result is that cross-lingual transfer substantially helps grapheme-to-phoneme modeling, but not to the same degree as in-language examples.
+ 2023.sigmorphon-1.27
+ mccarthy-etal-2023-sigmorphon
+
+
+ SIGMORPHON 2022 Shared Task on Grapheme-to-Phoneme Conversion Submission Description: Sequence Labelling for G2P
+ LeanderGirrbachThe University of Tübingen
+ 239-244
+ This paper describes our participation in the Third SIGMORPHON Shared Task on Grapheme-to-Phoneme Conversion (Low-Resource and Cross-Lingual) (McCarthy et al., 2022). Our models rely on different sequence labelling methods. The main model predicts multiple phonemes from each grapheme and is trained using CTC loss (Graves et al., 2006). We find that sequence labelling methods yield worse performance than the baseline when enough data is available, but can still be used when very little data is available. Furthermore, we demonstrate that alignments learned by the sequence labelling models can be easily inspected.
+ 2023.sigmorphon-1.28
+ girrbach-2023-sigmorphon
+
+
+ Low-resource grapheme-to-phoneme mapping with phonetically-conditioned transfer
+ MichaelHammondThe University of Arizona
+ 245-248
+ In this paper we explore a very simple non-neural approach to mapping orthography to phonetic transcription in a low-resource context with transfer data from a related language. We start from a baseline system and focus our efforts on data augmentation. We make three principal moves. First, we start with an HMM-based system (Novak et al., 2012). Second, we augment our basic system by recombining legal substrings in restricted fashion (Ryan and Hulden, 2020). Finally, we limit our transfer data by only using training pairs where the phonetic form shares all bigrams with the target language.
+ 2023.sigmorphon-1.29
+ hammond-2023-low
+
+
+ A future for universal grapheme-phoneme transduction modeling with neuralized finite-state transducers
+ Chu-ChengLinJohns Hopkins University
+ 249-249
+ We propose a universal grapheme-phoneme transduction model using neuralized finite-state transducers. Many computational models of grapheme-phoneme transduction nowadays are based on the (autoregressive) sequence-to-sequence string transduction paradigm. While such models have achieved state-of-the-art performance, they suffer from theoretical limitations of autoregressive models. On the other hand, neuralized finite-state transducers (NFSTs) have shown promising results on various string transduction tasks. NFSTs can be seen as a generalization of weighted finite-state transducers (WFSTs), and can be seen as pairs of a featurized finite-state machine (‘marked finite-state transducer’ or MFST in NFST terminology), and a string scoring function. Instead of taking a product of local contextual feature weights on FST arcs, NFSTs can employ arbitrary scoring functions to weight global contextual features of a string transduction, and therefore break the Markov property. Furthermore, NFSTs can be formally shown to be more expressive than (autoregressive) seq2seq models. Empirically, joint grapheme-phoneme transduction NFSTs have consistently outperformed vanilla seq2seq models on grapheme-to-phoneme and phoneme-to-grapheme transduction tasks for English. Furthermore, they provide interpretable aligned string transductions, thanks to their finite-state machine component. In this talk, we propose a multilingual extension of the joint grapheme-phoneme NFST. We achieve this goal by modeling typological and phylogenetic features of languages and scripts as optional latent variables using a finite-state machine. The result is a versatile grapheme-phoneme transduction model: in addition to standard monolingual and multilingual transduction, the proposed multilingual NFST can also be used in various controlled generation scenarios, such as phoneme-to-grapheme transduction of an unseen language-script pair. We also plan to release an NFST software package.
+ 2023.sigmorphon-1.30
+ lin-2023-future
+
+
+ Fine-tuning mSLAM for the SIGMORPHON 2022 Shared Task on Grapheme-to-Phoneme Conversion
+ DanGarretteGoogle Research
+ 250-250
+ Grapheme-to-phoneme (G2P) conversion is a task that is inherently related to both written and spoken language. Therefore, our submission to the G2P shared task builds off of mSLAM (Bapna et al., 2022), a 600M parameter encoder model pretrained simultaneously on text from 101 languages and speech from 51 languages. For fine-tuning a G2P model, we combined mSLAM’s text encoder, which uses characters as its input tokens, with an uninitialized single-layer RNN-T decoder (Graves, 2012) whose vocabulary is the set of all 381 phonemes appearing in the shared task data. We took an explicitly multilingual approach to modeling the G2P tasks, fine-tuning and evaluating a single model that covered all the languages in each task, and adding language codes as prefixes to the input strings as a means of specifying the language of each example. Our models perform well in the shared task’s “high” setting (in which they were trained on 1,000 words from each language), though they do poorly in the “low” task setting (training on only 100 words from each language). Our models also perform reasonably in the “mixed” setting (training on 100 words in the target language and 1000 words in a related language), hinting that mSLAM’s multilingual pretraining may be enabling useful cross-lingual sharing.
+ 2023.sigmorphon-1.31
+ garrette-2023-fine
+
+
+
diff --git a/data/xml/2023.sustainlp.xml b/data/xml/2023.sustainlp.xml
new file mode 100644
index 0000000000..77925ede03
--- /dev/null
+++ b/data/xml/2023.sustainlp.xml
@@ -0,0 +1,248 @@
+
+
+
+
+ Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)
+ NafiseSadat Moosavi
+ IrynaGurevych
+ YufangHou
+ GyuwanKim
+ Young JinKim
+ TalSchuster
+ AmeetaAgrawal
+ Association for Computational Linguistics
+ Toronto, Canada (Hybrid)
+ July
+ 2023
+ 2023.sustainlp-1
+ sustainlp
+
+
+ 2023.sustainlp-1.0
+ sustainlp-2023-simple
+
+
+ KwikBucks: Correlation Clustering with Cheap-Weak and Expensive-Strong Signals
+ SandeepSilwalMIT
+ SaraAhmadianGoogle Research
+ AndrewNystromGoogle AI
+ AndrewMcCallumUMass Amherst
+ DeepakRamachandranGoogle Research
+ MehranKazemiGoogle Research
+ 1-31
+ 2023.sustainlp-1.1
+ silwal-etal-2023-kwikbucks
+
+
+ Semantic-Oriented Unlabeled Priming for Large-Scale Language Models
+ YanchenLiuHarvard University
+ TimoSchickMeta AI
+ HinrichSchützeCenter for Information and Language Processing, University of Munich
+ 32-38
+ 2023.sustainlp-1.2
+ liu-etal-2023-semantic
+
+
+ oBERTa: Improving Sparse Transfer Learning via improved initialization, distillation, and pruning regimes
+ DanielCamposUniversity of Illinois Urbana Champaign
+ AlexandreMarquesNeural Magic
+ MarkKurtzNeural Magic
+ ChengXiang ZhaiUniversity of Illinois Urbana Champaign
+ 39-58
+ 2023.sustainlp-1.3
+ campos-etal-2023-oberta
+
+
+ Quick Dense Retrievers Consume KALE: Post Training Kullback-Leibler Alignment of Embeddings for Asymmetrical dual encoders
+ DanielCamposUniversity of Illinois Urbana Champaign
+ AlessandroMagnaniWalmart Labs
+ ChengxiangZhaiUniversity of Illinois Urbana Champaign
+ 59-77
+ 2023.sustainlp-1.4
+ campos-etal-2023-quick
+
+
+ Lessons on Parameter Sharing across Layers in Transformers
+ ShoTakaseLINE Corporation
+ ShunKiyonoLINE Corporation
+ 78-90
+ 2023.sustainlp-1.5
+ takase-kiyono-2023-lessons
+
+
+ To Asymmetry and Beyond: Structured Pruning of Sequence to Sequence Models for Improved Inference Efficiency
+ DanielCamposUniversity of Illinois Urbana Champaign
+ ChengxiangZhaiUniversity of Illinois Urbana Champaign
+ 91-109
+ 2023.sustainlp-1.6
+ campos-zhai-2023-asymmetry
+
+
+ Small is the New Big: Pre-finetuned compact models are better for Asynchronous Active Learning
+ DantongLiuAmazon
+ KaushikPavaniAmazon
+ SunnyDasguptaAmazon
+ 110-120
+ 2023.sustainlp-1.7
+ liu-etal-2023-small
+
+
+ ADEPT: Adapter-based Efficient Prompt Tuning Approach for Language Models
+ AdityaShahVirginia Tech
+ SurendrabikramThapaVirginia Tech
+ AneeshJainVirginia Tech
+ LifuHuangVirginia Tech
+ 121-128
+ 2023.sustainlp-1.8
+ shah-etal-2023-adept
+
+
+ NLU on Data Diets: Dynamic Data Subset Selection for NLP Classification Tasks
+ Jean-michelAttenduNuance Communications
+ Jean-philippeCorbeilNuance Communications
+ 129-146
+ 2023.sustainlp-1.9
+ attendu-corbeil-2023-nlu
+
+
+ On the Interactions of Structural Constraints and Data Resources for Structured Prediction
+ ZhisongZhangCarnegie Mellon University
+ EmmaStrubellCarnegie Mellon University
+ EduardHovyUniversity of Melbourne
+ 147-157
+ 2023.sustainlp-1.10
+ zhang-etal-2023-interactions
+
+
+ Can we Pretrain a SotA Legal Language Model on a Budget From Scratch?
+ JoelNiklausUniversity of Bern
+ DanieleGiofrèThomson Reuters
+ 158-182
+ 2023.sustainlp-1.11
+ niklaus-giofre-2023-pretrain
+
+
+ Is a Video worth n × n Images? A Highly Efficient Approach to Transformer-based Video Question Answering
+ ChenyangLyuDublin City University
+ TianboJiNantong University
+ YvetteGrahamADAPT, Trinity College Dublin
+ JenniferFosterDublin City University
+ 183-189
+ 2023.sustainlp-1.12
+ lyu-etal-2023-video
+
+
+ How to Unleash the Power of Large Language Models for Few-shot Relation Extraction?
+ XinXuZhejiang University
+ YuqiZhuZhejiang University
+ XiaohanWangZhejiang University
+ NingyuZhangZhejiang University
+ 190-200
+ 2023.sustainlp-1.13
+ xu-etal-2023-unleash
+
+
+ Prompting language models improves performance in imbalanced setting
+ JayMohtaAmazon
+ 201-211
+ 2023.sustainlp-1.14
+ mohta-2023-prompting
+
+
+ KGQA Without Retraining
+ NickMckennaUniversity of Edinburgh, School of Informatics
+ PriyankaSenAmazon
+ 212-218
+ 2023.sustainlp-1.15
+ mckenna-sen-2023-kgqa
+
+
+ MANER: Mask Augmented Named Entity Recognition for Extreme Low-Resource Languages
+ ShashankSonkarRice University
+ ZichaoWangRice University
+ RichardBaraniukRice University
+ 219-226
+ 2023.sustainlp-1.16
+ sonkar-etal-2023-maner
+
+
+ Efficient and Interpretable Compressive Text Summarisation with Unsupervised Dual-Agent Reinforcement Learning
+ PeggyTangThe University of Sydney
+ JunbinGaoThe University of Sydney
+ LeiZhangInternational Digital Economy Academy (IDEA)
+ ZhiyongWangThe University of Sydney
+ 227-238
+ 2023.sustainlp-1.17
+ tang-etal-2023-efficient
+
+
+ Exploring the Effect of Frequency Resolution in FNet
+ GregorySzumelDuke University
+ GhazalKhalighinejadDuke University
+ RickardStureborgDuke University
+ SamWisemanDuke University
+ 239-244
+ 2023.sustainlp-1.18
+ szumel-etal-2023-exploring
+
+
+ Towards Adaptable and Interactive Image Captioning with Data Augmentation and Episodic Memory
+ AlikiAnagnostopoulouCarl von Ossietzky University of Oldenburg / German Research Center for Artificial Intelligence
+ MareikeHartmannSaarland University / German Research Center for Artificial Intelligence
+ DanielSonntagCarl von Ossietzky University of Oldenburg / German Research Center for Artificial Intelligence
+ 245-256
+ 2023.sustainlp-1.19
+ anagnostopoulou-etal-2023-towards
+
+
+ Corpus Complexity Matters in Pretraining Language Models
+ AmeetaAgrawalPortland State University
+ SureshSinghPortland State University
+ 257-263
+ 2023.sustainlp-1.20
+ agrawal-singh-2023-corpus
+
+
+ PersonaPKT: Building Personalized Dialogue Agents via Parameter-efficient Knowledge Transfer
+ XuHanUniversity of Colorado Boulder
+ BinGuoAmazon.com
+ YoonJungAmazon
+ BenjaminYaoAmazon
+ YuZhangAmazon.com
+ XiaohuLiuAmazon
+ ChenleiGuoAmazon
+ 264-273
+ 2023.sustainlp-1.21
+ han-etal-2023-personapkt
+
+
+ Small Character Models Match Large Word Models for Autocomplete Under Memory Constraints
+ GaneshJawaharThe University of British Columbia
+ SubhabrataMukherjeeMicrosoft Research
+ DebadeeptaDeyMicrosoft Research
+ MuhammadAbdul-mageedThe University of British Columbia
+ LaksLakshmanan, V.S.UBC
+ CaioMendesMicrosoft
+ GustavoDe RosaMicrosoft Research
+ ShitalShahMicrosoft Research
+ 274-289
+ 2023.sustainlp-1.22
+ jawahar-etal-2023-small
+
+
+ Query Encoder Distillation via Embedding Alignment is a Strong Baseline Method to Boost Dense Retriever Online Efficiency
+ YuxuanWangUniversity of Pennsylvania
+ LyuHongUniversity of Pennsylvania
+ 290-298
+ 2023.sustainlp-1.23
+ wang-hong-2023-query
+
+
+ Minimalist Entity Disambiguation for Mid-Resource Languages
+ BennoKruitVU Amsterdam
+ 299-306
+ 2023.sustainlp-1.24
+ kruit-2023-minimalist
+
+
+
diff --git a/data/xml/2023.ws.xml b/data/xml/2023.ws.xml
index 8dda476745..74c40991ee 100644
--- a/data/xml/2023.ws.xml
+++ b/data/xml/2023.ws.xml
@@ -31,6 +31,14 @@
2023.wnu-1
2023.semeval-1
2023.woah-1
+ 2023.cawl-1
+ 2023.clinicalnlp-1
+ 2023.repl4nlp-1
+ 2023.nlrse-1
+ 2023.sustainlp-1
+ 2023.dialdoc-1
+ 2023.sicon-1
+ 2023.americasnlp-1
diff --git a/data/yaml/sigs/sigmorphon.yaml b/data/yaml/sigs/sigmorphon.yaml
index ed92704442..6792211be0 100644
--- a/data/yaml/sigs/sigmorphon.yaml
+++ b/data/yaml/sigs/sigmorphon.yaml
@@ -2,6 +2,8 @@ Name: Special Interest Group on Computational Morphology and Phonology (SIGMORPH
ShortName: SIGMORPHON
URL: https://sigmorphon.github.io/
Meetings:
+ - 2023:
+ - 2023.sigmorphon-1
- 2022:
- 2022.sigmorphon-1
- 2021:
diff --git a/data/yaml/venues/cawl.yaml b/data/yaml/venues/cawl.yaml
new file mode 100644
index 0000000000..4adcd21731
--- /dev/null
+++ b/data/yaml/venues/cawl.yaml
@@ -0,0 +1,2 @@
+acronym: CAWL
+name: Workshop on Computation and Written Language (CAWL)
diff --git a/data/yaml/venues/nlrse.yaml b/data/yaml/venues/nlrse.yaml
new file mode 100644
index 0000000000..d5b63f0d8f
--- /dev/null
+++ b/data/yaml/venues/nlrse.yaml
@@ -0,0 +1,3 @@
+acronym: NLRSE
+is_acl: true
+name: Workshop on Natural Language Reasoning and Structured Explanations
diff --git a/data/yaml/venues/sicon.yaml b/data/yaml/venues/sicon.yaml
new file mode 100644
index 0000000000..0df0de3891
--- /dev/null
+++ b/data/yaml/venues/sicon.yaml
@@ -0,0 +1,2 @@
+acronym: SICon
+name: Workshop on Social Influence in Conversations