Paper Metadata: 2024.acl-long.622 (#3863)
zolastro authored Sep 11, 2024
1 parent 514b730 commit b21f1f1
Showing 1 changed file with 2 additions and 2 deletions.
data/xml/2024.acl.xml — 4 changes: 2 additions & 2 deletions
@@ -8083,12 +8083,12 @@
 <title>Split and Rephrase with Large Language Models</title>
 <author><first>David</first><last>Ponce</last><affiliation>Vicomtech</affiliation></author>
 <author><first>Thierry</first><last>Etchegoyhen</last><affiliation>Vicomtech</affiliation></author>
-<author><first>Jesus Javier</first><last>Calleja Perez</last><affiliation>Universidad del País Vasco and Vicomtech</affiliation></author>
+<author><first>Jesús</first><last>Calleja</last><affiliation>Universidad del País Vasco and Vicomtech</affiliation></author>
 <author><first>Harritxu</first><last>Gete</last><affiliation>University of the Basque Country and Vicomtech Foundation</affiliation></author>
 <pages>11588-11607</pages>
 <abstract>The Split and Rephrase (SPRP) task, which consists in splitting complex sentences into a sequence of shorter grammatical sentences, while preserving the original meaning, can facilitate the processing of complex texts for humans and machines alike. It is also a valuable testbed to evaluate natural language processing models, as it requires modelling complex grammatical aspects. In this work, we evaluate large language models on the task, showing that they can provide large improvements over the state of the art on the main metrics, although still lagging in terms of splitting compliance. Results from two human evaluations further support the conclusions drawn from automated metric results. We provide a comprehensive study that includes prompting variants, domain shift, fine-tuned pretrained language models of varying parameter size and training data volumes, contrasted with both zero-shot and few-shot approaches on instruction-tuned language models. Although the latter were markedly outperformed by fine-tuned models, they may constitute a reasonable off-the-shelf alternative. Our results provide a fine-grained analysis of the potential and limitations of large language models for SPRP, with significant improvements achievable using relatively small amounts of training data and model parameters overall, and remaining limitations for all models on the task.</abstract>
 <url hash="b0e00e10">2024.acl-long.622</url>
-<bibkey>ponce-martinez-etal-2024-split</bibkey>
+<bibkey>ponce-etal-2024-split</bibkey>
 </paper>
 <paper id="623">
 <title><fixed-case>C</fixed-case>hunk<fixed-case>A</fixed-case>ttention: Efficient Self-Attention with Prefix-Aware <fixed-case>KV</fixed-case> Cache and Two-Phase Partition</title>
