From aa3fe25bda341eee04ac5d505a84a6bf8f271fa2 Mon Sep 17 00:00:00 2001
From: acl-pwc-bot <94475230+acl-pwc-bot@users.noreply.github.com>
Date: Thu, 13 Jul 2023 03:46:39 +0200
Subject: [PATCH 1/2] Update metadata from Papers with Code

---
 data/xml/2020.aacl.xml     | 2 +-
 data/xml/2020.emnlp.xml    | 3 ++-
 data/xml/2020.findings.xml | 1 +
 data/xml/2021.acl.xml      | 1 +
 data/xml/2021.findings.xml | 1 +
 data/xml/2021.naacl.xml    | 1 +
 data/xml/2022.findings.xml | 1 +
 data/xml/2022.lrec.xml     | 2 +-
 data/xml/2022.naacl.xml    | 1 +
 data/xml/2022.sdp.xml      | 1 +
 data/xml/2023.acl.xml      | 3 +++
 data/xml/D18.xml           | 1 +
 data/xml/D19.xml           | 1 +
 data/xml/N19.xml           | 2 ++
 data/xml/P19.xml           | 1 +
 data/xml/W19.xml           | 3 ++-
 16 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/data/xml/2020.aacl.xml b/data/xml/2020.aacl.xml
index 395dc1fc60..e1d3800ffe 100644
--- a/data/xml/2020.aacl.xml
+++ b/data/xml/2020.aacl.xml
@@ -1549,7 +1549,7 @@
 We introduce fairseq S2T, a fairseq extension for speech-to-text (S2T) modeling tasks such as end-to-end speech recognition and speech-to-text translation. It follows fairseq’s careful design for scalability and extensibility. We provide end-to-end workflows from data pre-processing, model training to offline (online) inference. We implement state-of-the-art RNN-based as well as Transformer-based models and open-source detailed training recipes. Fairseq’s machine translation models and language models can be seamlessly integrated into S2T workflows for multi-task learning or transfer learning. Fairseq S2T is available at https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text.
 2020.aacl-demo.6
 wang-etal-2020-fairseq
-pytorch/fairseq
+pytorch/fairseq
 LibriSpeech
 MuST-C
diff --git a/data/xml/2020.emnlp.xml b/data/xml/2020.emnlp.xml
index cdffea717b..8f1296d816 100644
--- a/data/xml/2020.emnlp.xml
+++ b/data/xml/2020.emnlp.xml
@@ -5911,6 +5911,7 @@
 10.18653/v1/2020.emnlp-main.392