diff --git a/data/xml/2023.bea.xml b/data/xml/2023.bea.xml
index 812037bf33..5421f090f7 100644
--- a/data/xml/2023.bea.xml
+++ b/data/xml/2023.bea.xml
@@ -699,14 +699,14 @@
 <fixed-case>RETUYT</fixed-case>-<fixed-case>I</fixed-case>n<fixed-case>C</fixed-case>o at <fixed-case>BEA</fixed-case> 2023 Shared Task: Tuning Open-Source <fixed-case>LLM</fixed-case>s for Generating Teacher Responses
- AlexisBaladnInstituto de Computacin, Facultad de Ingeniera, Universidad de la Repblica
- IgnacioSastreInstituto de Computacin, Facultad de Ingeniera, Universidad de la Repblica
- LuisChiruzzoInstituto de Computacin, Facultad de Ingeniera, Universidad de la Repblica
- AialaRosInstituto de Computacin, Facultad de Ingeniera, Universidad de la Repblica
+ AlexisBaladónInstituto de Computación, Facultad de Ingeniería, Universidad de la República
+ IgnacioSastreInstituto de Computación, Facultad de Ingeniería, Universidad de la República
+ LuisChiruzzoInstituto de Computación, Facultad de Ingeniería, Universidad de la República
+ AialaRosáInstituto de Computación, Facultad de Ingeniería, Universidad de la República
 756-765
 This paper presents the results of our participation in the BEA 2023 shared task, which focuses on generating AI teacher responses in educational dialogues. We conducted experiments using several Open-Source Large Language Models (LLMs) and explored fine-tuning techniques along with prompting strategies, including Few-Shot and Chain-of-Thought approaches. Our best model was ranked 4.5 in the competition with a BertScore F1 of 0.71 and a DialogRPT final (avg) of 0.35. Nevertheless, our internal results did not exactly correlate with those obtained in the competition, which showed the difficulty in evaluating this task. Other challenges we faced were data leakage on the train set and the irregular format of the conversations.
 2023.bea-1.61
- baladn-etal-2023-retuyt
+ baladon-etal-2023-retuyt
 Empowering Conversational Agents using Semantic In-Context Learning